
VertexAIGeminiChatGenerator

VertexAIGeminiChatGenerator enables chat completion using Google Gemini models.

Deprecation Notice

This integration uses the deprecated google-generativeai SDK, which will lose support after August 2025.

We recommend switching to the new GoogleGenAIChatGenerator integration instead.
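As a rough migration sketch, the new component exposes the same ChatMessage-based run interface, so switching mostly means changing the import and class name. This assumes the google-genai-haystack package and its GoogleGenAIChatGenerator import path:

python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.google_genai import GoogleGenAIChatGenerator

# same ChatMessage in, "replies" out, as with VertexAIGeminiChatGenerator
chat_generator = GoogleGenAIChatGenerator(model="gemini-2.0-flash")
print(chat_generator.run([ChatMessage.from_user("Hello!")])["replies"][0].text)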

- Most common position in a pipeline: After a ChatPromptBuilder
- Mandatory run variables: "messages", a list of ChatMessage objects representing the chat
- Output variables: "replies", a list of alternative replies of the model to the input chat
- API reference: Google Vertex
- GitHub link: https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/google_vertex

VertexAIGeminiChatGenerator supports the gemini-1.5-pro, gemini-1.5-flash, and gemini-2.0-flash models. Note that Google recommends upgrading from gemini-1.5-pro to gemini-2.0-flash.

For available models, see https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models.

note

To explore the full capabilities of Gemini, check out this article and the related 🧑‍🍳 Cookbook.

Parameters Overview

VertexAIGeminiChatGenerator uses Google Cloud Application Default Credentials (ADC) for authentication. For more information on how to set up ADC, see the official documentation.

Make sure the account you use has access to a project that is authorized to use Google Vertex AI endpoints.

You can find your project ID in the GCP resource manager or locally by running gcloud projects list in your terminal. For more info on the gcloud CLI, see its official documentation.
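For example, on a local machine you can typically set up ADC and look up your project ID with the gcloud CLI:

shell
# authenticate locally so ADC can pick up your user credentials
gcloud auth application-default login

# list the projects you can access to find the project ID
gcloud projects list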

Streaming

This Generator supports streaming tokens from the LLM directly in the output. To do so, pass a function to the streaming_callback init parameter.
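Here is a minimal sketch using Haystack's built-in print_streaming_chunk callback; any callable that accepts a StreamingChunk works:

python
from haystack.dataclasses import ChatMessage
from haystack.components.generators.utils import print_streaming_chunk
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator

# tokens are printed to stdout as they arrive
gemini_chat = VertexAIGeminiChatGenerator(streaming_callback=print_streaming_chunk)
gemini_chat.run([ChatMessage.from_user("Write a haiku about the sea.")])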

Usage

You need to install the google-vertex-haystack package to use the VertexAIGeminiChatGenerator:

shell
pip install google-vertex-haystack

On its own

Basic usage:

python
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator

gemini_chat = VertexAIGeminiChatGenerator()

messages = [ChatMessage.from_user("Tell me the name of a movie")]
res = gemini_chat.run(messages)

print(res["replies"][0].text)

messages += [res["replies"][0], ChatMessage.from_user("Who's the main actor?")]
res = gemini_chat.run(messages)

print(res["replies"][0].text)

When chatting with Gemini, you can also use function calling. First, define the function locally and convert it into a Tool:

python
from typing import Annotated
from haystack.tools import create_tool_from_function

# example function to get the current weather
def get_current_weather(
    location: Annotated[str, "The city for which to get the weather, e.g. 'San Francisco'"] = "Munich",
    unit: Annotated[str, "The unit for the temperature, e.g. 'celsius'"] = "celsius",
) -> str:
    return f"The weather in {location} is sunny. The temperature is 20 {unit}."

tool = create_tool_from_function(get_current_weather)

Create a new instance of VertexAIGeminiChatGenerator that is aware of the tools, and a ToolInvoker to invoke them:

python
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator
from haystack.components.tools import ToolInvoker

gemini_chat = VertexAIGeminiChatGenerator(model="gemini-2.0-flash-exp", tools=[tool])

tool_invoker = ToolInvoker(tools=[tool])

And then ask our question:

python
from haystack.dataclasses import ChatMessage

user_message = ChatMessage.from_user("What is the temperature in celsius in Berlin?")
replies = gemini_chat.run(messages=[user_message])["replies"]

print(replies[0].tool_calls)

# run the requested tool calls and append the results to the chat history
tool_messages = tool_invoker.run(messages=replies)["tool_messages"]
messages = [user_message] + replies + tool_messages

# let the model produce the final answer from the tool results
final_replies = gemini_chat.run(messages=messages)["replies"]
print(final_replies[0].text)

In a pipeline

python
from haystack.components.builders import ChatPromptBuilder
from haystack.dataclasses import ChatMessage
from haystack import Pipeline
from haystack_integrations.components.generators.google_vertex import VertexAIGeminiChatGenerator

# no parameter init, we don't use any runtime template variables
prompt_builder = ChatPromptBuilder()
gemini_chat = VertexAIGeminiChatGenerator()

pipe = Pipeline()
pipe.add_component("prompt_builder", prompt_builder)
pipe.add_component("gemini", gemini)
pipe.connect("prompt_builder.prompt", "gemini.messages")

location = "Rome"
messages = [ChatMessage.from_user("Tell me briefly about {{location}} history")]
res = pipe.run(data={"prompt_builder": {"template_variables":{"location": location}, "template": messages}})

print(res)

Additional References

🧑‍🍳 Cookbook: Function Calling and Multimodal QA with Gemini