MistralChatGenerator
This component enables chat completion using Mistral’s text generation models.
| Most common position in a pipeline | After a ChatPromptBuilder |
| Mandatory init variables | "api_key": The Mistral API key. Can be set with the MISTRAL_API_KEY env var. |
| Mandatory run variables | "messages": A list of ChatMessage objects |
| Output variables | "replies": A list of ChatMessage objects. "meta": A list of dictionaries with the metadata associated with each reply, such as token count, finish reason, and so on. |
| API reference | Mistral |
| GitHub link | https://github.com/deepset-ai/haystack-core-integrations/tree/main/integrations/mistral |
Overview
This integration supports Mistral’s models provided through the generative endpoint. For a full list of available models, check out the Mistral documentation.
MistralChatGenerator needs a Mistral API key to work. You can provide it in:
- The `api_key` init parameter, using the Secret API
- The `MISTRAL_API_KEY` environment variable (recommended)
Currently, available models are:
- `mistral-tiny` (default)
- `mistral-small`
- `mistral-medium` (soon to be deprecated)
- `mistral-large-latest`
- `codestral-latest`
This component needs a list of ChatMessage objects to operate. ChatMessage is a data class that contains a message, a role (who generated the message, such as user, assistant, system, function), and optional metadata.
Refer to the Mistral API documentation for more details on the parameters supported by the Mistral API, which you can provide with generation_kwargs when running the component.
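For instance, here is a minimal sketch of passing generation parameters at run time (`temperature` and `max_tokens` are standard Mistral API parameters; check the Mistral API documentation for the full set):

```python
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret
from haystack_integrations.components.generators.mistral import MistralChatGenerator

generator = MistralChatGenerator(api_key=Secret.from_env_var("MISTRAL_API_KEY"))

messages = [
    ChatMessage.from_system("You are a concise assistant."),
    ChatMessage.from_user("Name three Mistral models."),
]

# generation_kwargs are forwarded to the Mistral API for this call only
result = generator.run(messages, generation_kwargs={"temperature": 0.2, "max_tokens": 128})
print(result["replies"][0].text)
```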
Tool Support
MistralChatGenerator supports function calling through the tools parameter, which accepts flexible tool configurations:
- A list of Tool objects: Pass individual tools as a list
- A single Toolset: Pass an entire Toolset directly
- Mixed Tools and Toolsets: Combine multiple Toolsets with standalone tools in a single list
This allows you to organize related tools into logical groups while also including standalone tools as needed.
```python
from haystack.tools import Tool, Toolset
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# Create individual tools (each Tool also takes `parameters` and `function` arguments)
weather_tool = Tool(name="weather", description="Get weather info", ...)
news_tool = Tool(name="news", description="Get latest news", ...)

# Group related tools into a Toolset (add_tool, subtract_tool, and multiply_tool
# are Tool objects defined elsewhere)
math_toolset = Toolset([add_tool, subtract_tool, multiply_tool])

# Pass a mix of Toolset and Tool objects to the generator
generator = MistralChatGenerator(
    tools=[math_toolset, weather_tool, news_tool]
)
```
For more details on working with tools, see the Tool and Toolset documentation.
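As a minimal sketch (assuming the `generator` from the snippet above), you can check whether a reply contains tool calls and inspect them:

```python
from haystack.dataclasses import ChatMessage

result = generator.run([ChatMessage.from_user("What is 7 + 5?")])
reply = result["replies"][0]

# When the model decides to call a tool, the reply carries ToolCall objects
if reply.tool_calls:
    for tool_call in reply.tool_calls:
        print(tool_call.tool_name, tool_call.arguments)
```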
Streaming
This Generator supports streaming the tokens from the LLM directly in the output. To do so, pass a function to the `streaming_callback` init parameter.
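A minimal sketch of a custom callback (each `StreamingChunk` exposes the newly generated text as `content`):

```python
from haystack.dataclasses import ChatMessage, StreamingChunk
from haystack_integrations.components.generators.mistral import MistralChatGenerator

# Print each chunk of text as it arrives
def my_callback(chunk: StreamingChunk):
    print(chunk.content, end="", flush=True)

generator = MistralChatGenerator(streaming_callback=my_callback)
generator.run([ChatMessage.from_user("Summarize NLP in one sentence.")])
```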
Usage
Install the mistral-haystack package to use the MistralChatGenerator:
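```shell
pip install mistral-haystack
```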
On its own
```python
from haystack_integrations.components.generators.mistral import MistralChatGenerator
from haystack.components.generators.utils import print_streaming_chunk
from haystack.dataclasses import ChatMessage
from haystack.utils import Secret

generator = MistralChatGenerator(
    api_key=Secret.from_env_var("MISTRAL_API_KEY"),
    streaming_callback=print_streaming_chunk,
)

message = ChatMessage.from_user("What's Natural Language Processing? Be brief.")
print(generator.run([message]))
```
In a Pipeline
Below is an example RAG pipeline that answers questions based on the contents of a URL. The ChatPromptBuilder adds the fetched contents to the messages, and the MistralChatGenerator generates an answer.
```python
from haystack import Pipeline
from haystack.components.builders import ChatPromptBuilder
from haystack.components.generators.utils import print_streaming_chunk
from haystack.components.fetchers import LinkContentFetcher
from haystack.components.converters import HTMLToDocument
from haystack.dataclasses import ChatMessage
from haystack_integrations.components.generators.mistral import MistralChatGenerator

fetcher = LinkContentFetcher()
converter = HTMLToDocument()
prompt_builder = ChatPromptBuilder(variables=["documents"])
llm = MistralChatGenerator(streaming_callback=print_streaming_chunk, model="mistral-small")

message_template = """Answer the following question based on the contents of the article: {{query}}\n
Article: {{documents[0].content}} \n
"""
messages = [ChatMessage.from_user(message_template)]

rag_pipeline = Pipeline()
rag_pipeline.add_component("fetcher", fetcher)
rag_pipeline.add_component("converter", converter)
rag_pipeline.add_component("prompt_builder", prompt_builder)
rag_pipeline.add_component("llm", llm)
rag_pipeline.connect("fetcher.streams", "converter.sources")
rag_pipeline.connect("converter.documents", "prompt_builder.documents")
rag_pipeline.connect("prompt_builder.prompt", "llm.messages")

question = "What are the capabilities of Mixtral?"
result = rag_pipeline.run(
    {
        "fetcher": {"urls": ["https://mistral.ai/news/mixtral-of-experts"]},
        "prompt_builder": {"template_variables": {"query": question}, "template": messages},
        "llm": {"generation_kwargs": {"max_tokens": 165}},
    },
)
```
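The answer is streamed to stdout through `print_streaming_chunk` while the pipeline runs. Since the generator's output is not connected to another component, the final reply is also returned in the pipeline result (a sketch, assuming the pipeline above):

```python
print(result["llm"]["replies"][0].text)
```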
Additional References
🧑‍🍳 Cookbook: Web QA with Mixtral-8x7B-Instruct-v0.1