Mistral Cloud Chat Model

Use the Mistral Cloud Chat Model node to combine Mistral Cloud's chat models with conversational agents.

On this page, you'll find the node parameters for the Mistral Cloud Chat Model node, and links to more resources.

Credentials

Refer to Mistral Cloud credentials for guidance on setting up authentication with this node.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Mistral Cloud Chat Model integrations page.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
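
As a rough illustration of the difference, here's a minimal Python sketch standing in for n8n's expression engine; the sample items and names are invented for the example:

```python
# Hypothetical input items, as a node might receive them.
items = [
    {"name": "Alice"},
    {"name": "Bob"},
    {"name": "Charlie"},
]

# Root nodes: {{ $json.name }} is resolved once per input item.
root_node_resolution = [item["name"] for item in items]
print(root_node_resolution)  # ['Alice', 'Bob', 'Charlie']

# Sub-nodes: {{ $json.name }} always resolves against the first item.
sub_node_resolution = items[0]["name"]
print(sub_node_resolution)   # 'Alice'
```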

Node parameters

Model: the model to use for generating the completion. n8n dynamically loads models from Mistral Cloud, so you'll only see the models available to your account.

Node options

  • Maximum Number of Tokens: the maximum length of the completion, in tokens.
  • Sampling Temperature: controls the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
  • Timeout: maximum request time in milliseconds.
  • Max Retries: maximum number of times to retry a request.
  • Top P: the probability mass the model samples from. Use a lower value to ignore less probable options.
  • Enable Safe Mode: inject a safety prompt before the conversation. This helps prevent the model from generating offensive content.
  • Random Seed: seed to use for random sampling. If set, different calls will generate deterministic results.
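
These options broadly map to parameters of Mistral's chat completions API, which the node calls through LangChain. The sketch below illustrates that mapping with a plain HTTP request in Python; the endpoint, field names (such as safe_prompt and random_seed), and response shape are assumptions drawn from Mistral's public API reference rather than n8n's implementation, so verify them against Mistral's documentation before relying on this:

```python
import os

import requests

# Assumed endpoint for Mistral's chat completions API; verify against Mistral's API reference.
MISTRAL_API_URL = "https://api.mistral.ai/v1/chat/completions"

payload = {
    "model": "mistral-small-latest",  # Model
    "max_tokens": 256,                # Maximum Number of Tokens
    "temperature": 0.7,               # Sampling Temperature
    "top_p": 0.9,                     # Top P
    "safe_prompt": True,              # Enable Safe Mode
    "random_seed": 42,                # Random Seed
    "messages": [{"role": "user", "content": "Write a one-line greeting."}],
}

# Timeout and Max Retries are handled client-side rather than sent to the API:
# the node option is in milliseconds, while requests takes seconds.
response = requests.post(
    MISTRAL_API_URL,
    json=payload,
    headers={"Authorization": f"Bearer {os.environ['MISTRAL_API_KEY']}"},
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```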

View example workflows and related content on n8n's website.

Refer to LangChain's Mistral documentation for more information about the service.

View n8n's Advanced AI documentation.