
Hugging Face Inference Model

Use the Hugging Face Inference Model node to generate completions with Hugging Face's models.

On this page, you'll find the node parameters for the Hugging Face Inference Model node, and links to more resources.

Credentials

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Hugging Face Inference Model integrations page.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
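To illustrate the difference, here is a minimal TypeScript sketch. It is not n8n source code; the item shape and values are hypothetical, and it only models how the same expression resolves against a list of items in a regular node versus a sub-node.

```typescript
// Hypothetical illustration of expression resolution; not n8n's implementation.
type Item = { json: { name: string } };

const items: Item[] = [
  { json: { name: "Ada" } },
  { json: { name: "Grace" } },
  { json: { name: "Linus" } },
];

// In most nodes, {{ $json.name }} is resolved once per input item:
const perItem = items.map((item) => item.json.name);
console.log(perItem); // ["Ada", "Grace", "Linus"]

// In a sub-node, the same expression always resolves against the first item:
const subNodeValue = items[0].json.name;
console.log(subNodeValue); // "Ada"
```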

Node parameters

Model: the model to use to generate the completion.

Node options

  • Custom Inference Endpoint: the URL of a custom inference endpoint to call instead of the hosted model.
  • Frequency Penalty: increase this to reduce the chance of the model repeating itself.
  • Maximum Number of Tokens: the maximum length of the completion, in tokens.
  • Presence Penalty: increase this to increase the chance of the model talking about new topics.
  • Sampling Temperature: controls the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations.
  • Top K: the number of token choices the model uses to generate the next token.
  • Top P: use a lower value to ignore less probable options.
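n8n's AI nodes are built on LangChain, which passes options like these to a language model wrapper. The following is a minimal sketch, assuming LangChain.js's HuggingFaceInference class from @langchain/community; the model name and environment variable are placeholders, and n8n's node may map or name these options differently.

```typescript
// Minimal sketch, assuming LangChain.js's HuggingFaceInference wrapper
// (@langchain/community). Option names are the LangChain ones; n8n maps its
// node options to equivalents. Presence Penalty is omitted here.
import { HuggingFaceInference } from "@langchain/community/llms/hf";

const model = new HuggingFaceInference({
  model: "mistralai/Mistral-7B-Instruct-v0.2", // the Model parameter (example)
  apiKey: process.env.HUGGINGFACEHUB_API_KEY,  // from the node's credentials
  // endpointUrl: "https://...",               // Custom Inference Endpoint
  maxTokens: 256,        // Maximum Number of Tokens
  temperature: 0.7,      // Sampling Temperature
  topK: 50,              // Top K
  topP: 0.9,             // Top P
  frequencyPenalty: 1.1, // Frequency Penalty
});

const completion = await model.invoke("Write a haiku about data pipelines.");
console.log(completion);
```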

Refer to LangChain's Hugging Face Inference Model documentation for more information about the service.

View n8n's Advanced AI documentation.