Embeddings OpenAI node#

Use the Embeddings OpenAI node to generate embeddings for a given text.

On this page, you'll find the node parameters for the Embeddings OpenAI node, and links to more resources.

Credentials

You can find authentication information for this node here.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
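A hypothetical illustration of this difference (not n8n source code), showing how the same expression resolves for a regular node versus a sub-node:

```typescript
// Three input items, each with a name field.
const items = [
  { json: { name: "Alice" } },
  { json: { name: "Bob" } },
  { json: { name: "Carol" } },
];

// Regular node: {{ $json.name }} is evaluated once per input item.
const regularNode = items.map((item) => item.json.name);
// ["Alice", "Bob", "Carol"]

// Sub-node: {{ $json.name }} always resolves against the first item.
const subNode = items[0].json.name;
// "Alice" — used for every item the sub-node processes
```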

Node options#

  • Model: Select the model to use for generating embeddings.
  • Base URL: Enter the URL to send the request to. Use this if you are using a self-hosted OpenAI-like model.
  • Batch Size: Enter the maximum number of documents to send in each request.
  • Strip New Lines: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default.
  • Timeout: Enter the maximum amount of time a request can take in seconds. Set to -1 for no timeout.
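These options map roughly onto LangChain's OpenAIEmbeddings class, which the node wraps. The following is a minimal sketch, not the node's own source; the model name, base URL, and values are placeholders, and the client-side timeout unit is an assumption.

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";

const embeddings = new OpenAIEmbeddings({
  apiKey: process.env.OPENAI_API_KEY,
  model: "text-embedding-3-small",  // Model
  batchSize: 512,                   // Batch Size: max documents sent per request
  stripNewLines: true,              // Strip New Lines (on by default)
  timeout: 60_000,                  // Timeout (assumed to be milliseconds at the client level)
  configuration: {
    // Base URL: only needed for self-hosted, OpenAI-compatible endpoints
    baseURL: "https://my-openai-compatible-host/v1",
  },
});

// Embed a single query or a batch of documents.
const queryVector = await embeddings.embedQuery("What is a vector store?");
const docVectors = await embeddings.embedDocuments(["first text", "second text"]);
```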

Templates and examples#

  • Ask questions about a PDF using AI, by David Roberts
  • Chat with PDF docs using AI (quoting sources), by David Roberts
  • Building Your First WhatsApp Chatbot, by Jimleuk

Browse Embeddings OpenAI integration templates, or search all templates.

Refer to LangChain's OpenAI embeddings documentation for more information about the service.

View n8n's Advanced AI documentation.

AI glossary#

  • completion: Completions are the responses generated by a model like GPT.
  • hallucinations: Hallucination in AI is when an LLM (large language model) mistakenly perceives patterns or objects that don't exist.
  • vector database: A vector database stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.
  • vector store: A vector store, or vector database, stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.
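As a rough sketch of how those pieces fit together, assuming LangChain JS with its in-memory vector store (the texts and query below are placeholders):

```typescript
import { OpenAIEmbeddings } from "@langchain/openai";
import { MemoryVectorStore } from "langchain/vectorstores/memory";

const embeddings = new OpenAIEmbeddings({ model: "text-embedding-3-small" });

// Embed the texts and store the resulting vectors.
const store = await MemoryVectorStore.fromTexts(
  ["n8n is a workflow automation tool.", "Embeddings are numeric representations of text."],
  [{ id: 1 }, { id: 2 }],
  embeddings
);

// Retrieve the documents most similar to a question.
const results = await store.similaritySearch("What are embeddings?", 1);
```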