
In Memory Vector Store#

Use the In Memory Vector Store node to store and retrieve embeddings in n8n's in-app memory.

On this page, you'll find the node parameters for the In Memory Vector Store node, and links to more resources.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's In Memory Vector Store integrations page.

Parameter resolution in sub-nodes

Sub-nodes behave differently from other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
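As an illustrative sketch (plain Python, not n8n source code), the difference between the two resolution behaviors can be modeled like this:

```python
# Five input items, each with a "name" field, as an n8n node might receive.
items = [{"name": n} for n in ["Ann", "Bo", "Cy", "Di", "Ed"]]

# Most nodes: an expression such as {{ $json.name }} is resolved
# once per input item, yielding each name in turn.
per_item = [item["name"] for item in items]

# Sub-nodes: the same expression always resolves against the first
# item, so every resolution yields the first name.
first_only = [items[0]["name"] for _ in items]
```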

This node is different from AI memory nodes

The in-memory storage described here is different from AI memory nodes such as Window Buffer Memory.

This node creates a vector database in the app memory.

Node parameters#

Operation Mode#

Vector Store nodes in n8n have three modes: Get Many, Insert Documents, and Retrieve Documents. The mode you select determines the operations you can perform with the node and which inputs and outputs are available.

Get Many#

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The node embeds the prompt, uses it for a similarity search, and returns the documents most similar to the prompt along with their similarity scores. This is useful if you want to retrieve a list of similar documents and pass them to a chain as additional context.
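A minimal sketch of what a Get Many lookup does conceptually (plain Python, not n8n's implementation; the `get_many` helper and the list-of-pairs store layout are hypothetical). The prompt is assumed to already be embedded as a vector:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def get_many(store, prompt_vec, limit):
    """Return the `limit` documents most similar to the embedded prompt,
    each paired with its similarity score (highest first)."""
    scored = [(cosine(vec, prompt_vec), doc) for doc, vec in store]
    scored.sort(key=lambda pair: pair[0], reverse=True)
    return scored[:limit]

# Hypothetical store: (document, embedding vector) pairs.
store = [("doc a", [1.0, 0.0]), ("doc b", [0.0, 1.0]), ("doc c", [0.9, 0.1])]
results = get_many(store, [1.0, 0.0], limit=2)
```

Setting `limit=2` here returns the two best matches, mirroring the Limit parameter described below.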

Insert Documents#

Use Insert Documents mode to insert new documents into your vector database.

Retrieve Documents (For Agent/Chain)#

Use Retrieve Documents mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.

Parameters for Get Many:

  • Memory Key: the key under which n8n stores the vector memory in the workflow data. n8n prefixes the key with the workflow ID to avoid collisions between workflows.
  • Prompt: the search query to embed and compare against the stored vectors.
  • Limit: the maximum number of results to retrieve from the vector store. For example, set this to 10 to get the ten best results.
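The workflow-ID prefixing of the Memory Key can be sketched like this (illustrative Python only; the `scoped_store` helper and key format are hypothetical, not n8n's actual scheme):

```python
# One shared dict standing in for n8n's in-app memory.
stores = {}

def scoped_store(workflow_id, memory_key):
    """Return the store for this workflow's memory key, creating it if
    needed. Prefixing with the workflow ID keeps two workflows that use
    the same Memory Key from colliding."""
    key = f"{workflow_id}__{memory_key}"
    return stores.setdefault(key, [])

# Two workflows using the identical Memory Key get separate stores.
scoped_store("wf1", "vector_store_key").append("doc A")
scoped_store("wf2", "vector_store_key").append("doc B")
```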

Parameters for Insert Documents:

  • Memory Key
  • Clear Store: whether to wipe the vector store for the given memory key for this workflow before inserting data.
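The Clear Store option can be modeled with a short sketch (illustrative Python; the `insert_documents` helper is hypothetical, not n8n code):

```python
def insert_documents(store, docs, clear_store=False):
    """Append documents to the in-memory store for a memory key.
    With clear_store=True, wipe the existing contents first, as the
    Clear Store parameter does."""
    if clear_store:
        store.clear()
    store.extend(docs)
    return store

store = ["old doc"]
insert_documents(store, ["new doc 1", "new doc 2"])          # keeps "old doc"
insert_documents(store, ["fresh doc"], clear_store=True)      # wipes first
```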

Parameters for Retrieve Documents (For Agent/Chain):

  • Memory Key

View example workflows and related content on n8n's website.

Refer to LangChain's Memory Vector Store documentation for more information about the service.

View n8n's Advanced AI documentation.