
Zep Vector Store node#

Use the Zep Vector Store to interact with Zep vector databases. You can insert documents into a vector database, get many documents from a vector database, and retrieve documents to provide them to a retriever connected to a chain.

On this page, you'll find the node parameters for the Zep Vector Store node, and links to more resources.

Credentials

You can find authentication information for this node here.

Examples and templates

For usage examples and templates to help you get started, refer to n8n's Zep Vector Store integrations page.

Parameter resolution in sub-nodes

Sub-nodes behave differently to other nodes when processing multiple items using an expression.

Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression {{ $json.name }} resolves to each name in turn.

In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression {{ $json.name }} always resolves to the first name.
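The hypothetical snippet below illustrates the difference in plain TypeScript (the item data is made up and this isn't n8n's internal API): a regular node resolves {{ $json.name }} once per item, while a sub-node resolves it against the first item only.

```typescript
// Illustrative input items, each carrying a `name` field.
const items = [
  { json: { name: "Alice" } },
  { json: { name: "Bob" } },
  { json: { name: "Carol" } },
];

// Most nodes: the expression {{ $json.name }} resolves once per item.
const perItem = items.map((item) => item.json.name);
// => ["Alice", "Bob", "Carol"]

// Sub-nodes: the same expression always resolves against the first item.
const firstItemOnly = items[0].json.name;
// => "Alice"
```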

Node parameters#

Operation Mode#

This Vector Store node has three modes: Get Many, Insert Documents, and Retrieve Documents. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.

Get Many#

In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for a similarity search. The node returns the documents most similar to the prompt, along with their similarity scores. This is useful when you want to retrieve a list of similar documents and pass them to an agent as additional context.
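Conceptually, this mode behaves like a LangChain similarity search against a Zep collection. The following is a hedged sketch using LangChain JS directly, with a placeholder Zep URL, collection name, and embedding model (none of these values come from this page, and in n8n the node handles this wiring for you):

```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { OpenAIEmbeddings } from "@langchain/openai";

// Placeholder connection details -- replace with your own Zep deployment.
// Constructing the store directly like this is an assumption for the sketch.
const vectorStore = new ZepVectorStore(new OpenAIEmbeddings(), {
  apiUrl: "http://localhost:8000",
  collectionName: "my_collection",
});

// The prompt is embedded, then compared against the stored documents.
// The second argument corresponds to the node's Limit parameter.
const results = await vectorStore.similaritySearchWithScore(
  "How do I reset my password?",
  10
);

for (const [doc, score] of results) {
  console.log(score.toFixed(3), doc.pageContent);
}
```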

Insert Documents#

Use Insert Documents mode to insert new documents into your vector database.
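As a minimal sketch of the equivalent insert step with LangChain JS (the connection details are placeholders and the config field names are assumptions based on LangChain's Zep integration); in n8n, the documents and embeddings come from the connected document loader and embedding nodes rather than being created inline:

```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { OpenAIEmbeddings } from "@langchain/openai";
import { Document } from "@langchain/core/documents";

// Hypothetical documents; in n8n these come from a document loader node.
const docs = [
  new Document({
    pageContent: "Reset your password from the account settings page.",
    metadata: { source: "handbook" },
  }),
  new Document({
    pageContent: "Contact support for billing questions.",
    metadata: { source: "handbook" },
  }),
];

// fromDocuments embeds the documents and writes them into the collection.
const vectorStore = await ZepVectorStore.fromDocuments(docs, new OpenAIEmbeddings(), {
  apiUrl: "http://localhost:8000",   // placeholder Zep API URL
  collectionName: "my_collection",   // the node's Collection Name parameter
  embeddingDimensions: 1536,         // see the Embedding Dimensions option below
});
```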

Retrieve Documents (For Agent/Chain)#

Use Retrieve Documents mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.
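In LangChain terms, this mode exposes the vector store as a retriever that a chain or agent queries at run time. A hedged sketch with the same placeholder configuration as above (in n8n, the connected retriever and chain nodes perform this step for you):

```typescript
import { ZepVectorStore } from "@langchain/community/vectorstores/zep";
import { OpenAIEmbeddings } from "@langchain/openai";

// Placeholder configuration, as in the earlier sketches.
const vectorStore = new ZepVectorStore(new OpenAIEmbeddings(), {
  apiUrl: "http://localhost:8000",
  collectionName: "my_collection",
});

// asRetriever() wraps the store so a chain can pull in relevant
// documents as additional context at query time.
const retriever = vectorStore.asRetriever({ k: 4 });

const contextDocs = await retriever.invoke("How do I reset my password?");
```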

Insert Documents parameters#

  • Collection Name: Enter the collection name where the data is stored.

Get Many parameters#

  • Collection Name: Enter the collection name where the data is stored.
  • Prompt: Enter the search query.
  • Limit: Enter how many results to retrieve from the vector store. For example, set this to 10 to get the ten best results.

Retrieve Documents (For Agent/Chain) parameters#

  • Collection Name: Enter the collection name where the data is stored.

Node options#

Embedding Dimensions#

This value must be the same when embedding the data and when querying it.

This sets the size of the array of floats used to represent the semantic meaning of a text document.
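For example, if the connected embedding model outputs 1536-dimensional vectors (as OpenAI's text-embedding-ada-002 does), the collection must use the same value. A hedged sketch of the corresponding LangChain JS config field, which appears to map to this option:

```typescript
// The dimension count must match the embedding model's output size,
// both when inserting data and when querying it.
const zepConfig = {
  apiUrl: "http://localhost:8000",   // placeholder
  collectionName: "my_collection",   // placeholder
  embeddingDimensions: 1536,         // e.g. text-embedding-ada-002 returns 1536 floats
};
```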

Read more about Zep embeddings in Zep's embeddings documentation.

Is Auto Embedded#

Available in the Insert Documents Operation Mode, enabled by default.

Disable this to configure your embeddings in Zep instead of in n8n.

Metadata Filter#

Available in Get Many mode. When searching for data, use this to match against metadata associated with the documents.

This is an AND query. If you specify more than one metadata filter field, all of them must match.

When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.
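As a purely illustrative sketch of the AND behavior described above (plain TypeScript, not the Zep API; the metadata fields are made up), two filter fields only match documents that satisfy both:

```typescript
type Doc = { pageContent: string; metadata: Record<string, string> };

const docs: Doc[] = [
  { pageContent: "Password reset steps", metadata: { source: "handbook", language: "en" } },
  { pageContent: "Passwort zurücksetzen", metadata: { source: "handbook", language: "de" } },
  { pageContent: "Release notes", metadata: { source: "blog", language: "en" } },
];

// Two Metadata Filter fields: a document must match both to be returned.
const filter = { source: "handbook", language: "en" };

const matches = docs.filter((doc) =>
  Object.entries(filter).every(([key, value]) => doc.metadata[key] === value)
);
// Only the first document matches.
```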

Templates and examples#

  • Ask questions about a PDF using AI, by David Roberts
  • Chat with PDF docs using AI (quoting sources), by David Roberts
  • Building Your First WhatsApp Chatbot, by Jimleuk

Browse Zep Vector Store integration templates, or search all templates.

Refer to LangChain's Zep documentation for more information about the service.

View n8n's Advanced AI documentation.

AI glossary#

  • completion: Completions are the responses generated by a model like GPT.
  • hallucinations: Hallucination in AI is when an LLM (large language model) mistakenly perceives patterns or objects that don't exist.
  • vector database: A vector database stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.
  • vector store: A vector store, or vector database, stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.