Pinecone Vector Store node#
Use the Pinecone node to interact with your Pinecone database as a vector store. You can insert documents into a vector database, get documents from a vector database, and retrieve documents to provide them to a retriever connected to a chain.
On this page, you'll find the node parameters for the Pinecone node, and links to more resources.
Credentials
You can find authentication information for this node here.
Parameter resolution in sub-nodes
Sub-nodes behave differently to other nodes when processing multiple items using an expression.
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.
In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name.
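For illustration only (this isn't n8n source code), a minimal TypeScript sketch of the difference:

```typescript
// Five input items, as in the example above.
type Item = { json: { name: string } };
const items: Item[] = [
  { json: { name: 'Ada' } },
  { json: { name: 'Ben' } },
  { json: { name: 'Cleo' } },
  { json: { name: 'Dev' } },
  { json: { name: 'Elif' } },
];

// Root node: {{ $json.name }} resolves once per item.
const rootNodeValues = items.map((item) => item.json.name);
// ['Ada', 'Ben', 'Cleo', 'Dev', 'Elif']

// Sub-node: {{ $json.name }} always resolves against the first item.
const subNodeValue = items[0].json.name;
// 'Ada', no matter which item is being processed
```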
Node parameters#
Operation Mode#
This Vector Store node has four modes: Get Many, Insert Documents, Retrieve Documents, and Update Documents. The mode you select determines the operations you can perform with the node and what inputs and outputs are available.
Get Many#
In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for a similarity search, and the node returns the documents most similar to the prompt along with their similarity scores. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context.
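Conceptually, this is an embedding-based similarity search against the index. As a rough equivalent outside n8n, here's a sketch using the LangChain JS Pinecone integration; the index name, embedding model, and environment variables are assumptions for illustration:

```typescript
import { Pinecone } from '@pinecone-database/pinecone';
import { PineconeStore } from '@langchain/pinecone';
import { OpenAIEmbeddings } from '@langchain/openai';

// Assumed: PINECONE_API_KEY and OPENAI_API_KEY are set, and 'my-index' exists.
const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const pineconeIndex = pinecone.index('my-index');

const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
  pineconeIndex,
});

// The prompt is embedded and compared against the stored vectors;
// each result comes back paired with its similarity score.
const resultsWithScores = await vectorStore.similaritySearchWithScore(
  'refund policy for damaged items', // Prompt
  10,                                // Limit
);
```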
Insert Documents#
Use Insert Documents mode to insert new documents into your vector database.
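A minimal sketch of the same operation with the LangChain JS Pinecone integration (the index name and sample documents are made up for illustration):

```typescript
import { Pinecone } from '@pinecone-database/pinecone';
import { PineconeStore } from '@langchain/pinecone';
import { OpenAIEmbeddings } from '@langchain/openai';
import { Document } from '@langchain/core/documents';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const pineconeIndex = pinecone.index('my-index'); // assumed index name

const docs = [
  new Document({ pageContent: 'Our refund window is 30 days.', metadata: { source: 'faq' } }),
  new Document({ pageContent: 'Shipping takes 3-5 business days.', metadata: { source: 'faq' } }),
];

// Each document is embedded and upserted into the index.
await PineconeStore.fromDocuments(docs, new OpenAIEmbeddings(), { pineconeIndex });
```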
Retrieve Documents (For Agent/Chain)#
Use Retrieve Documents mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node.
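Outside n8n, the equivalent LangChain pattern is exposing the vector store as a retriever that a chain or agent can call. A minimal sketch, reusing the assumed index setup from the earlier sketches:

```typescript
import { Pinecone } from '@pinecone-database/pinecone';
import { PineconeStore } from '@langchain/pinecone';
import { OpenAIEmbeddings } from '@langchain/openai';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const pineconeIndex = pinecone.index('my-index'); // assumed index name

const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
  pineconeIndex,
});

// Wrap the vector store as a retriever; a chain or agent calls it to
// fetch the k most relevant documents for a query.
const retriever = vectorStore.asRetriever({ k: 4 });
const relevantDocs = await retriever.invoke('refund policy for damaged items');
```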
Update Documents#
Use Update Documents mode to update documents in a vector database by ID. Fill in the ID with the ID of the embedding entry to update.
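Pinecone upserts by ID, so an update amounts to re-embedding a document and writing it under an existing ID. A sketch with the LangChain JS integration (the ID, index name, and content are made up for illustration):

```typescript
import { Pinecone } from '@pinecone-database/pinecone';
import { PineconeStore } from '@langchain/pinecone';
import { OpenAIEmbeddings } from '@langchain/openai';
import { Document } from '@langchain/core/documents';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const pineconeIndex = pinecone.index('my-index'); // assumed index name

const vectorStore = await PineconeStore.fromExistingIndex(new OpenAIEmbeddings(), {
  pineconeIndex,
});

// Writing to an existing ID overwrites that record (Pinecone upsert semantics),
// which is what updating a document amounts to.
await vectorStore.addDocuments(
  [new Document({ pageContent: 'Our refund window is now 60 days.', metadata: { source: 'faq' } })],
  { ids: ['faq-refund-001'] }, // ID of the embedding entry to update
);
```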
Get Many parameters#
- Pinecone Index: Select or enter the Pinecone Index to use.
- Prompt: Enter your search query.
- Limit: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results.
Insert Documents parameters#
- Pinecone Index: Select or enter the Pinecone Index to use.
Retrieve Documents (For Agent/Chain) parameters#
- Pinecone Index: Select or enter the Pinecone Index to use.
Node options#
Pinecone Namespace#
An optional way to partition the records within an index. Reads and writes are scoped to the namespace you specify, keeping that data separate from the rest of the index.
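For example, with the Pinecone TypeScript client (the index and namespace names and the vector dimension are placeholders):

```typescript
import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Operations on this handle only see records in the 'customer-docs' namespace.
const ns = pinecone.index('my-index').namespace('customer-docs');

// Placeholder query vector; the dimension must match your index.
const queryVector = new Array(1536).fill(0);
const results = await ns.query({ vector: queryVector, topK: 5, includeMetadata: true });
```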
Metadata Filter#
Available in Get Many mode. When searching for data, use this to filter results by the metadata associated with each document.
This is an `AND` query: if you specify more than one metadata filter field, all of them must match.
When inserting data, the metadata is set using the document loader. Refer to Default Data Loader for more information on loading documents.
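A sketch of what such a filter looks like using Pinecone's metadata filter operators (the index name, namespace, field names, and vector dimension are placeholders):

```typescript
import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });
const index = pinecone.index('my-index').namespace('customer-docs');

// Placeholder query vector; the dimension must match your index.
const queryVector = new Array(1536).fill(0);

// Both conditions must hold for a record to be returned (AND semantics).
const results = await index.query({
  vector: queryVector,
  topK: 10,
  includeMetadata: true,
  filter: {
    source: { $eq: 'faq' },
    year: { $gte: 2023 },
  },
});
```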
Clear Namespace#
Available in Insert Documents mode. Deletes all data from the namespace before inserting the new data.
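This corresponds to deleting every record in the target namespace before the new inserts; the index itself and any other namespaces are untouched. A sketch with the Pinecone TypeScript client (names are placeholders):

```typescript
import { Pinecone } from '@pinecone-database/pinecone';

const pinecone = new Pinecone({ apiKey: process.env.PINECONE_API_KEY! });

// Remove every record in the 'customer-docs' namespace of 'my-index'.
await pinecone.index('my-index').namespace('customer-docs').deleteAll();
```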
Templates and examples#
Related resources#
Refer to LangChain's Pinecone documentation for more information about the service.
View n8n's Advanced AI documentation.
Find your Pinecone index and namespace#
You can find your Pinecone index name and its namespaces in the Pinecone console, in your index's details.
AI glossary#
- completion: Completions are the responses generated by a model like GPT.
- hallucinations: Hallucination in AI is when an LLM (large language model) mistakenly perceives patterns or objects that don't exist.
- vector database: A vector database stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.
- vector store: A vector store, or vector database, stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions.