
Summarization Chain#

Use the Summarization Chain node to summarize multiple documents.

On this page, you'll find the node parameters for the Summarization Chain node, and links to more resources.

Node parameters#

Choose the type of data you need to summarize in Data to Summarize. The data type you choose determines the other node parameters.

  • Use Node Input (JSON) and Use Node Input (Binary): summarize the data coming into the node from the workflow.
    • You can configure the Chunking Strategy: choose how the node splits the incoming data into chunks.
      • If you choose Simple (Define Below), you can then set Characters Per Chunk and Chunk Overlap (Characters); the sketch after this list shows how these settings map to a character-based splitter.
      • Choose Advanced if you want to connect a text splitter sub-node that provides more configuration options.
  • Use Document Loader: summarize data provided by a document loader sub-node.
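
The Simple chunking settings correspond to a plain character-based text splitter. As a rough illustration only (not the node's internal code), here is a minimal Python LangChain sketch; the chunk size, overlap, and sample text are made-up example values:

```python
# Illustrative sketch of the Simple chunking settings.
# The values below are examples, not node defaults.
from langchain_text_splitters import CharacterTextSplitter

splitter = CharacterTextSplitter(
    separator="\n\n",   # prefer splitting on paragraph breaks
    chunk_size=1000,    # roughly "Characters Per Chunk"
    chunk_overlap=200,  # roughly "Chunk Overlap (Characters)"
)

docs = splitter.create_documents(["<long text coming into the node>"])
print(len(docs), "chunks")
```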

Node options#

You can configure the summarization method and prompts. Select Add Option > Summarization Method and Prompts.

Options in Summarization Method:

  • Map Reduce: this is the recommended option (see the sketch after this list). Learn more about Map Reduce in the LangChain documentation.
  • Refine: learn more about Refine in the LangChain documentation.
  • Stuff: learn more about Stuff in the LangChain documentation.
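
For orientation, the three methods correspond to LangChain's summarization chain types. The following Python LangChain sketch is illustrative only, not the node's internal code; the model name is a made-up example, and it reuses the docs produced by the chunking sketch above:

```python
# Illustrative sketch: the three Summarization Methods map to the
# LangChain chain types "map_reduce", "refine", and "stuff".
from langchain_openai import ChatOpenAI
from langchain.chains.summarize import load_summarize_chain

llm = ChatOpenAI(model="gpt-4o-mini")  # example model, not a node setting

# Map Reduce: summarize each chunk, then combine the partial summaries.
chain = load_summarize_chain(llm, chain_type="map_reduce")

# Refine: build one summary, refining it as each new chunk is processed.
# chain = load_summarize_chain(llm, chain_type="refine")

# Stuff: put all chunks into a single prompt (only suits small inputs).
# chain = load_summarize_chain(llm, chain_type="stuff")

summary = chain.invoke({"input_documents": docs})["output_text"]
```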

You can customize the Individual Summary Prompts and the Final Prompt to Combine; the node includes example prompts. Both prompts must include the "{text}" placeholder.
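
As a hedged illustration of how such prompts are typically wired up in LangChain (the prompt wording below is invented; only the {text} placeholder is required), continuing the sketch above:

```python
# Illustrative sketch: custom per-chunk and combine prompts for a
# map-reduce summarization. Both templates contain the {text} placeholder.
from langchain_core.prompts import PromptTemplate

individual_summary_prompt = PromptTemplate.from_template(
    "Write a concise summary of the following:\n\n{text}\n\nCONCISE SUMMARY:"
)
final_combine_prompt = PromptTemplate.from_template(
    "Combine these partial summaries into one final summary:\n\n{text}\n\nFINAL SUMMARY:"
)

chain = load_summarize_chain(
    llm,
    chain_type="map_reduce",
    map_prompt=individual_summary_prompt,  # applied to each chunk
    combine_prompt=final_combine_prompt,   # combines the chunk summaries
)
```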

Templates and examples#

  • Scrape and summarize webpages with AI, by n8n Team
  • Load and summarize Google Drive files with AI, by n8n Team
  • AI: Summarize podcast episode and enhance using Wikipedia, by n8n Team

Browse Summarization Chain integration templates, or search all templates.

Refer to LangChain's documentation on summarization for more information.

View n8n's Advanced AI documentation.
