# n8n Docs

> Documentation for n8n, a workflow automation platform. This file helps LLMs understand and use the documentation more effectively.

# Welcome to n8n Docs

This is the documentation for [n8n](https://n8n.io/), a [fair-code](https://faircode.io) licensed workflow automation tool that combines AI capabilities with business process automation. It covers everything from setup to usage and development. It's a work in progress and all [contributions](help-community/contributing/) are welcome.

## Where to start

- **Quickstarts**: Jump in with n8n's quickstart guides. [Try it out](try-it-out/)
- **Choose the right n8n for you**: Cloud, npm, self-host, and more. [Options](choose-n8n/)
- **Explore integrations**: Browse n8n's integrations library. [Find your apps](integrations/)
- **Build AI functionality**: n8n supports building AI functionality and tools. [Advanced AI](advanced-ai/)

## About n8n

n8n (pronounced n-eight-n) helps you to connect any app with an API to any other, and manipulate its data with little or no code.

- Customizable: highly flexible workflows and the option to build custom nodes.
- Convenient: use npm or Docker to try out n8n, or the Cloud hosting option if you want us to handle the infrastructure.
- Privacy-focused: self-host n8n for privacy and security.

# n8n v1.0 migration guide

This document provides a summary of what you should be aware of before updating to version 1.0 of n8n. The release of n8n 1.0 marks a milestone in n8n's journey to make n8n available for demanding production environments. Version 1.0 represents the hard work invested over the last four years to make n8n the most accessible, powerful, and versatile automation tool. n8n 1.0 is now ready for use in production.
## New features

### Python support in the Code node

Although JavaScript remains the default language, you can now also select Python as an option in the [Code node](../code/code-node/) and even make use of [many Python modules](https://pyodide.org/en/stable/usage/packages-in-pyodide.html#packages-in-pyodide). Note that Python is unavailable in Code nodes added to a workflow before v1.0.

[PR #4295](https://github.com/n8n-io/n8n/pull/4295), [PR #6209](https://github.com/n8n-io/n8n/pull/6209)

### Execution order

n8n 1.0 introduces a new execution order for multi-branch workflows. In multi-branch workflows, n8n needs to determine the order in which to execute nodes on branches. Previously, n8n executed the first node of each branch, then the second of each branch, and so on (breadth-first). The new execution order ensures that each branch executes completely before starting the next one (depth-first). Branches execute based on their position on the canvas, from top to bottom. If two branches are at the same height, the leftmost one executes first.

n8n used to execute multi-input nodes as long as they received data on their first input. Nodes connected to the second input of multi-input nodes automatically executed regardless of whether they received data. The new execution order introduced in n8n 1.0 simplifies this behavior: nodes now execute only when they receive data, and multi-input nodes require data on at least one of their inputs to execute.

Your existing workflows will use the legacy order, while new workflows will execute using the v1 order. You can configure the execution order for each workflow in [workflow settings](../workflows/settings/).

[PR #4238](https://github.com/n8n-io/n8n/pull/4238), [PR #6246](https://github.com/n8n-io/n8n/pull/6246), [PR #6507](https://github.com/n8n-io/n8n/pull/6507)

## Deprecations

### MySQL and MariaDB

n8n has deprecated support for MySQL and MariaDB as storage backends for n8n.
These database systems are used by only a few users, yet they require continuous development and maintenance efforts. n8n recommends migrating to PostgreSQL for better compatibility and long-term support.

[PR #6189](https://github.com/n8n-io/n8n/pull/6189)

### EXECUTIONS_PROCESS and "own" mode

Previously, you could use the `EXECUTIONS_PROCESS` environment variable to specify whether executions should run in the `main` process or in their `own` processes. This option and `own` mode are now deprecated and will be removed in a future version of n8n, because they led to increased code complexity while offering marginal benefits. Starting from n8n 1.0, `main` will be the new default.

Note that executions start much faster in `main` mode than in `own` mode. However, if a workflow consumes more memory than is available, it might crash the entire n8n application instead of just the worker thread. To mitigate this, make sure to allocate enough system resources or configure [queue mode](../hosting/scaling/queue-mode/) to distribute executions among multiple workers.

[PR #6196](https://github.com/n8n-io/n8n/pull/6196)

## Breaking changes

### Docker

#### Permissions change

When using Docker-based deployments, the n8n process now runs as the user `node` instead of `root`. This change increases security. If permission errors appear in your n8n container logs when starting n8n, you may need to update the permissions by executing the following command on the Docker host:

```
docker run --rm -it --user root -v ~/.n8n:/home/node/.n8n --entrypoint chown n8nio/base:16 -R node:node /home/node/.n8n
```

#### Image removal

We've removed the Debian and RHEL images. If you were using these, you need to change the image you use. This shouldn't result in any errors unless you were building a custom image based on one of those images.

#### Entrypoint change

The entrypoint for the container has changed, and you no longer need to specify the n8n command.
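As a sketch of what this looks like at the Docker level (the image tags, container name, and concurrency flag are illustrative, not prescribed):

```shell
# Before v1.0: the full n8n command had to be passed to the container
docker run -d --name n8n-worker n8nio/n8n:0.236.0 n8n worker --concurrency=5

# From v1.0: the entrypoint already supplies n8n, so pass only the subcommand
docker run -d --name n8n-worker n8nio/n8n:1.0.0 worker --concurrency=5
```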
If you were previously running `n8n worker --concurrency=5`, it's now `worker --concurrency=5`.

[PR #6365](https://github.com/n8n-io/n8n/pull/6365)

### Workflow failures due to expression errors

Workflow executions may fail due to syntax or runtime errors in expressions, such as those that reference non-existent nodes. While expressions already throw errors on the frontend, this change ensures that n8n also throws errors on the backend, where they were previously silently ignored. To receive notifications of failing workflows, n8n recommends setting up an "error workflow" under workflow settings.

[PR #6352](https://github.com/n8n-io/n8n/pull/6352)

### Mandatory owner account

This change makes [User Management](../user-management/) mandatory and removes support for other authentication methods, such as BasicAuth and External JWT. Note that the number of permitted users on [n8n.cloud](https://n8n.cloud/) or custom plans still varies depending on your subscription.

[PR #6362](https://github.com/n8n-io/n8n/pull/6362)

### Directory for installing custom nodes

n8n will no longer load custom nodes from its global `node_modules` directory. Instead, you must install (or link) them to `~/.n8n/custom` (or a directory defined by `N8N_CUSTOM_EXTENSIONS`). Custom nodes that are npm packages will be located in `~/.n8n/nodes`. If you have custom nodes that were linked using `npm link` into the global `node_modules` directory, you need to link them again, into `~/.n8n/nodes` instead.

[PR #6396](https://github.com/n8n-io/n8n/pull/6396)

### WebSockets

The `N8N_PUSH_BACKEND` environment variable can be used to configure one of two available methods for pushing updates to the user interface: `sse` and `websocket`. Starting with n8n 1.0, `websocket` is the default method.

[PR #6196](https://github.com/n8n-io/n8n/pull/6196)

### Date transformation functions

n8n provides various transformation functions that operate on dates.
These functions may return either a JavaScript `Date` or a Luxon `DateTime` object. With the new behavior, the return type always matches the input: if you call a date transformation function on a `Date`, it returns a `Date`; if you call it on a `DateTime` object, it returns a `DateTime` object. To identify any workflows and nodes that might be impacted by this change, you can use this [utility workflow](https://n8n.io/workflows/1929-v1-helper-find-params-with-affected-expressions/). For more information about date transformation functions, refer to the [official documentation](../data/expression-reference/).

[PR #6435](https://github.com/n8n-io/n8n/pull/6435)

### Execution data retention

Starting from n8n 1.0, all successful, failed, and manual workflow executions will be saved by default. These settings can be modified for each workflow under "Workflow Settings," or globally using the respective environment variables. Additionally, the `EXECUTIONS_DATA_PRUNE` setting will be enabled by default, with `EXECUTIONS_DATA_PRUNE_MAX_COUNT` set to 10,000. These default settings are designed to prevent performance degradation when using SQLite. Make sure to configure them according to your individual requirements and system capacity.

[PR #6577](https://github.com/n8n-io/n8n/pull/6577)

### Removed N8N_USE_DEPRECATED_REQUEST_LIB

The legacy `request` library has been deprecated for some time now. As of n8n 1.0, the ability to fall back to it in the HTTP Request node by setting the `N8N_USE_DEPRECATED_REQUEST_LIB` environment variable has been fully removed. The HTTP Request node will now always use the new `HttpRequest` interface. If you build custom nodes, refer to [HTTP request helpers](../integrations/creating-nodes/build/reference/http-helpers/) for more information on migrating to the new interface.
[PR #6413](https://github.com/n8n-io/n8n/pull/6413)

### Removed WEBHOOK_TUNNEL_URL

As of version 0.227.0, n8n renamed the `WEBHOOK_TUNNEL_URL` configuration option to `WEBHOOK_URL`. In n8n 1.0, `WEBHOOK_TUNNEL_URL` has been removed. Update your setup to reflect the new name. For more information about this configuration option, refer to [the docs](../hosting/configuration/configuration-examples/webhook-url/).

[PR #1408](https://github.com/n8n-io/n8n/pull/1408)

### Removed Node 16 support

n8n now requires Node 18.17.0 or above.

## Updating to n8n 1.0

1. Create a full backup of n8n.
1. n8n recommends updating to the latest n8n 0.x release before updating to n8n 1.x. This will allow you to pinpoint any potential issues to the correct release. Once you have verified that n8n 0.x starts up without any issues, proceed to the next step.
1. Carefully read the [Deprecations](#deprecations) and [Breaking Changes](#breaking-changes) sections above to assess how they may affect your setup.
1. Update to n8n 1.0:
    - During beta (before July 24th 2023): If using Docker, pull the `next` Docker image.
    - After July 24th 2023: If using Docker, pull the `latest` Docker image.
1. If you encounter any issues, redeploy the previous n8n version and restore the backup.

## Reporting issues

If you encounter any issues during the process of updating to n8n 1.0, please seek help in the community [forum](https://community.n8n.io/).

## Thank you

We would like to take a moment to express our gratitude to all of our users for their continued support and feedback. Your contributions are invaluable in helping us make n8n the best possible automation tool. We're excited to continue working with you as we move forward with the release of version 1.0 and beyond. Thank you for being a part of our journey!

# n8n v2.0 breaking changes

n8n v2.0 has been released, and with it came some important changes. This document highlights breaking changes and actions you should take to prepare for the transition.
These updates improve security, simplify configuration, and remove legacy features. The release of n8n 2.0 continues n8n's commitment to providing a secure, reliable, and production-ready automation platform. This major version includes important security enhancements and cleanup of deprecated features.

## Behavior changes

### Return expected sub-workflow data when the sub-workflow resumes from waiting (waiting for webhook, forms, HITL, etc.)

Previously, when an execution (parent) called a sub-execution (child) that contained a node causing the sub-execution to enter the waiting state, and the parent-execution was set up to wait for the sub-execution's completion, the parent-execution would receive incorrect results. Entering the waiting state can happen, for example, if the sub-execution contains a Wait node with a timeout longer than 65 seconds, a webhook call, a form submission, or a human-in-the-loop node such as the Slack node.

- v1: The parent-execution reproduces the sub-execution's input as its output.
- v2: The parent-execution receives the result of the child execution.

This allows you to use human-in-the-loop nodes in the sub-workflow and use the results (for example, approving or declining an action) in the parent-workflow.

**Migration path:** Review any workflows that call sub-workflows and expect to receive the input to the sub-workflow. Update these workflows to handle the new behavior, where the parent-workflow receives the output from the end of the child-workflow instead.

### Start node removed

The Start node is no longer supported. This node was the original way to begin workflows, but more specific trigger nodes now replace it.

**Migration path:** Replace the Start node based on how you use your workflow:

- **Manual executions:** Replace the Start node with a [Manual Trigger](../integrations/builtin/core-nodes/n8n-nodes-base.manualworkflowtrigger/) node.
- **Sub-workflows:** If another workflow calls this workflow as a sub-workflow, replace the Start node with an [Execute Workflow Trigger](../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflowtrigger/) node and activate the workflow.
- **Disabled Start nodes:** If the Start node is disabled, delete it from the workflow.

### Saving and publishing workflows

The new workflow publishing system replaces the previous active/inactive toggle. This means that the old "Activate/Deactivate" toggles become the new "Publish/Unpublish" buttons. This change gives you better control over when your workflow changes go live, reducing the risk of accidentally deploying work-in-progress changes to production. More information can be found here: [Saving and publishing workflows](../workflows/publish/).

### Removed nodes for retired services

The following nodes have been removed because the external services they connect to are no longer available:

- Spontit node
- crowd.dev node
- Kitemaker node
- Automizy node

**Migration path:** If your workflows use any of these nodes, update or remove those workflows to avoid errors.

## Security

### Block environment variable access from Code node by default

To improve security, n8n will block access to environment variables from the Code node by default. The default value for `N8N_BLOCK_ENV_ACCESS_IN_NODE` is now set to `true`.

**Migration path:** If your workflows require access to environment variables in Code nodes, set `N8N_BLOCK_ENV_ACCESS_IN_NODE=false` in your environment configuration. For sensitive data, use credentials or other secure methods instead of environment variables.

### Enforce settings file permissions

n8n will require strict file permissions for configuration files to improve security. By default, configuration files must use `0600` permissions, which means only the file owner can read and write them. This approach is similar to how SSH protects private keys.
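A minimal way to apply the expected permissions on Linux, sketched on a throwaway file (the demo path is illustrative; in practice the target is your n8n settings file, typically `~/.n8n/config`):

```shell
# Demonstrate 0600 (owner read/write only) on a throwaway file;
# apply the same chmod to your actual n8n settings file.
touch /tmp/n8n-settings-demo
chmod 600 /tmp/n8n-settings-demo

# Inspect the resulting permission bits (GNU coreutils stat)
stat -c '%a' /tmp/n8n-settings-demo
```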
**Migration path:** To test this behavior before v2.0, set `N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true`. If your environment doesn't support file permissions (for example, on Windows), set `N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=false` to disable this requirement.

### Enable task runners by default

n8n will enable [task runners](../hosting/configuration/task-runners/) by default to improve security and isolation. All Code node executions will run on task runners.

**Migration path:** Before upgrading to v2.0, set `N8N_RUNNERS_ENABLED=true` to test this behavior. Make sure your infrastructure meets the requirements for running task runners. For additional security, consider using [external mode](../hosting/configuration/task-runners/#external-mode).

### Remove task runner from `n8nio/n8n` Docker image

Starting with v2.0, the main `n8nio/n8n` Docker image will no longer include the task runner for external mode. You must use the separate `n8nio/runners` Docker image to run task runners in external mode.

**Migration path:** If you run task runners in Docker with external mode, update your setup to use the `n8nio/runners` image instead of `n8nio/n8n`.

### Remove Pyodide-based Python Code node and tool

n8n will remove the Pyodide-based Python Code node and tool and replace them with a [task runner-based](../hosting/configuration/task-runners/) implementation that uses native Python for better security and performance. Starting in v2.0, you can only use Python Code nodes with task runners in [external mode](../hosting/configuration/task-runners/#external-mode) and native Python tools. The native Python Code node doesn't support built-in variables like `_input` or dot access notation, which were available in the Pyodide-based version. For details, see the [Code node documentation](../integrations/builtin/core-nodes/n8n-nodes-base.code/#python-native). The native Python tool supports `_query` for the input string that the AI Agent passes to the tool when it calls it.
**Migration path:** To continue using Python in Code nodes, set up task runners in external mode and review your existing Python Code nodes and tools for compatibility.

### Disable ExecuteCommand and LocalFileTrigger nodes by default

n8n will disable the `ExecuteCommand` and `LocalFileTrigger` nodes by default because they pose security risks. These nodes allow users to run arbitrary commands and access the file system.

**Migration path:** If you need to use these nodes, remove them from the disabled nodes list in your n8n configuration by updating the `NODES_EXCLUDE` environment variable. For example, set `NODES_EXCLUDE="[]"` to enable all nodes, or remove only the specific nodes you need.

### Require authentication on OAuth callback URLs by default

n8n will require authentication for OAuth callback endpoints by default. The default value for `N8N_SKIP_AUTH_ON_OAUTH_CALLBACK` will change from `true` (no authentication required) to `false` (authentication required).

**Migration path:** Before upgrading to v2.0, set `N8N_SKIP_AUTH_ON_OAUTH_CALLBACK=false` and test your OAuth integrations to ensure they work with authentication enabled.

### Set default value for N8N_RESTRICT_FILE_ACCESS_TO

n8n will set a default value for `N8N_RESTRICT_FILE_ACCESS_TO` to control where file operations can occur. This affects the `ReadWriteFile` and `ReadBinaryFiles` nodes. By default, these nodes can only access files in the `~/.n8n-files` directory.

**Migration path:** Review your workflows that use file nodes and make sure they only access files in the allowed directory. If you need to allow access to other directories, set the `N8N_RESTRICT_FILE_ACCESS_TO` environment variable to your desired path.

### Change the default value of N8N_GIT_NODE_DISABLE_BARE_REPOS to true

By default, the Git node will now block bare repositories for security reasons.
The default value for `N8N_GIT_NODE_DISABLE_BARE_REPOS` is set to `true`, which means bare repositories are disabled unless you change this setting.

**Migration path:** If your workflows need to use bare repositories, set `N8N_GIT_NODE_DISABLE_BARE_REPOS=false` in your environment configuration to enable them.

## Data

### Drop MySQL/MariaDB support

n8n will no longer support MySQL and MariaDB as storage backends. This support was deprecated in v1.0. For best compatibility and long-term support, use PostgreSQL. The MySQL node will continue to be supported as before.

**Migration path:** Before upgrading to v2.0, use the database migration tool to move your data from MySQL or MariaDB to PostgreSQL or SQLite.

### Remove SQLite legacy driver

n8n will remove the legacy SQLite driver due to reliability issues. The pooling driver will become the default and only SQLite driver. The pooling driver uses WAL mode, a single write connection, and a pool of read connections. Our benchmarks show it can be up to 10 times faster.

**Migration path:** The `sqlite-pooled` driver will become the default automatically. You can enable pooling now by setting `DB_SQLITE_POOL_SIZE` to a value greater than `0`. The default pool size will be set to `2`.

### Remove in-memory binary data mode

n8n will remove the `default` mode for `N8N_DEFAULT_BINARY_DATA_MODE`, which keeps execution binary data in memory during execution. For better performance and stability, the following options will be available starting from v2:

- `filesystem`: Binary data is stored in the filesystem. Default option in regular mode.
- `database`: Binary data is stored in the database. Default option in queue mode.
- `s3`: Binary data is stored in an S3-compatible store.

The `N8N_AVAILABLE_BINARY_DATA_MODES` setting will also be removed, so the mode is now determined only by `N8N_DEFAULT_BINARY_DATA_MODE`.

**Migration path:** Filesystem or database mode will be used automatically based on configuration.
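If you prefer to pin the binary data mode explicitly rather than rely on the automatic default, you can set the variable yourself; a sketch using `filesystem` (any of the three values above works):

```shell
# Pin the binary data mode explicitly.
# Valid values from v2.0: filesystem, database, s3
export N8N_DEFAULT_BINARY_DATA_MODE=filesystem

# Confirm what n8n will see in its environment
echo "$N8N_DEFAULT_BINARY_DATA_MODE"
```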
Make sure your n8n instance has enough disk space to store binary data. For details, see the [binary data configuration](../hosting/configuration/environment-variables/binary-data/).

## Configuration & Environment

### Upgrade dotenv

n8n loads environment configuration from a `.env` file using the `dotenv` library. The library will be upgraded from version 8.6.0 to the latest version, which may change how `.env` files are parsed. Key breaking changes include:

- Backtick support ([#615](https://github.com/motdotla/dotenv/pull/615)): If your values contain backticks, wrap them in single or double quotes.
- Multiline support: You can now use multiline values.
- `#` marks the beginning of a comment: Lines starting with `#` are treated as comments.

**Migration path:** Review the [dotenv changelog](https://github.com/motdotla/dotenv/blob/master/CHANGELOG.md) and update your `.env` file to ensure compatibility with the new version.

### Remove `n8n --tunnel` option

The `n8n --tunnel` command-line option will be removed in v2.0.

**Migration path:** If you currently use the `--tunnel` option for development or testing, switch to an alternative tunneling solution such as ngrok, localtunnel, or Cloudflare Tunnel. Update your workflows and documentation to reflect this change.

### Remove QUEUE_WORKER_MAX_STALLED_COUNT

The `QUEUE_WORKER_MAX_STALLED_COUNT` environment variable and the Bull retry mechanism for stalled jobs will be removed because they often caused confusion and didn't work reliably.

**Migration path:** Delete this environment variable from your configuration. After upgrading, n8n will no longer automatically retry stalled jobs. If you need to handle stalled jobs, consider implementing your own retry logic or monitoring.

### Remove N8N_CONFIG_FILES

The `N8N_CONFIG_FILES` environment variable has been removed.

**Migration path:** Delete this environment variable from your configuration.
Move configuration into environment variables, a `.env` file, or [`_FILE`](../hosting/configuration/configuration-methods/#keeping-sensitive-data-in-separate-files)-based configuration.

## CLI & Workflow

### Replace CLI command update:workflow

The `update:workflow` CLI command will be deprecated and replaced by two new commands that deliver similar functionality with more clarity:

- `publish:workflow` with parameters `id` and `versionId` (optional). The `--all` parameter will be removed to prevent accidental publishing of workflows in production environments.
- `unpublish:workflow` with parameters `id` and `all`.

**Migration path:** Use the new `publish:workflow` command to publish workflows individually by ID, optionally specifying a version. For unpublishing, use the new `unpublish:workflow` command. This provides better clarity and control over workflow publishing states.

## External Hooks

### Deprecated frontend workflow hooks

The hooks `workflow.activeChange` and `workflow.activeChangeCurrent` will be deprecated and replaced by a new hook, `workflow.published`. The new hook will be triggered when any version of a workflow is published.

**Migration path:** Update your code to use the new `workflow.published` hook instead of `workflow.activeChange` and `workflow.activeChangeCurrent`. This hook provides more consistent behavior and will be triggered whenever a workflow version is published.

## Release channels

n8n has renamed the release channels from `latest` and `next` to `stable` and `beta`, respectively. The `stable` tag designates the latest stable release, and the `beta` tag designates the latest experimental release. These tags are available on both npm and Docker Hub. For now, n8n will continue to tag releases as `latest` and `next`. These tags will be removed in a future major version.

**Recommendation:** Pin your n8n version to a specific version number, for example, `2.0.0`.
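With Docker, for example, pinning means pulling an explicit version tag rather than a channel tag (the tag below is illustrative; substitute the release you have tested):

```shell
# Pull a fixed release instead of tracking the stable/beta channel tags
docker pull n8nio/n8n:2.0.0
```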
## Reporting issues

If you run into any problems while updating to n8n 2.0, visit the community [forum](https://community.n8n.io/) for help and support.

# Choose your n8n

This section contains information on n8n's range of platforms, pricing plans, and licenses.

## Platforms

There are different ways to set up n8n depending on how you intend to use it:

- [n8n Cloud](../manage-cloud/overview/): hosted solution, no need to install anything.
- [Self-host](../hosting/): recommended method for production or customized use cases.
    - [npm](../hosting/installation/npm/)
    - [Docker](../hosting/installation/docker/)
    - [Server setup guides](../hosting/installation/server-setups/) for popular platforms
- [OEM deployment](../hosting/oem-deployment/): Surface n8n's interface inside your own product's UI. Requires an OEM agreement - [contact n8n](mailto:license@n8n.io) for details.

**Self-hosting knowledge prerequisites**

Self-hosting n8n requires technical knowledge, including:

- Setting up and configuring servers and containers
- Managing application resources and scaling
- Securing servers and applications
- Configuring n8n

n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/).

## Licenses

n8n's [Sustainable Use License](https://github.com/n8n-io/n8n/blob/master/LICENSE.md) and [n8n Enterprise License](https://github.com/n8n-io/n8n/blob/master/LICENSE_EE.md) are based on the [fair-code](https://faircode.io/) model. For a detailed explanation of the license, refer to [Sustainable Use License](../sustainable-use-license/).

## Free versions

n8n offers the following free options:

- A free trial of Cloud
- A free community edition for self-hosted users

## Paid versions

n8n has two paid versions:

- n8n Cloud: choose from a range of paid plans to suit your usage and feature needs.
- Self-hosted: there are both free and paid versions of self-hosted n8n.

For details of the Cloud plans and contact details for Enterprise Self-hosted, refer to [Pricing](https://n8n.io/pricing/) on the n8n website.

# External secrets

**Feature availability**

- External secrets are available on Enterprise Self-hosted and Enterprise Cloud plans.
- n8n supports the following secret providers: 1Password (via [Connect Server](https://developer.1password.com/docs/connect/get-started/)), AWS Secrets Manager, Azure Key Vault, GCP Secrets Manager, and HashiCorp Vault.
- From n8n version 2.10.0, you can connect multiple vaults per secret provider. Older versions only support one vault per provider.
- From version 2.13.0, if enabled, project editors can use external secrets within their projects, and project admins can also manage project vaults.
- n8n doesn't support [HashiCorp Vault Secrets](https://developer.hashicorp.com/hcp/docs/vault-secrets).

**Infisical deprecation**

Infisical is deprecated. From version 2.10.0, you can't connect new Infisical vaults. Existing ones remain for now.

You can use an external secrets store to manage [credentials](../glossary/#credential-n8n) for n8n. n8n stores all credentials encrypted in its database, and restricts access to them by default. With the external secrets feature, you can store sensitive credential information in an external vault, and have n8n load it in when required. This provides an extra layer of security and allows you to manage credentials used across multiple [n8n environments](../source-control-environments/) in one central place.

## Global vaults

By default, a secrets vault is **global**: users across the instance can use credentials that reference secrets from that vault. In personal projects, only instance owners and admins can use secrets from global vaults in credentials.

## Project vaults

Instance admins can share a vault with a specific [project](../user-management/rbac/projects/).
Once you assign a vault to a project, only that project's credentials can reference its secrets. You can choose to tie a vault to a single project or keep it global. To change the vault scope:

1. In n8n, go to **Settings** > **External Secrets**.
1. Find the vault you want to configure and select **Edit**.
1. Under **Share**, choose one of the following:
    - **Global**: Share this vault across your entire n8n instance. This allows credentials across the instance to reference these secrets.
    - **Project**: Restrict this vault to a specific project. Choosing a project limits secret access to only that project's credentials.
1. **Save** your configuration.

## Connect n8n to your secrets store

**Secret values**

n8n only supports plaintext values for secrets, not JSON objects.

1. In n8n, go to **Settings** > **External Secrets**.
1. Click **Add secrets vault**.
1. Enter a unique name for your vault. This will be the first segment when referencing this vault in a `{{ $secrets.<vault-name>.<secret-name> }}` expression in a credential.
1. Select one of the supported secret providers.
1. Enter the credentials for your provider. Refer to the provider-specific sections below for details.
1. **Save** your configuration.

As long as this store is connected, you can reference its secrets in credentials.

### 1Password

**1Password Connect Server required**

n8n integrates with [1Password Connect Server](https://developer.1password.com/docs/connect/get-started/), a self-hosted API for machine access to 1Password. This isn't the same as a personal or team 1Password account. You must deploy and run a Connect Server to use this provider.

Provide your **Connect Server URL** and **Access Token**. The Connect Server URL is the address where your server is accessible (for example, `http://localhost:8080`). The Access Token is the token you created for the Connect Server integration. n8n reads all vaults and items accessible to the token. Each 1Password item becomes a secret, with the item's fields accessible as properties.
Use `{{ $secrets.<vault-name>.<item-name>.<field-name> }}` to access a specific field value.

### AWS Secrets Manager

Provide your **access key ID**, **secret access key**, and **region**. The IAM user must have the `secretsmanager:ListSecrets`, `secretsmanager:BatchGetSecretValue`, and `secretsmanager:GetSecretValue` permissions.

To give n8n access to all secrets in your AWS Secrets Manager, you can attach the following policy to the IAM user:

```
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AccessAllSecrets",
            "Effect": "Allow",
            "Action": [
                "secretsmanager:ListSecrets",
                "secretsmanager:BatchGetSecretValue",
                "secretsmanager:GetResourcePolicy",
                "secretsmanager:GetSecretValue",
                "secretsmanager:DescribeSecret",
                "secretsmanager:ListSecretVersionIds"
            ],
            "Resource": "*"
        }
    ]
}
```

You can also be more restrictive and give n8n access to only specific AWS Secrets Manager secrets. You still need to allow the `secretsmanager:ListSecrets` and `secretsmanager:BatchGetSecretValue` permissions on all resources. These permissions allow n8n to list and batch-retrieve secrets, but don't on their own provide access to the secret values. Next, you need to set the scope for the `secretsmanager:GetSecretValue` permission to the specific Amazon Resource Names (ARNs) of the secrets you wish to share with n8n. Ensure you use the correct region and account ID in each resource ARN. You can find the ARN details in the AWS dashboard for your secrets.
For example, the following IAM policy only allows access to secrets with a name starting with `n8n` in your specified AWS account and region: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "ListingSecrets", "Effect": "Allow", "Action": [ "secretsmanager:ListSecrets", "secretsmanager:BatchGetSecretValue" ], "Resource": "*" }, { "Sid": "RetrievingSecrets", "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": [ "arn:aws:secretsmanager:us-west-2:123456789000:secret:n8n*" ] } ] } ``` For more IAM permission policy examples, consult the [AWS documentation](https://docs.aws.amazon.com/secretsmanager/latest/userguide/auth-and-access_iam-policies.html#auth-and-access_examples_batch). ### Azure Key Vault Provide your **vault name**, **tenant ID**, **client ID**, and **client secret**. Refer to the Azure documentation to [register a Microsoft Entra ID app and create a service principal](https://learn.microsoft.com/en-us/entra/identity-platform/howto-create-service-principal-portal). n8n supports only single-line values for secrets. ### GCP Secrets Manager Provide a **Service Account Key** (JSON) for a service account that has at least these roles: `Secret Manager Secret Accessor` and `Secret Manager Secret Viewer`. Refer to Google's [service account documentation](https://cloud.google.com/iam/docs/service-account-overview) for more information. ### HashiCorp Vault Provide the **Vault URL** for your vault instance, and select your **Authentication Method**. Enter your authentication details. Optionally provide a namespace. - Refer to the HashiCorp documentation for your authentication method: - [Token auth method](https://developer.hashicorp.com/vault/docs/auth/token) - [AppRole auth method](https://developer.hashicorp.com/vault/docs/auth/approle) - [Userpass auth method](https://developer.hashicorp.com/vault/docs/auth/userpass) - If you use vault namespaces, you can enter the namespace n8n should connect to. 
Refer to [Vault Enterprise namespaces](https://developer.hashicorp.com/vault/docs/enterprise/namespaces) for more information on HashiCorp Vault namespaces. #### Manual KV mount configuration By default, n8n autodiscovers KV secret engines by reading `sys/mounts`. If your Vault token doesn't have access to `sys/mounts`, you can manually specify the KV engine mount path and version instead: - **KV Mount Path**: The mount path of your KV secret engine (for example, `secret/`). When set, n8n skips `sys/mounts` autodiscovery and uses this path directly. Leave blank to use autodiscovery. - **KV Version**: The KV engine version (`v1` or `v2`). Defaults to `v2`. Only applies when you specify a **KV Mount Path**. Your Vault token still needs read and list access to the KV path itself. The following example shows a minimal Vault policy for a KV v2 mount at `secret/`: ``` # Read and list secrets at the "secret/" KV v2 mount path path "secret/data/*" { capabilities = ["read"] } path "secret/metadata/*" { capabilities = ["read", "list"] } ``` For KV v1, you only need a single policy path: ``` # Read and list secrets at the "kv/" KV v1 mount path path "kv/*" { capabilities = ["read", "list"] } ``` ## Use secrets in n8n credentials To use a secret from your store in an n8n credential: 1. Create a new credential, or open an existing one. 1. On the field where you want to use a secret: 1. Hover over the field. 1. Select **Expression**. 1. In the field where you want to use a secret, enter an [expression](../glossary/#expression-n8n) referencing the secret name: ``` {{ $secrets.<vault_name>.<secret_name> }} ``` `<vault_name>` is the name you entered when you added the store. Replace `<secret_name>` with the name as it appears in your vault. ## Using external secrets with n8n environments n8n's [Source control and environments](../source-control-environments/) feature allows you to create different n8n environments, backed by Git. The feature doesn't support using different credentials in different instances.
You can use an external secrets vault to provide different credentials for different environments by connecting each n8n instance to a different vault or project environment. For example, suppose you have two n8n instances, one for development and one for production. In your secrets provider, create a project with two environments, development and production. Generate a token for each environment of your secrets provider. Use the token for the development environment to connect your development n8n instance, and the token for your production environment to connect your production n8n instance. ## Using external secrets in projects You can share a vault with a project so that only that project's credentials can reference its secrets. Refer to [Project vaults](#project-vaults) for setup steps. Project-scoped vaults are available from version `2.11.0`. ### Access for project roles Version `2.13.0` and later: Before version `2.13.0`, using external secrets in an [RBAC project](../user-management/rbac/) required an [instance owner or instance admin](../user-management/account-types/) as a member of the project. From version `2.13.0`, instance owners and admins can grant [project editors](../user-management/rbac/role-types/#project-editor) and [project admins](../user-management/rbac/role-types/#project-admin) access to external secrets. To enable this: 1. Go to **Settings** > **External Secrets**. 1. Turn on **Enable external secrets for project roles**. When enabled, **Project Editors** can: - View available external secret vaults shared with the project (in **Project** > **Settings**). - Use secrets from the project's vaults in credentials. **Project Admins** get the same access, plus they can: - Create new vaults for the project (in **Project** > **Settings**). - Update and delete vaults assigned to the project. Global vault access: Global vaults created in **Settings** > **External Secrets** are visible in **Project** > **Settings** but are read-only for project roles.
Only instance admins can modify or delete global vaults. ### Custom roles For more fine-grained access control, instance owners and admins can create a [custom project role](../user-management/rbac/custom-roles/). Go to **Settings** > **Project roles** > **Create role**. In the list of permissions, configure: - **Secrets vaults**: Controls vault management (create, view, edit, delete, and sync vaults). - **Secrets**: Controls whether the role can use secrets in credential expressions. Both permissions are independent. For example, a role may need only the **Secrets** permission to use secrets in credentials without managing vaults. Refer to [Secret vault scopes](../user-management/rbac/custom-roles/#secret-vault-scopes) for the full list of available scopes. ## Troubleshooting ### Secrets don't resolve in production Version `2.13.0` and later: From version `2.13.0`, project editors and admins with [secrets access enabled](#access-for-project-roles) can use external secrets in their own credentials. The restriction below applies only to older versions or when the opt-in toggle is off. In versions before `2.13.0` (or when **Enable external secrets for project roles** is off), only instance owners and admins can resolve secrets at runtime. If an owner or admin updates another user's credential with a secrets expression, it may appear to work in preview but fail in production. In this case, only use external secrets in credentials owned by an instance owner or admin. #### AI agent AI agents are artificial intelligence systems capable of responding to requests, making decisions, and performing real-world tasks for users. They use large language models (LLMs) to interpret user input and make decisions about how to best process requests using the information and resources they have available. #### AI chain AI chains allow you to interact with large language models (LLMs) and other resources in sequences of calls to components.
AI chains in n8n don't use persistent memory, so you can't use them to reference previous context (use AI agents for this). #### AI completion Completions are the responses generated by a model like GPT. #### AI embedding Embeddings are numerical representations of data using vectors. They're used by AI to interpret complex data and relationships by mapping values across many dimensions. Vector databases, or vector stores, are databases designed to store and access embeddings. #### AI groundedness In AI, and specifically in retrieval-augmented generation (RAG) contexts, groundedness and ungroundedness are measures of how much a model's responses accurately reflect source information. The model uses its source documents to generate grounded responses, while ungrounded responses involve speculation or hallucination unsupported by those same sources. #### AI hallucination Hallucination in AI is when an LLM (large language model) mistakenly perceives patterns or objects that don't exist. #### AI reranking Reranking is a technique that refines the order of a list of candidate documents to improve the relevance of search results. Retrieval-Augmented Generation (RAG) and other applications use reranking to prioritize the most relevant information for generation or downstream tasks. #### AI memory In an AI context, memory allows AI tools to persist message context across interactions. This allows you to have continuing conversations with AI agents, for example, without submitting ongoing context with each message. In n8n, AI agent nodes can use memory, but AI chains can't. #### AI retrieval-augmented generation (RAG) Retrieval-augmented generation, or RAG, is a technique for providing LLMs access to new information from external sources to improve AI responses. RAG systems retrieve relevant documents to ground responses in up-to-date, domain-specific, or proprietary knowledge to supplement their original training data.
RAG systems often rely on vector stores to manage and search this external data efficiently. #### AI tool In an AI context, a tool is an add-on resource that the AI can refer to for specific information or functionality when responding to a request. The AI model can use a tool to interact with external systems or complete specific, focused tasks. #### AI vector store A vector store, or vector database, stores mathematical representations of information. Use with embeddings and retrievers to create a database that your AI can access when answering questions. #### API APIs, or application programming interfaces, offer programmatic access to a service's data and functionality. APIs make it easier for software to interact with external systems. They're often offered as an alternative to traditional user-focused interfaces accessed through web browsers or UI. #### canvas (n8n) The canvas is the main interface for building workflows in n8n's editor UI. You use the canvas to add and connect nodes to compose workflows. #### cluster node (n8n) In n8n, cluster nodes are groups of nodes that work together to provide functionality in a workflow. They consist of a root node and one or more sub nodes that extend the node's functionality. #### credential (n8n) In n8n, credentials store authentication information to connect with specific apps and services. After creating credentials with your authentication information (username and password, API key, OAuth secrets, etc.), you can use the associated app node to interact with the service. #### data pinning (n8n) Data pinning allows you to temporarily freeze the output data of a node during workflow development. This allows you to develop workflows with predictable data without making repeated requests to external services. Production workflows ignore pinned data and request new data on each execution. #### editor (n8n) The n8n editor UI allows you to create and manage workflows. 
The main area is the canvas, where you can compose workflows by adding, configuring, and connecting nodes. The side and top panels allow you to access other areas of the UI like credentials, templates, variables, executions, and more. #### entitlement (n8n) In n8n, entitlements grant n8n instances access to plan-restricted features for a specific period of time. Floating entitlements are a pool of entitlements that you can distribute among various n8n instances. You can re-assign a floating entitlement to transfer its access to a different n8n instance. #### evaluation (n8n) In n8n, evaluation allows you to tag and organize execution history and compare it against new executions. You can use this to understand how your workflow performs over time as you make changes. In particular, this is useful while developing AI-centered workflows. #### expression (n8n) In n8n, expressions allow you to populate node parameters dynamically by executing JavaScript code. Instead of providing a static value, you can use the n8n expression syntax to define the value using data from previous nodes, other workflows, or your n8n environment. #### LangChain LangChain is an AI-development framework used to work with large language models (LLMs). LangChain provides a standardized system for working with a wide variety of models and other resources and linking different components together to build complex applications. #### Large language model (LLM) Large language models, or LLMs, are AI machine learning models designed to excel in natural language processing (NLP) tasks. They're built by training on large amounts of data to develop probabilistic models of language and other data. #### node (n8n) In n8n, nodes are individual components that you compose to create workflows. Nodes define when the workflow should run, allow you to fetch, send, and process data, can define flow control logic, and connect with external services. 
#### project (n8n) n8n projects allow you to separate workflows, variables, and credentials into distinct groups for easier management. Projects make it easier for teams to collaborate by sharing and compartmentalizing related resources. #### root node (n8n) Each n8n cluster node contains a single root node that defines the main functionality of the cluster. One or more sub nodes attach to the root node to extend its functionality. #### sub node (n8n) n8n cluster nodes consist of one or more sub nodes connected to a root node. Sub nodes extend the functionality of the root node, providing access to specific services or resources or offering specific types of dedicated processing, like calculator functionality, for example. #### template (n8n) n8n templates are pre-built workflows designed by n8n and community members that you can import into your n8n instance. When using templates, you may need to fill in credentials and adjust the configuration to suit your needs. #### trigger node (n8n) A trigger node is a special node responsible for executing the workflow in response to certain conditions. All production workflows need at least one trigger to determine when the workflow should run. #### workflow (n8n) An n8n workflow is a collection of nodes that automate a process. Workflows begin execution when a trigger condition occurs and execute sequentially to achieve complex tasks. # Insights Insights gives instance owners and admins visibility into how workflows perform over time. This feature consists of three parts: - [**Insights summary banner**](#insights-summary-banner): Shows key metrics about your instance from the last 7 days at the top of the **Overview** space. - [**Insights dashboard**](#insights-dashboard): A more detailed visual breakdown with per-workflow metrics and historical comparisons.
- [**Time saved (Workflow ROI)**](#setting-the-time-saved-by-a-workflow): For each workflow, you can choose to set a fixed amount of time saved, or dynamically calculate time saved based on the execution path taken. Feature availability: The insights summary banner displays activity from the last 7 days for all plans. The insights dashboard is only available on Pro, Business, and Enterprise plans. ## Insights summary banner n8n collects several metrics for both the insights summary banner and dashboard. They include: - Total production executions (not including sub-workflow executions or manual executions) - Total failed production executions - Production execution failure rate - Time saved (when set on at least one active workflow) - Run time average (including wait time from any wait nodes) ## Insights dashboard Access the **Insights** section from the side navigation. Each metric from the summary banner is also clickable, taking you to the corresponding chart. The insights dashboard also has a table showing individual insights from each workflow, including total production executions, failed production executions, failure rate, time saved, and run time average. ## Insights time periods By default, the insights summary banner and dashboard show a rolling 7-day window with a comparison to the previous period to identify increases or decreases for each metric. On the dashboard, paid plans also display data for other date ranges: - Pro: 7 and 14 days - Business: 24 hours, 7 days, 14 days, 30 days - Enterprise: 24 hours, 7 days, 14 days, 30 days, 90 days, 6 months, 1 year ## Setting the time saved by a workflow For each workflow, you can track how much time it saves you. This setting helps you calculate how much time automating a process saves over time versus the manual effort to complete the same task or process.
Once configured, n8n calculates the amount of time the workflow saves you based on the number of production executions and displays it on the summary banner and insights dashboard. You can choose between two methods for calculating time saved: ### Fixed time saved With fixed time saved, you set a single time value that applies to every production execution of the workflow, regardless of which path the execution takes. To configure fixed time saved: 1. Navigate to the workflow 1. Select the three dots menu in the top right and select **Settings** 1. In the **Estimated time saved** dropdown, select **Fixed** 1. Enter the number of minutes of work each execution saves 1. Save your settings ### Dynamic time saved Dynamic time saved calculates time savings based on the actual execution path taken, accounting for workflows where different execution paths save different amounts of time. To configure dynamic time saved: 1. Navigate to the workflow 1. Select the three dots menu in the top right and select **Settings** 1. In the **Estimated time saved** dropdown, select **Dynamic** 1. Save your settings 1. Add **Time Saved** nodes to your workflow at the points where time is saved 1. For each Time Saved node, configure: - **Time saved**: The amount of time saved (in minutes) - **Calculation mode**: Choose whether to calculate the time saved once for all items in an execution, or per item, which multiplies the minutes saved by the number of input items When you use dynamic time saved, n8n adds up the time from all Time Saved nodes that execute during a workflow run to calculate the total time saved for that execution. Subworkflow support: Time saved tracking currently only works on parent workflows. Time saved from subworkflows isn't currently supported, with plans to support this in a future release.
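As a sketch of the dynamic calculation, suppose one Time Saved node is set to 5 minutes calculated once per execution, and another to 2 minutes per item with 10 input items (hypothetical values):

```shell
# Hypothetical dynamic time saved calculation for a single execution.
fixed=5        # Time Saved node: 5 minutes, once per execution
per_item=2     # Time Saved node: 2 minutes per item
items=10       # number of input items reaching the per-item node
total=$((fixed + per_item * items))
echo "${total} minutes saved"
```

n8n would report 25 minutes for that execution, since it sums every Time Saved node that runs.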
## Disable or configure insights metrics collection If you self-host n8n, you can disable or configure insights and metrics collection using [environment variables](../hosting/configuration/environment-variables/insights/). By default, n8n keeps compacted insights data for 365 days (`N8N_INSIGHTS_MAX_AGE_DAYS`). n8n caps retention at 730 days (two years); use a lower number for a shorter window. To turn off insights collection entirely, set `N8N_DISABLED_MODULES=insights` (refer to the [environment variables page](../hosting/configuration/environment-variables/insights/)). Self-host upgrade: In older versions, pruning could follow license-driven defaults (commonly 180 days). `N8N_INSIGHTS_MAX_AGE_DAYS` now controls pruning (default 365). Set `N8N_INSIGHTS_MAX_AGE_DAYS=180` if you want a retention window like that previous default. n8n stores recent insights at one-hour granularity, then compacts older data into day-level and week-level summaries. Use [Insights environment variables](../hosting/configuration/environment-variables/insights/) to control how long n8n waits before each compaction step. Raising those thresholds above the defaults keeps finer detail longer. That adds more rows to `insights_by_period` and uses more database space than extending `N8N_INSIGHTS_MAX_AGE_DAYS` alone. Increase `N8N_INSIGHTS_MAX_AGE_DAYS` first if you only need a longer retention window. ## Insights FAQs ### Which executions does n8n use to calculate the values in the insights banner and dashboard? n8n insights only collects data from production executions (for example, those from active workflows triggered on a schedule or a webhook) from the main (parent) workflow. This means that it doesn't count manual (test) executions or executions from sub-workflows or error workflows. ### Does n8n use historic execution data when upgrading to a version with insights? n8n only starts collecting data for insights once you update to the first supported version (1.89.0).
This means it only reports on executions from that point forward and you won't see execution data in insights from prior periods. # Keyboard shortcuts and controls n8n provides keyboard shortcuts for some actions. ## Workflow controls - **Ctrl** + **Alt** + **n**: create new workflow - **Ctrl** + **o**: open workflow - **Ctrl** + **s**: save the current workflow - **Ctrl** + **z**: undo - **Ctrl** + **shift** + **z**: redo - **Ctrl** + **Enter**: execute workflow ## Canvas ### Move the canvas - **Ctrl** + **Left Mouse Button** + drag: move node view - **Ctrl** + **Middle mouse button** + drag: move node view - **Space** + drag: move node view - **Middle mouse button** + drag: move node view - Two fingers on a touch screen: move node view ### Canvas zoom - **+** or **=**: zoom in - **-** or **\_**: zoom out - **0**: reset zoom level - **1**: zoom to fit workflow - **Ctrl** + **Mouse wheel**: zoom in/out ### Nodes on the canvas - **Double click** on a node: open the node details - **Ctrl/Cmd** + **Double click** on a sub-workflow node: open the sub-workflow in a new tab - **Ctrl** + **a**: select all nodes - **Ctrl** + **v**: paste nodes - **Shift** + **s**: add sticky note ### With one or more nodes selected in canvas - **ArrowDown**: select sibling node below the current one - **ArrowLeft**: select node left of the current one - **ArrowRight**: select node right of the current one - **ArrowUp**: select sibling node above the current one - **Ctrl** + **c**: copy - **Ctrl** + **x**: cut - **D**: deactivate - **Delete**: delete - **Enter**: open - **F2**: rename - **P**: pin data in node. Refer to [Data pinning](../data/data-pinning/) for more information. 
- **Shift** + **ArrowLeft**: select all nodes left of the current one - **Shift** + **ArrowRight**: select all nodes right of the current one - **Ctrl/Cmd** + **Shift** + **o** on a sub-workflow node: open the sub-workflow in a new tab ## Node panel - **N**: open the Node Panel - **Enter**: insert selected node into workflow - **Escape**: close Node panel ### Node panel categories - **Enter**: insert node into workflow, collapse/expand category, open subcategory - **ArrowRight**: expand category, open subcategory - **ArrowLeft**: collapse category, close subcategory view ## Within nodes - **=**: in an empty parameter input, this switches to [expressions](../glossary/#expression-n8n) mode. ## Command bar The Command Bar provides quick access to actions and navigation throughout n8n. Access it using **Ctrl/Cmd + K**, or click the magnifying glass icon on the canvas. Commands adapt based on your current view and permissions. - **Workflow actions:** Add nodes, save, test, tidy up, publish/unpublish, duplicate, import/export, archive, delete - **Resource navigation:** Create and open workflows, credentials, data tables, projects; access recent resources - **Execution actions:** Debug, copy, retry, stop, or delete executions - **General navigation:** Access Templates, Variables, Insights, Settings, Help resources, and Documentation This guide outlines a series of tutorials and resources designed to get you started with n8n. It's not necessary to complete all items listed to start using n8n. Use this as a reference to navigate to the most relevant parts of the documentation and other resources according to your needs. ## Join the community n8n has an active community where you can get and offer help. Connect, share, and learn with other n8n users: - [Ask questions](https://community.n8n.io/t/readme-welcome-to-the-n8n-community/44381) and [make feature requests](https://community.n8n.io/c/feature-requests) in the Community Forum. 
- [Report bugs](https://github.com/n8n-io/n8n/issues) and [contribute](https://github.com/n8n-io/n8n/blob/master/CONTRIBUTING.md) on GitHub. ## Set up your n8n If you don't have an account yet, sign up for a [free trial on n8n Cloud](https://app.n8n.cloud/register) or install n8n's community edition with [Docker](../hosting/installation/docker/) (recommended) or [npm](../hosting/installation/npm/). See [Choose your n8n](../choose-n8n/) for more details. ## Try it out Start with the quickstart guides to help you get up and running with building basic workflows. - [A very quick quickstart](../try-it-out/quickstart/) - [A longer introduction](../try-it-out/tutorial-first-workflow/) - [Build an AI workflow in n8n](../advanced-ai/intro-tutorial/) ## Structured Courses n8n offers two sets of courses. ### Video courses Learn key concepts and n8n features, while building examples as you go. - The [Beginner](https://www.youtube.com/playlist?list=PLlET0GsrLUL59YbxstZE71WszP3pVnZfI) course covers the basics of n8n. - The [Advanced](https://www.youtube.com/playlist?list=PLlET0GsrLUL5bxmx5c1H1Ms_OtOPYZIEG) course covers more complex workflows, more technical nodes, and enterprise features. ### Text courses Build more complex workflows while learning key concepts along the way. Earn a badge and an avatar in your community profile. - [Level 1: Beginner Course](https://blog.n8n.io/announcing-the-n8n-certification-course-for-beginners-level-1/) - [Level 2: Intermediate Course](https://blog.n8n.io/announcing-course-level-two/) ## Self-hosting n8n Explore various [self-hosting options in n8n](../hosting/). If you’re not sure where to start, these are two popular options: - [Hosting n8n on DigitalOcean](../hosting/installation/server-setups/digital-ocean/) - [Hosting n8n on Amazon Web Services](../hosting/installation/server-setups/aws/) ## Build a node If you can't find a node for a specific app or service, you can build a node yourself and share it with the community.
See what others have built on the [npm website](https://www.npmjs.com/search?q=keywords:n8n-community-node-package). - [Build a declarative-style node](../integrations/creating-nodes/build/declarative-style-node/) - [Learn how to build your own n8n nodes (Youtube Video)](https://www.youtube.com/live/OI6zHJ56eW0?si=SMD7L1J5fZ2mf79W) ## Stay updated - Follow new features and bug fixes in the [Release Notes](../release-notes/) - Follow n8n on socials: [Twitter/X](https://twitter.com/n8n_io), [Discord](https://discord.com/invite/vWwMVThRta), [LinkedIn](https://www.linkedin.com/company/n8n/), [YouTube](https://www.youtube.com/@n8n-io) # License Key To enable certain licensed features, you must first activate your license. You can do this either through the UI or by setting environment variables. ## Add a license key using the UI In your n8n instance: 1. Log in as **Admin** or **Owner**. 1. Select **Settings** > **Usage and plan**. 1. Select **Enter activation key**. 1. Paste in your license key. 1. Select **Activate**. ## Add a license key using an environment variable In your n8n configuration, set `N8N_LICENSE_ACTIVATION_KEY` to your license key. If the instance already has an activated license, this variable will have no effect. Refer to [Environment variables](../hosting/configuration/configuration-methods/) to learn more about configuring n8n. ## Allowlist the license server IP addresses n8n uses Cloudflare to host the license server. As the specific IP addresses can change, you need to allowlist the [full range of Cloudflare IP addresses](https://www.cloudflare.com/ips/) to ensure n8n can always reach the license server. # Log streaming Feature availability: Log Streaming is available on all Enterprise plans. Log streaming allows you to send events from n8n to your own logging tools. This allows you to manage your n8n monitoring in your own alerting and logging processes. ## Set up log streaming To use log streaming, you have to add a streaming destination. 1.
Navigate to **Settings** > **Log Streaming**. 1. Select **Add new destination**. 1. Choose your destination type. n8n opens the **New Event Destination** modal. 1. In the **New Event Destination** modal, enter the configuration information for your event destination. The required fields depend on the type of destination you're using. 1. Select **Events** to choose which events to stream. 1. Select **Save**. Self-hosted users: If you self-host n8n, you can configure additional log streaming behavior using [Environment variables](../hosting/configuration/environment-variables/logs/#log-streaming). You can also manage destinations from environment variables; see [Configure log streaming destinations using environment variables](#configure-using-environment-variables). ## Per-process event log files n8n persists each emitted event to a local log file before forwarding it to streaming destinations. The file survives restarts and lets n8n re-emit events that weren't yet delivered. By default, n8n writes the event log to `/n8nEventLog.log`, with a `-worker` or `-webhook-processor` suffix on those processes. When a single n8n process owns the file, this default works as expected. Shared writable filesystems: If multiple n8n processes share one writable volume, for example [queue mode](../hosting/scaling/queue-mode/) workers backed by a shared persistent volume on NFS or EFS, they must not write to the same event log file. Concurrent appends from multiple processes can interleave or corrupt the file, leading to recovery failures and lost events. To avoid this, set [`N8N_EVENTBUS_LOGWRITER_LOGFULLPATH`](../hosting/configuration/environment-variables/logs/#log-streaming) on each process to a unique absolute path that ends in `.log`. n8n uses the configured path verbatim and doesn't append a process-type suffix, so your orchestrator owns uniqueness across processes.
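For example, a queue-mode worker might be started with a unique event log path along these lines (the path is illustrative):

```shell
# One unique absolute path per process; must end in .log.
# n8n uses this value verbatim and adds no -worker/-webhook-processor suffix.
export N8N_EVENTBUS_LOGWRITER_LOGFULLPATH="/data/logs/n8nEventLog-worker-1.log"
echo "$N8N_EVENTBUS_LOGWRITER_LOGFULLPATH"
```

Give each additional worker or webhook processor its own path (for example, `-worker-2.log`) in your orchestrator's process definition.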
The companion variable [`N8N_EVENTBUS_LOGWRITER_MAXTOTALMESSAGESPERFILE`](../hosting/configuration/environment-variables/logs/#log-streaming) bounds how many lines n8n parses from a single event log file during recovery, so a corrupted file can't exhaust process memory. Notes: - Default behavior is unchanged when `N8N_EVENTBUS_LOGWRITER_LOGFULLPATH` isn't set. - When the variable is set, n8n doesn't auto-suffix the path. Each process must receive its own value. - If a shared `n8nEventLog-worker.log` file already exists from a previous deployment, quarantine it manually before opting in. n8n doesn't auto-delete legacy files. ## Events The following events are available. You can choose which events to stream in **Settings** > **Log Streaming** > **Events**. - Workflow - Started - Success - Failed - Cancelled - Node executions - Started - Finished - Audit - User login success - User login failed - User signed up - User updated - User deleted - User invited - User invitation accepted - User re-invited - User email failed - User reset requested - User reset - User credentials created - User credentials shared - User credentials updated - User credentials deleted - User API created - User API deleted - User MFA enabled - User MFA disabled - User execution deleted - Execution data revealed - Execution data reveal failed - Package installed - Package updated - Package deleted - Workflow created - Workflow deleted - Workflow updated - Workflow archived - Workflow unarchived - Workflow activated - Workflow deactivated - Workflow version updated - Workflow executed - Workflow waiting - Workflow resumed - Variable created - Variable updated - Variable deleted - External secrets provider settings saved - External secrets provider reloaded - External secrets connection created - External secrets connection updated - External secrets connection deleted - External secrets connection tested - External secrets connection reloaded - Personal publishing restricted enabled - Personal 
publishing restricted disabled - Personal sharing restricted enabled - Personal sharing restricted disabled - 2FA enforcement enabled - 2FA enforcement disabled - Token exchange succeeded - Token exchange failed - Token exchange embed login - Token exchange embed login failed - Token exchange identity linked - Token exchange user provisioned - Token exchange role updated - Role mapping roles resolved - Role mapping rule created - Role mapping rule updated - Role mapping rule deleted - Role mapping rules bulk deleted - Worker - Started - Stopped - AI node logs - Memory get messages - Memory added message - Output parser parsed - Retriever get relevant documents - Embeddings embedded document - Embeddings embedded query - Document processed - Text splitter split - Tool called - Vector store searched - LLM generated - LLM error - Vector store populated - Vector store updated - Runner - Task requested - Response received - Queue - Job enqueued - Job dequeued - Job completed - Job failed - Job stalled ## Destinations n8n supports three destination types: - A syslog server - A generic webhook - A Sentry client ## Configure using environment variables If you self-host n8n, you can manage log streaming destinations from environment variables instead of the UI. Available from n8n v2.19.0. Set `N8N_LOG_STREAMING_MANAGED_BY_ENV` to `true` and provide your destinations as a JSON array in `N8N_LOG_STREAMING_DESTINATIONS`. n8n reapplies these on every startup and locks the **Log Streaming** UI as read-only. See [Manage instance settings using environment variables](../hosting/configuration/settings-env-vars/) for the full pattern. 
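As a minimal sketch of the pattern just described (the syslog hostname and the subscribed events are placeholders), the two variables might be set like this before starting n8n:

```shell
# Lock log streaming to env-managed mode and declare one syslog destination.
# syslog.example.com is a placeholder; point this at your own collector.
export N8N_LOG_STREAMING_MANAGED_BY_ENV=true
export N8N_LOG_STREAMING_DESTINATIONS='[
  {
    "type": "syslog",
    "label": "SIEM",
    "enabled": true,
    "subscribedEvents": ["n8n.audit"],
    "host": "syslog.example.com",
    "port": 514
  }
]'

# Sanity-check that the destinations value is valid JSON before booting n8n.
echo "$N8N_LOG_STREAMING_DESTINATIONS" | python3 -c 'import json,sys; json.load(sys.stdin)' && echo "destinations OK"
```

Because n8n reapplies these values on every startup, treat the environment (not the UI) as the source of truth once `N8N_LOG_STREAMING_MANAGED_BY_ENV` is enabled.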
| Variable | Type | Default | Description | | --- | --- | --- | --- | | `N8N_LOG_STREAMING_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage log streaming from environment variables. When `true`, n8n applies the log streaming variables on every startup and locks the matching UI controls. | | `N8N_LOG_STREAMING_DESTINATIONS` | JSON string | - | JSON array of log streaming destinations. Each destination is an object with a `type` of `webhook`, `syslog`, or `sentry`, plus the configuration for that type. | ### Common fields Every destination accepts these fields, regardless of `type`. | Field | Type | Required | Description | | --- | --- | --- | --- | | `type` | `"webhook"` \| `"syslog"` \| `"sentry"` | Yes | Destination type. | | `label` | string | No | Display name shown in the UI. | | `enabled` | boolean | No | Whether the destination forwards events. | | `subscribedEvents` | string[] | No | Event names or group prefixes to forward (for example `n8n.audit`, `n8n.workflow`). | | `anonymizeAuditMessages` | boolean | No | Strip sensitive payload data from `n8n.audit.*` events. | | `circuitBreaker` | object | No | Failure-protection settings. See [Circuit breaker](#circuit-breaker). | ### Webhook | Field | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | `url` | string (URL) | Yes | - | Endpoint that receives the event payload. | | `method` | `"GET"` \| `"POST"` \| `"PUT"` | No | `"POST"` | HTTP method used to deliver the payload. | | `sendQuery` | boolean | No | `false` | Whether to send query parameters.
| | `specifyQuery` | `"keypair"` \| `"json"` | No | `"keypair"` | How query parameters are specified. | | `queryParameters` | `{ parameters: [{ name, value }] }` | No | - | Query parameters as key/value pairs. Used when `specifyQuery` is `"keypair"`. | | `jsonQuery` | string | No | `""` | Query parameters as a JSON string. Used when `specifyQuery` is `"json"`. | | `sendHeaders` | boolean | No | `false` | Whether to send headers. | | `specifyHeaders` | `"keypair"` \| `"json"` | No | `"keypair"` | How headers are specified. | | `headerParameters` | `{ parameters: [{ name, value }] }` | No | - | Headers as key/value pairs. Used when `specifyHeaders` is `"keypair"`. | | `jsonHeaders` | string | No | `""` | Headers as a JSON string. Used when `specifyHeaders` is `"json"`. | | `options` | object | No | `{}` | Connection-level options. See [Webhook options](#webhook-options). | #### Webhook options | Field | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | `allowUnauthorizedCerts` | boolean | No | `false` | Ignore SSL certificate validation. | | `queryParameterArrays` | `"repeat"` \| `"brackets"` \| `"indices"` | No | - | How to serialize array values in query parameters. | | `redirect` | `{ redirect: { followRedirects, maxRedirects } }` | No | `{ redirect: {} }` | Follow HTTP redirects. `followRedirects` defaults to `false`; `maxRedirects` defaults to `21`. | | `proxy` | `{ proxy: { protocol, host, port } }` | No | `{ proxy: {} }` | HTTP/HTTPS proxy. `protocol` is `"https"` or `"http"`; `host` defaults to `"127.0.0.1"`; `port` defaults to `9000`. | | `timeout` | integer ≥ 1 (ms) | No | `5000` | Time to wait for the server to start the response before aborting. | | `socket` | `{ keepAlive, maxSockets, maxFreeSockets }` | No | `{ keepAlive: true, maxSockets: 50, maxFreeSockets: 5 }` | Socket pool configuration.
`maxSockets` and `maxFreeSockets` are integers ≥ 1. | ``` [ { "type": "webhook", "label": "Audit", "subscribedEvents": ["n8n.audit", "n8n.workflow"], "anonymizeAuditMessages": true, "url": "https://hooks.example.com/n8n", "method": "POST", "sendHeaders": true, "specifyHeaders": "keypair", "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer s3cret" } ] }, "options": { "timeout": 5000, "redirect": { "redirect": { "followRedirects": true, "maxRedirects": 5 } } } } ] ``` ### Syslog | Field | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | `host` | string | Yes | - | Syslog server hostname or IP. | | `port` | number | No | `514` | Syslog server port. | | `protocol` | `"udp"` \| `"tcp"` \| `"tls"` | No | `"udp"` | Transport protocol for the syslog connection. | | `tlsCa` | string | When `protocol` is `"tls"` | `""` | PEM-formatted CA certificate used for the TLS connection. | | `facility` | number | No | `16` | Syslog facility code. Allowed values: `0` (Kernel), `1` (User), `3` (System), `13` (Audit), `14` (Alert), `16` to `23` (Local0 to Local7). | | `app_name` | string | No | `"n8n"` | Value used as the syslog `APP-NAME`. | ``` [ { "type": "syslog", "label": "SIEM", "subscribedEvents": ["n8n.audit", "n8n.workflow"], "host": "syslog.example.com", "port": 514, "protocol": "tls", "tlsCa": "-----BEGIN CERTIFICATE-----\n…\n-----END CERTIFICATE-----", "facility": 16, "app_name": "n8n" } ] ``` ### Sentry | Field | Type | Required | Default | Description | | --- | --- | --- | --- | --- | | `dsn` | string (URL) | Yes | - | Sentry DSN client key.
| ``` [ { "type": "sentry", "label": "Sentry prod", "subscribedEvents": ["n8n.workflow"], "dsn": "https://public@sentry.example.com/1" } ] ``` ### Circuit breaker A circuit breaker temporarily stops delivery to a destination after repeated failures, preventing load on a struggling downstream service. Available on every destination type. | Field | Type | Required | Default | Description | | --------------- | ------------------ | -------- | ------- | ----------------------------------------------------------------------------------------------------- | | `maxFailures` | integer ≥ 1 | No | `5` | n8n stops sending requests to the destination once failures within `failureWindow` reach this number. | | `failureWindow` | integer ≥ 100 (ms) | No | `60000` | Sliding window for counting failures. Older failures expire. | ``` { "circuitBreaker": { "maxFailures": 3, "failureWindow": 30000 } } ``` ### Complete example ``` N8N_LOG_STREAMING_MANAGED_BY_ENV=true N8N_LOG_STREAMING_DESTINATIONS='[ { "type": "webhook", "label": "Ops webhook", "enabled": true, "subscribedEvents": ["n8n.workflow", "n8n.audit"], "anonymizeAuditMessages": true, "url": "https://hooks.example.com/n8n", "method": "POST", "sendHeaders": true, "specifyHeaders": "keypair", "headerParameters": { "parameters": [ { "name": "Authorization", "value": "Bearer s3cret" } ] }, "circuitBreaker": { "maxFailures": 5, "failureWindow": 60000 } }, { "type": "sentry", "label": "Sentry prod", "subscribedEvents": ["n8n.workflow"], "dsn": "https://public@sentry.example.com/1" } ]' ``` # n8n v2.0 migration tool The migration tool helps you prepare your n8n instance for upgrading to version 2.0 by identifying workflows and configurations that need attention before the upgrade. You can see all breaking changes for v2 [on this page](../2-0-breaking-changes/). ## Accessing the Tool Navigate to **Settings > Migration Report** to view your compatibility status. User role access The migration tool is available for global admins only. 
## Understanding Your Migration Status At the top of the page, you'll see: "X out of Y workflows are compatible with n8n 2.0" This tells you how many workflows will continue working without changes after upgrading. Your goal is to address the issues preventing the remaining workflows from being compatible, as well as global instance issues. ## Viewing Issues The tool organizes potential problems into two categories: ### Workflow Issues Tab Shows breaking changes that affect specific workflows in your instance. What you'll see for each issue: - **Issue title:** A clear name for the problem - **Severity badge (Critical/Medium/Low):** How urgent this is to fix - **Critical:** Fix before upgrading or workflows will fail - **Medium:** May cause unexpected behavior or require attention soon - **Low:** Minor changes or deprecations that won't break functionality - **Description:** Explanation of what's changing and why it matters - **Documentation link:** Click to read detailed migration explanations - **Affected workflow count:** How many of your workflows have this issue #### Workflow Issue Detail Page Click **X workflows affected** to see all affected workflows. What you'll see for each workflow: - **Name:** The workflow name. Click the name to open the workflow editor. - **State:** Whether the workflow is published - **Nodes affected:** The list of workflow nodes affected by the issue. Click a node to open the workflow editor focused on that node. - **Number of executions:** The total number of executions of the workflow - **Last executed:** The date the workflow was last executed - **Last updated:** The date the workflow was last updated ### Instance Issues Tab Shows configuration changes that affect your entire n8n instance, not specific workflows.
What you'll see for each issue: - Same information as workflow issues (title, severity, description, docs) - **No workflow count:** These are global settings that apply instance-wide The v2.0 migration tool scans your n8n instance to identify potential compatibility issues and configuration changes required for upgrading to v2.0. This reference details each check the tool performs, explains the impact of detected issues, and provides recommendations to prepare your instance for migration. ## Understanding Empty States ### No Workflow Issues Found All your workflows are compatible with v2.0. Check the **Instance Issues** tab to ensure your server configuration is also ready. ### No Instance Issues Found Your instance configuration is compatible with v2.0. Check the **Workflow Issues** tab to ensure all workflows are also ready. ### Both Tabs Empty Your n8n instance is fully ready to upgrade to version 2.0. ## Recommended Workflow ### Initial Assessment - Review the compatibility summary - Browse all issues in both tabs to understand the scope ### Sort by Severity - Start with Critical issues (they'll break workflows) - Move to Medium issues (may cause problems) - Address Low issues last (deprecation warnings) ### Fix Workflow Issues - Click into each issue to see affected workflows - Read the documentation for fix instructions - Update each workflow as needed - Test workflows in a development environment ### Address Instance Issues - Update environment variables or server configuration - Follow documentation for each instance-level change ### Verify Your Work - Click **Refresh** to re-scan. If there's no **Refresh** button, reload the page to trigger a re-scan. - Confirm that no unresolved issues remain - Verify that the compatibility count matches your total workflow count ### Proceed with Upgrade After addressing all issues, you're ready to upgrade to n8n 2.0. # Release notes New features and bug fixes for n8n.
You can also view the [Releases](https://github.com/n8n-io/n8n/releases) in the GitHub repository. Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## How to update n8n How you update n8n depends on which platform you use. Refer to the documentation for your setup: - [Cloud](../manage-cloud/update-cloud-version/) - Self-hosted options: - [npm](../hosting/installation/npm/) - [Docker](../hosting/installation/docker/) ## Semantic versioning in n8n n8n uses [semantic versioning](https://semver.org/). All version numbers are in the format `MAJOR.MINOR.PATCH`. Version numbers increment as follows: - MAJOR version when making incompatible changes that can require user action. - MINOR version when adding functionality in a backward-compatible manner. - PATCH version when making backward-compatible bug fixes. Older versions You can find the release notes for older versions of n8n: [1.x](1-x/) and [0.x](0-x/) ## n8n@2.21.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.21.0...n8n@2.21.1) for this version.\ **Release date:** 2026-05-13 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.21.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.21.0) for this version.\ **Release date:** 2026-05-12 This release contains bug fixes. ### Contributors [etairl](https://github.com/etairl)\ [devareddy05](https://github.com/devareddy05)\ [aikido-autofix[bot]](https://github.com/apps/aikido-autofix)\ [sudarshan12s](https://github.com/sudarshan12s) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub.
## n8n@2.20.7 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.6...n8n@2.20.7) for this version.\ **Release date:** 2026-05-13 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.20.6 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.5...n8n@2.20.6) for this version.\ **Release date:** 2026-05-08 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.20.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.4...n8n@2.20.5) for this version.\ **Release date:** 2026-05-07 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.20.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.20.4) for this version.\ **Release date:** 2026-05-07 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.20.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.2...n8n@2.20.3) for this version.\ **Release date:** 2026-05-07 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.20.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.20.0...n8n@2.20.1) for this version.\ **Release date:** 2026-05-06 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.20.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.19.0...n8n@2.20.0) for this version.\ **Release date:** 2026-05-05 This release contains bug fixes and features. 
### Microsoft Agent 365 Trigger node The [Microsoft Agent 365 Trigger node](../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.microsoftagent365trigger/) lets you build n8n agents that show up as members of your team inside Microsoft 365 apps. Once deployed, your agent gets its own identity in your Microsoft tenant, with an email address you can @mention in Teams, send email to, or grant access to a SharePoint site, just like a teammate. A Microsoft Agent 365 Trigger node with a chat model, memory, and tools across Zendesk, Salesforce, PagerDuty, Datadog, and a sub-workflow. #### How it works You build the agent in n8n using the Microsoft Agent 365 Trigger node: - Add a system prompt and give the agent access to tools, MCP servers, and your existing workflows using [sub-workflows as tools](../flow-logic/subworkflows/). - Set up the agent on the Microsoft side (see the [Microsoft Agent 365 documentation](https://learn.microsoft.com/en-us/microsoft-agent-365/) for the configuration steps). This gives the agent an Entra ID identity with an email address that you can @mention, email, or grant SharePoint permissions to. - Microsoft handles identity, lifecycle, security, and compliance for the agent on their side (via Entra ID, Purview, and Defender). n8n handles workflow-level governance like RBAC, credential management, and execution logs. #### Things to keep in mind - If you already use n8n with Microsoft services through individual nodes (Outlook, Teams, SharePoint, and so on), those workflows continue to work as before. Agent 365 is a new path for teams that want their agents to show up *inside* Microsoft apps and interact like a member of the team. - The node requires a Microsoft 365 tenant. For the full launch story, see the [n8n blog post](https://blog.n8n.io/deploy-n8n-agents-that-show-up-as-members-of-the-team-inside-microsoft-apps/).
### Contributors [jjnyn](https://github.com/jjnyn)\ [haimingZZ](https://github.com/haimingZZ)\ [majiayu000](https://github.com/majiayu000)\ [moseoh](https://github.com/moseoh)\ [Omc12](https://github.com/Omc12)\ [Gulianrdgd](https://github.com/Gulianrdgd)\ [jeanibarz](https://github.com/jeanibarz)\ [uppinote20](https://github.com/uppinote20) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.19.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.19.4...n8n@2.19.5) for this version.\ **Release date:** 2026-05-07 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.19.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.19.3...n8n@2.19.4) for this version.\ **Release date:** 2026-05-06 This release contains bug fixes. ## n8n@2.19.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.19.2...n8n@2.19.3) for this version.\ **Release date:** 2026-05-06 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.19.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.19.1...n8n@2.19.2) for this version.\ **Release date:** 2026-05-01 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.19.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.19.0...n8n@2.19.1) for this version.\ **Release date:** 2026-04-29 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub.
## n8n@2.19.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.0...n8n@2.19.0) for this version.\ **Release date:** 2026-04-28 This release contains bug fixes. ### Contributors [abhishekDeshmukh74](https://github.com/abhishekDeshmukh74)\ [jeanibarz](https://github.com/jeanibarz)\ [yetval](https://github.com/yetval)\ [corazzione](https://github.com/corazzione)\ [kuishou68](https://github.com/kuishou68)\ [MDub3y](https://github.com/MDub3y)\ [aikido-autofix[bot]](https://github.com/apps/aikido-autofix)\ [Vitalini](https://github.com/Vitalini)\ [sinehypernova-0718](https://github.com/sinehypernova-0718)\ [nihaals](https://github.com/nihaals)\ [yuniq-neko](https://github.com/yuniq-neko)\ [patdt](https://github.com/patdt)\ [pedrodicati](https://github.com/pedrodicati)\ [GauthierPLM](https://github.com/GauthierPLM)\ [Gulianrdgd](https://github.com/Gulianrdgd) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.7 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.6...n8n@2.18.7) for this version.\ **Release date:** 2026-05-04 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.6 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.5...n8n@2.18.6) for this version.\ **Release date:** 2026-05-01 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.4...n8n@2.18.5) for this version.\ **Release date:** 2026-04-28 This release contains bug fixes and features. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.3...n8n@2.18.4) for this version.\ **Release date:** 2026-04-27 This release contains bug fixes. 
For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.2...n8n@2.18.3) for this version.\ **Release date:** 2026-04-24 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.1...n8n@2.18.2) for this version.\ **Release date:** 2026-04-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.18.0...n8n@2.18.1) for this version.\ **Release date:** 2026-04-22 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.18.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.0...n8n@2.18.0) for this version.\ **Release date:** 2026-04-21 This release contains bug fixes. ### Contributors [jcampbell](https://github.com/jcampbell)\ [robrown-hubspot](https://github.com/robrown-hubspot)\ [rahulps1000](https://github.com/rahulps1000) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.8 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.7...n8n@2.17.8) for this version.\ **Release date:** 2026-04-27 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.7 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.6...n8n@2.17.7) for this version.\ **Release date:** 2026-04-24 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.17.6 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.5...n8n@2.17.6) for this version.\ **Release date:** 2026-04-23 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.4...n8n@2.17.5) for this version.\ **Release date:** 2026-04-22 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.3...n8n@2.17.4) for this version.\ **Release date:** 2026-04-22 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.2...n8n@2.17.3) for this version.\ **Release date:** 2026-04-20 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.1...n8n@2.17.2) for this version.\ **Release date:** 2026-04-16 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.17.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.17.0...n8n@2.17.1) for this version.\ **Release date:** 2026-04-15 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.16.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.16.0...n8n@2.16.1) for this version.\ **Release date:** 2026-04-15 This release contains bug fixes. ## n8n@2.16.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.15.0...n8n@2.16.0) for this version.\ **Release date:** 2026-04-07 This release contains bug fixes. 
For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.15.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.14.0...n8n@2.15.0) for this version.\ **Release date:** 2026-03-30 This release contains bug fixes. ### Contributors [manusjs](https://github.com/manusjs) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.14.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.14.1...n8n@2.14.2) for this version.\ **Release date:** 2026-03-26 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.14.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.14.0...n8n@2.14.1) for this version.\ **Release date:** 2026-03-25 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.14.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.13.2...n8n@2.14.0) for this version.\ **Release date:** 2026-03-24 This release contains bug fixes. ### Contributors [pkaya89](https://github.com/pkaya89)\ [kesku](https://github.com/kesku) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.13.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.13.3...n8n@2.13.4) for this version.\ **Release date:** 2026-03-26 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.13.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.13.2...n8n@2.13.3) for this version.\ **Release date:** 2026-03-25 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.13.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.13.1...n8n@2.13.2) for this version.\ **Release date:** 2026-03-20 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.13.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.13.0...n8n@2.13.1) for this version.\ **Release date:** 2026-03-18 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.13.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.12.0...n8n@2.13.0) for this version.\ **Release date:** 2026-03-16 This release contains bug fixes and features. ### Visual diff comes to version history Open version history, click **Compare changes**, pick any two versions, and the canvas renders both side by side with changed nodes highlighted. A change count badge on each version helps you spot significant edits at a glance. Visual diff is available on Cloud Pro and above. ### Project-scoped external secrets: full team access (Enterprise) What's new: - Project admins manage their own vault connections from project settings. - Project editors can use project-scoped secrets in credentials once the instance admin enables access. - [Custom roles](../user-management/rbac/custom-roles/) now include five secrets scopes: list, read, create, update, and delete. - Instance admins/owners no longer need to be project members for secrets to resolve. **For instance admins:** go to **Settings > External Secrets** and enable the **System Roles** toggle, or use custom roles for more granular control. **For project admins:** go to **Project Settings > External Secrets** to create and manage project-level connections. Instance-level connections shared with you appear as read-only. Refer to [External secrets](../external-secrets/) for more information. Project-scoped external secrets are available on n8n Enterprise. 
### Folder-based filtering in the push and pull dialog (Enterprise) The push and pull dialogs now include a **Folder** filter alongside Status and Owner. Selecting a folder scopes the list to workflows in that folder and its subfolders, shown as a hierarchical tree with folder-level checkboxes. Text search also matches folder names. Folder-based filtering is available on n8n Enterprise (requires [Environments](../source-control-environments/setup/) configured). ### Contributors [tbigby-kristin](https://github.com/tbigby-kristin)\ [ajuijas](https://github.com/ajuijas)\ [ByteEVM](https://github.com/ByteEVM)\ [mjain](https://github.com/mjain)\ [bram2w](https://github.com/bram2w) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.12.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.12.1...n8n@2.12.2) for this version.\ **Release date:** 2026-03-13 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.12.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.12.0...n8n@2.12.1) for this version.\ **Release date:** 2026-03-11 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.12.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.11.0...n8n@2.12.0) for this version.\ **Release date:** 2026-03-09 This release contains bug fixes and features. ### 1Password is now available as an external secrets provider (Enterprise) n8n now supports 1Password Connect Server as an [external secrets](../external-secrets/) provider, alongside HashiCorp Vault, AWS Secrets Manager, Azure Key Vault, and GCP Secret Manager. Secrets are fetched at runtime and never stored in n8n: 1Password stays the single source of truth. Multi-field items are available as structured sub-paths: `$secrets...`. #### How to connect 1. 
Deploy a 1Password Connect Server and create an access token scoped to the vaults n8n should read. 1. In n8n, go to **Settings > External Secrets**, select **1Password**, and enter your Connect Server URL and token. Requires self-hosted 1Password Connect Server with read-only access. 1Password as an external secrets provider is available on n8n Enterprise. ### Contributors `github-actions[bot]` [amenk](https://github.com/amenk)\ [bpk9](https://github.com/bpk9) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.11.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.11.3...n8n@2.11.4) for this version.\ **Release date:** 2026-03-13 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.11.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.11.2...n8n@2.11.3) for this version.\ **Release date:** 2026-03-13 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.11.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.11.1...n8n@2.11.2) for this version.\ **Release date:** 2026-03-06 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.11.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.11.0...n8n@2.11.1) for this version.\ **Release date:** 2026-03-04 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.11.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.10.0...n8n@2.11.0) for this version.\ **Release date:** 2026-03-02 This release contains bug fixes and features. ### Easier credential setup on Cloud Setting up credentials on n8n Cloud is now much simpler. 
For supported services, just click the **Connect** button, authenticate with the service, and you're ready to go. Skip the manual setup for Slack, Firecrawl, HubSpot, GitHub, Google Calendar, PagerDuty, Apify, and more. *Setting up Slack credentials with managed OAuth.* #### Things to keep in mind - If you prefer to use your own OAuth configuration, you can still switch to manual setup from the auth mode dropdown at any time. - This feature is only available on n8n Cloud, where n8n manages the OAuth apps on your behalf. ### Custom roles: Assignments tab (Enterprise) Instance admins now have a dedicated **Assignments** tab on each [custom role](../user-management/rbac/custom-roles/) showing every user assigned to that role, which project they're in, and a direct link to manage them, so there's no need to navigate project by project. Custom roles are available on n8n Enterprise. ### Project-scoped external secrets: instance admin setup (Enterprise) Instance admins can now create vault connections scoped to a specific project. Secrets from that connection appear only within that project's credentials, not across the instance. Instance-level connections are unaffected. Refer to [External secrets](../external-secrets/) for more information. Project-scoped external secrets are available on n8n Enterprise. ### Workflow execute as a separate permission scope (Enterprise) `workflow:execute` is now a distinct scope in [custom project roles](../user-management/rbac/custom-roles/), separate from editing and publishing. Users can be granted run access without being able to modify the workflow, which is a common compliance requirement for sensitive workflows. This scope is available on n8n Enterprise. ### Contributors [ByteEVM](https://github.com/ByteEVM)\ [onyxraven](https://github.com/onyxraven) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.10.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.10.3...n8n@2.10.4) for this version.\ **Release date:** 2026-03-06 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.10.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.10.2...n8n@2.10.3) for this version.\ **Release date:** 2026-03-04 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.10.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.10.1...n8n@2.10.2) for this version.\ **Release date:** 2026-02-27 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.10.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.10.0...n8n@2.10.1) for this version.\ **Release date:** 2026-02-25 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.10.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.0...n8n@2.10.0) for this version.\ **Release date:** 2026-02-23 This release contains bug fixes and features. ### Multiple connections per external secrets provider You can now set up more than one connection for a single [external secrets](../external-secrets/) provider. The updated UI makes it easier to configure and manage multiple connections under the same provider type. ### Performance improvements for large workflow and credential volumes Improved the reliability of the workflows and credentials listing pages for large-scale instances, reducing loading times by 30% to 80%. ### Contributors [peteawood](https://github.com/peteawood)\ [horiyee](https://github.com/horiyee) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.9.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.3...n8n@2.9.4) for this version.\ **Release date:** 2026-02-25 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.9.4-exp.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.4...n8n@2.9.4-exp.0) for this version.\ **Release date:** 2026-02-27 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.9.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.2...n8n@2.9.3) for this version.\ **Release date:** 2026-02-25 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.9.3-exp.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.3...n8n@2.9.3-exp.0) for this version.\ **Release date:** 2026-02-25 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.9.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.1...n8n@2.9.2) for this version.\ **Release date:** 2026-02-23 For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.9.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.9.0...n8n@2.9.1) for this version.\ **Release date:** 2026-02-18 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.9.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.8.0...n8n@2.9.0) for this version.\ **Release date:** 2026-02-16 This release contains bug fixes. ### Contributors [ByteEVM](https://github.com/ByteEVM) [LudwigGerdes](https://github.com/LudwigGerdes) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.8.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.8.3...n8n@2.8.4) for this version.\ **Release date:** 2026-02-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.8.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.8.2...n8n@2.8.3) for this version.\ **Release date:** 2026-02-13 This release contains a bug fix and features. ### Personal space policies (Enterprise) A new **Security & policies** settings section provides a central place for enforcing security requirements on your instance. In addition to the existing two-factor authentication enforcement, admins can now control what users can do in their personal spaces. Available policies include: - **Sharing**: control whether users can share workflows and credentials from their personal space. - **Workflow publishing**: control whether users can publish workflows from their personal space. This release builds on the recent updates to the permissions model, including [custom project roles](../user-management/rbac/custom-roles/), to better support policy-driven governance. Personal space policies are available on n8n Enterprise. *The new Security & policies settings section.* ### Custom roles: improved discoverability and permission visibility (Enterprise) The project role selector now separates built-in system roles and custom roles into distinct sections, making it easier to find and choose the right role. Hovering over a role shows a summary of its configured permissions, with an option to view the full permission details. *System roles and custom roles are now displayed in separate sections.* For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.8.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.8.1...n8n@2.8.2) for this version.\ **Release date:** 2026-02-12 This release contains a bug fix. 
For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.8.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.8.0...n8n@2.8.1) for this version.\ **Release date:** 2026-02-11 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.7.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.7.4...n8n@2.7.5) for this version.\ **Release date:** 2026-02-13 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.7.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.7.3...n8n@2.7.4) for this version.\ **Release date:** 2026-02-11 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.7.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.7.2...n8n@2.7.3) for this version.\ **Release date:** 2026-02-09 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.6.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.6.3...n8n@2.6.4) for this version.\ **Release date:** 2026-02-06 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.8.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.7.0...n8n@2.8.0) for this version.\ **Release date:** 2026-02-09 This release contains bug fixes and features. ### Stronger external secrets validation (Enterprise) n8n now verifies that the current user has access to the referenced vaults before allowing a credential that uses **$secrets...** expressions to be saved. If access is missing, the save operation fails. This prevents secret values from being exposed through guessed secret paths. 
### Improved API auditability (Enterprise) API endpoints have been expanded to provide clearer visibility into project membership and credentials: - `GET /projects/{projectId}/users` returns all members of a project including their assigned role. - `GET /credentials` returns a paginated list of all credentials across the instance, including the project they belong to. This makes it easier to audit who has access to which projects and credentials without manually reviewing each one in the UI. ### More granular workflow permissions Workflow publishing permissions for [custom roles](../user-management/rbac/custom-roles/) have been split into two separate scopes: **workflow:publish** and **workflow:unpublish**. This enables more precise access control in governance scenarios where unpublishing needs to be managed independently. ### Performance and stability improvements - Improved performance for instances with very large user counts, reducing slowdowns caused by user-related operations. - Fixed a high-memory issue that could cause crashes during Source Control push flows in large deployments with many workflows and credentials. ### Minor fixes - Canvas: improved node repositioning on insertion to reduce overlaps and spacing issues. - Log streaming: fixed proxy configuration handling for webhook destinations so requests work reliably when a proxy is configured. ### Deprecated nodes #### Motorhead node The [Motorhead](../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymotorhead/) memory node is now deprecated because the Motorhead project is no longer maintained. The node is hidden from the nodes panel for new selections, but existing workflows using this node will continue to work. 
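The expanded audit endpoints above pair naturally with a small script. This is a sketch, not an official client: it assumes the public API's `X-N8N-API-KEY` header and cursor-based pagination (a `nextCursor` field in each response), and the instance URL and response fields shown are illustrative, so verify them against your API version before relying on it.

```python
import json
from urllib import parse, request

API_BASE = "https://n8n.example.com/api/v1"  # hypothetical instance URL
API_KEY = "replace-with-your-api-key"

def api_get(path, cursor=None):
    """GET a public-API path and return the parsed JSON body."""
    url = f"{API_BASE}{path}"
    if cursor:
        url += "?" + parse.urlencode({"cursor": cursor})
    req = request.Request(url, headers={"X-N8N-API-KEY": API_KEY})
    with request.urlopen(req) as resp:
        return json.load(resp)

def collect(fetch_page):
    """Drain a cursor-paginated endpoint. fetch_page(cursor) must return a
    dict with a "data" list and, while more pages remain, a "nextCursor"."""
    items, cursor = [], None
    while True:
        page = fetch_page(cursor)
        items.extend(page["data"])
        cursor = page.get("nextCursor")
        if not cursor:
            return items

# Example audit pass (field names are assumptions for illustration):
# creds = collect(lambda c: api_get("/credentials", c))  # credentials with their project
# members = api_get("/projects/<projectId>/users")       # project members with roles
```

`collect` is deliberately separated from the HTTP layer, so the pagination logic can be reused or unit-tested against any endpoint that follows the same cursor convention.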
### Contributors [AmitAnveri](https://github.com/AmitAnveri)\ [derandreas-dt](https://github.com/derandreas-dt)\ [ongdisheng](https://github.com/ongdisheng)\ [vCaisim](https://github.com/vCaisim) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.7.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.7.1...n8n@2.7.2) for this version.\ **Release date:** 2026-02-04 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.7.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.7.0...n8n@2.7.1) for this version.\ **Release date:** 2026-02-03 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.7.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.6.0...n8n@2.7.0) for this version.\ **Release date:** 2026-02-02 This release contains bug fixes. ### Contributors [LostInBrittany](https://github.com/LostInBrittany)\ [adriencohen](https://github.com/adriencohen)\ [ibex088](https://github.com/ibex088)\ [rutgere-indeed](https://github.com/rutgere-indeed) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.7-exp.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.7...n8n@2.4.7-exp.0) for this version.\ **Release date:** 2026-01-29 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.6.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.6.2...n8n@2.6.3) for this version.\ **Release date:** 2026-02-02 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.4.8 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.7...n8n@2.4.8) for this version.\ **Release date:** 2026-01-29 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.6.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.6.1...n8n@2.6.2) for this version.\ **Release date:** 2026-01-28 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.7 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.6...n8n@2.4.7) for this version.\ **Release date:** 2026-01-28 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.5.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.5.1...n8n@2.5.2) for this version.\ **Release date:** 2026-01-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.6 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.5...n8n@2.4.6) for this version.\ **Release date:** 2026-01-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.4...n8n@2.4.5) for this version.\ **Release date:** 2026-01-22 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.6.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.5.0...n8n@2.6.0) for this version.\ **Release date:** 2026-01-26 This release contains bug fixes. ### Human-in-the-loop for AI tool calls You can now require explicit human approval before an AI Agent executes specific tools. 
Human-in-the-loop (HITL) for AI tool calls enforces review directly at the tool level. A gated tool cannot execute unless a human explicitly approves the action, giving you deterministic control over high-impact operations like deleting records, writing to production systems, or sending sensitive emails. This removes the uncertainty of prompt-based safeguards and insulates you from probabilistic agent behavior. Because the review step is implemented using standard n8n integrations, approvals are not limited to a single user or interface. Decisions can be routed across people and systems, enforcing approval from the right person using the channels they already work in. #### What you can do - Require explicit human approval for any tool the agent can call, including the MCP Client tool or sub-workflows exposed as tools - Apply approval selectively, so some tools execute autonomously while others require review - Route approvals across users and channels (for example, send a Slack-initiated action for approval by another user via email) - Add safety checks for high-impact or potentially destructive operations without complex workflow patterns or brittle prompt logic #### How to use it Start with a workflow where an AI Agent is connected to one or more tools. 1. On the connection from the AI Agent to the tool you want to gate, click the **+** icon and choose **Add human review step** (hover over the icon to see the tooltip). 1. The **Tools panel** opens with nodes you can use to handle the review step. Select the one you want to use. 1. Configure the approval step in the added node’s parameters. Depending on the integration, you can define the approver, the message they receive, the available actions (for example, approve or deny), and the associated buttons. [Watch: human review for AI tool calls](/_video/release-notes/HITLToolCalls.webm) Get precise control over where human judgment is required, without limiting what your agent can do. Learn more in the [human-in-the-loop for tool calls documentation](../advanced-ai/human-in-the-loop-tools/). 
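Conceptually, the gate sits between the agent and the tool: the call proceeds only once a reviewer approves it. The sketch below is not n8n's implementation; it just models the guarantee in plain Python, with `approve` standing in for whatever review channel (Slack, email, Chat) you configure.

```python
def gate(tool, approve):
    """Wrap a tool so it runs only after explicit approval.

    `tool` is any callable; `approve` stands in for the human review step
    (in n8n this would be an approval node on the gated connection).
    A denied call never reaches the tool, which is the whole point:
    the safeguard is deterministic, not prompt-based.
    """
    def gated(*args, **kwargs):
        if not approve(args, kwargs):
            raise PermissionError("tool call denied by human reviewer")
        return tool(*args, **kwargs)
    return gated

# Gate only the destructive tool; read-only tools stay autonomous.
delete_record = gate(lambda rec_id: f"deleted {rec_id}",
                     approve=lambda args, kwargs: False)  # reviewer says no
```

Only the gated tool pays the review cost; ungated tools keep executing autonomously, mirroring the selective approval described above.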
### Contributors [ibex088](https://github.com/ibex088)\ [johnlinp](https://github.com/johnlinp)\ [loganaden](https://github.com/loganaden)\ [Jameswlepage](https://github.com/Jameswlepage)\ [cesars-gh](https://github.com/cesars-gh)\ [antman1p](https://github.com/antman1p) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.5.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.0...n8n@2.5.0) for this version.\ **Release date:** 2026-01-20 This release contains bug fixes. ### Chat node: human-in-the-loop actions The **Chat** node now includes two new Actions for human-in-the-loop interactions in agentic workflows: - **Send a message**: send a message to the user and continue the workflow - **Send a message and wait for response**: send a message and pause execution until the user replies. Users can respond with free text in the Chat or by clicking inline approval buttons, as defined in the node’s configuration. These Actions can be used as deterministic workflow steps or as tools for an **AI Agent**, enabling multi-turn human interaction within a single execution when using the **Chat Trigger**. When used as an agent tool, the agent can ask for clarification before proceeding, helping it better interpret user intent and follow instructions. Agents can also send updates during long-running workflows using these Actions. #### How to 1. Trigger your workflow with the **Chat Trigger** node. In the node parameters, add the *Response Mode* option and set it to *Using Response Nodes*. 1. Add a **Chat** node later in the workflow, or add it as a tool for an **AI Agent**. Select one of the following operations: *Send a message* or *Send a message and wait for response*. #### Keep in mind - If you want an AI Agent to choose between sending a message or waiting for input, add two **Chat** tool nodes, one for each action. 
- For AI Agents triggered by the **Chat Trigger** node, adding **Send a message and wait for response** is recommended so the agent can request clarification when needed. Learn more in the [Chat node documentation](https://docs.n8n.io/integrations/builtin/core-nodes/n8n-nodes-langchain.chat/#operation). [Watch: Chat human-in-the-loop actions](/_video/release-notes/ChatHITL.webm) ### Contributors [AbdulTawabJuly](https://github.com/AbdulTawabJuly)\ [ByteEVM](https://github.com/ByteEVM)\ [sudarshan12s](https://github.com/sudarshan12s)\ [KaanAydinli](https://github.com/KaanAydinli) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.2...n8n@2.4.3) for this version.\ **Release date:** 2026-01-15 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.6 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.5...n8n@2.3.6) for this version.\ **Release date:** 2026-01-16 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.3...n8n@2.4.4) for this version.\ **Release date:** 2026-01-16 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.4...n8n@2.3.5) for this version.\ **Release date:** 2026-01-14 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.4.0...n8n@2.4.1) for this version.\ **Release date:** 2026-01-13 This release contains a bug fix. 
For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.1...n8n@2.3.2) for this version.\ **Release date:** 2026-01-09 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.4.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.0...n8n@2.4.0) for this version.\ **Release date:** 2026-01-12 This release contains bug fixes and features. ### TLS support for Syslog log streaming The Syslog log streaming destination now supports TLS over TCP for encrypted connections. This makes log streaming compatible with a broader range of enterprise SIEM and observability platforms that require encrypted transport. ### Update credentials via API n8n's public API now supports updating existing credentials by ID via a new *PATCH /credentials/:id* endpoint. Previously, credentials could only be created through the API, so any changes required deleting and recreating the credential. When updating, you can either replace all credential data at once (useful for bulk updates) or set *isPartialData: true* to merge changes with existing data. This is ideal for automated secret rotation or fixing individual values without losing your configuration. ### Contributors [JonLaliberte](https://github.com/JonLaliberte)\ [davidpanic](https://github.com/davidpanic)\ [TomTom101](https://github.com/TomTom101)\ [garritfra](https://github.com/garritfra)\ [maximepvrt](https://github.com/maximepvrt) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.2...n8n@2.3.3) for this version.\ **Release date:** 2026-01-13 This release contains a bug fix. 
For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.3...n8n@2.3.4) for this version.\ **Release date:** 2026-01-13 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.3.0...n8n@2.3.1) for this version.\ **Release date:** 2026-01-07 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.2.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.2.4...n8n@2.2.5) for this version.\ **Release date:** 2026-01-08 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.2.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.2.3...n8n@2.2.4) for this version.\ **Release date:** 2026-01-06 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.3.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.2.0...n8n@2.3.0) for this version.\ **Release date:** 2026-01-05 This release contains bug fixes. ### Contributors [Shashwat-06](https://github.com/Shashwat-06)\ [ByteEVM](https://github.com/ByteEVM)\ [mithredate](https://github.com/mithredate)\ [belyas](https://github.com/belyas)\ [saurabhssonkar](https://github.com/saurabhssonkar) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.2.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.2.2...n8n@2.2.3) for this version.\ **Release date:** 2026-01-05 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.2.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.2.1...n8n@2.2.2) for this version.\ **Release date:** 2025-12-30 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.1.5 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.1.4...n8n@2.1.5) for this version.\ **Release date:** 2025-12-30 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.1.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.1.3...n8n@2.1.4) for this version.\ **Release date:** 2025-12-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.1.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.1.2...n8n@2.1.3) for this version.\ **Release date:** 2025-12-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.2.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.2.0...n8n@2.2.1) for this version.\ **Release date:** 2025-12-23 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.2.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.1.0...n8n@2.2.0) for this version. **Release date:** 2025-12-22 This release contains bug fixes. ### More granular workflow permissions within Custom Project Roles (Enterprise) Custom Project Roles allow you to define fine-grained permissions at the project level. With this release, workflow permissions have been further refined by separating workflow editing from workflow publishing. This change makes it easier to align access controls with internal processes where building workflows and publishing them are handled by different users or teams. 
*Custom Project Roles.* ### Log streaming: More audit events for improved observability Log streaming now includes additional audit events to improve visibility into operational and security-relevant changes. This update adds events for manual workflow cancellations and workflow activation/deactivation (publish/unpublish), variable lifecycle events (create/update/delete), and user management actions (including enabling/disabling 2FA). Workflow settings updates are also logged with the specific parameters that changed (for example, selecting a new error workflow), instead of a generic “updated” event. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.1.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.1.1...n8n@2.1.2) for this version.\ **Release date:** 2025-12-22 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.1.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.1.0...n8n@2.1.1) for this version.\ **Release date:** 2025-12-17 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.2...n8n@2.0.3) for this version.\ **Release date:** 2025-12-17 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.1.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0...n8n@2.1.0) for this version.\ **Release date:** 2025-12-16 This release contains bug fixes and features. ### Time Saved node Previously, teams could only track a single fixed time saved value for each workflow regardless of which path an execution takes. The new Time Saved node enables more precise time savings calculations where different execution paths save different amounts of time. 
With this release you can now: - **Choose fixed value or dynamic time tracking**: Use fixed time saved for simple workflows, or use one or many time saved nodes to calculate savings dynamically based on the actual execution path taken - **Configure per-item calculations**: When using the Time Saved node, choose whether to calculate time saved once for all items or multiply by the number of items processed The new Time Saved node provides increased accuracy for complex workflows where different execution paths save different amounts of time. n8n automatically totals the time from all Time Saved nodes executed during each workflow run and reports it within the insights dashboard. ### Contributors [Akcthecoder200](https://github.com/Akcthecoder200)\ [rishiraj-58](https://github.com/rishiraj-58)\ [rlafferty](https://github.com/rlafferty) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.1...n8n@2.0.2) for this version.\ **Release date:** 2025-12-12 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0...n8n@2.0.1) for this version.\ **Release date:** 2025-12-10 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0-rc.3...n8n@2.0.0) for this version.\ **Release date:** 2025-12-05 ### Major Version Change n8n 2.0.0 is a hardening release, not a feature release. It strengthens n8n's position as an enterprise-grade platform with secure-by-default execution, removal of legacy options that caused edge-case bugs, and better performance under load. The goal is a more predictable foundation you can rely on for mission-critical workflows. 
This release is currently in **beta**. There's no urgency to upgrade immediately: take time to review the breaking changes and assess your workflows using the migration tool before upgrading. For the full story behind 2.0, read our [announcement blog post](https://blog.n8n.io/introducing-n8n-2-0/). ### Breaking changes Version 2.0 includes breaking changes across security defaults, data handling, and configuration. Key changes include: - Task runners enabled by default (Code node executions now run in isolated environments) - Environment variable access blocked from Code nodes by default - ExecuteCommand and LocalFileTrigger nodes disabled by default - In-memory binary data mode removed Review the complete list and migration guidance in the [v2.0 breaking changes docs](https://docs.n8n.io/2-0-breaking-changes/). ### Before you upgrade Use the **Migration Report** tool to identify workflow-level and instance-level issues that need attention before upgrading. See the [v2.0 migration tool docs](https://docs.n8n.io/migration-tool-v2/) for details. ### Product updates **Publish / Save workflow paradigm** n8n 2.0 introduces a safer approach to updating live workflows. The `Save` button now preserves your edits without changing production. A new `Publish` button lets you explicitly push changes live when ready. See [Publish workflows](https://docs.n8n.io/workflows/publish/) for details. **Canvas and navigation improvements** Subtle refinements to the workflow editor canvas and reorganized sidebar navigation. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.0-rc.4 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0-rc.3...n8n@2.0.0-rc.4) for this version.\ **Release date:** 2025-12-05 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. 
## n8n@2.0.0-rc.3 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0-rc.2...n8n@2.0.0-rc.3) for this version.\ **Release date:** 2025-12-04 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.0-rc.2 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0-rc.1...n8n@2.0.0-rc.2) for this version.\ **Release date:** 2025-12-04 This release contains a bug fix. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.0-rc.1 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@2.0.0-rc.0...n8n@2.0.0-rc.1) for this version.\ **Release date:** 2025-12-04 This release contains bug fixes. For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. ## n8n@2.0.0-rc.0 View the [commits](https://github.com/n8n-io/n8n/compare/n8n@1.122.0...n8n@2.0.0-rc.0) for this version.\ **Release date:** 2025-12-02 This release contains bug fixes. ### Contributors [farzad528](https://github.com/farzad528) For full release details, refer to [Releases](https://github.com/n8n-io/n8n/releases) on GitHub. Older versions You can find the release notes for older versions of n8n: [1.x](1-x/) and [0.x](0-x/) # Security settings Feature availability Security settings are available on Business and Enterprise plans. Some settings require specific license features. Settings that aren't available on your plan display an **Upgrade** badge. Security settings let you manage instance-wide security policies. You can enforce two-factor authentication for all users and control what users can do in their personal spaces. To access security settings, navigate to **Settings** > **Security**. ## Enforce two-factor authentication You can require all users on your instance to set up two-factor authentication (2FA) when they sign in with email and password. 
Applies to email and password logins only 2FA enforcement applies to users authenticating with email and password. Users signing in through SSO (SAML or OIDC) aren't affected by this setting. To enforce 2FA: 1. Navigate to **Settings** > **Security**. 1. In the **Enforce two-factor authentication** section, toggle the switch on. When you enable this setting: - All users must set up 2FA before they can continue using the instance. - Users who haven't configured 2FA yet are prompted to do so on their next sign-in. To stop enforcing 2FA, toggle the switch off. Users who already set up 2FA keep it enabled but new users are no longer required to configure it. Refer to [Two-factor authentication](../user-management/two-factor-auth/) for more information on how individual users can set up 2FA. ## Personal space policies Personal space policies let instance admins control whether users can share and publish workflows and credentials from their personal spaces. ### Sharing workflows and credentials Controls whether users can share workflows and credentials from their personal space with other users or projects. To manage sharing: 1. Navigate to **Settings** > **Security**. 1. In the **Personal Space** section, find **Sharing workflows and credentials**. 1. Toggle the switch to enable or disable sharing. When you disable sharing: - Existing shares remain in place. The setting only affects new sharing actions. - The number of currently shared workflows and credentials is displayed below the toggle. ### Publishing workflows Controls whether users can publish workflows from their personal space to make them available for execution. To manage publishing: 1. Navigate to **Settings** > **Security**. 1. In the **Personal Space** section, find **Publishing workflows**. 1. Toggle the switch to enable or disable publishing. When you disable publishing: - Currently published workflows remain published. The setting only affects new publish actions. 
- The number of currently published personal workflows is displayed below the toggle. ## Configure security policy with environment variables You can also manage security policy settings from environment variables instead of through the UI. Available from n8n v2.18.0. Set `N8N_SECURITY_POLICY_MANAGED_BY_ENV` to `true` and provide the variables below. See [Manage instance settings using environment variables](../hosting/configuration/settings-env-vars/) for how the activation pattern works. When `N8N_SECURITY_POLICY_MANAGED_BY_ENV` is `true`, the **Enforce two-factor authentication** and **Personal Space** toggles on this page become read-only. | Variable | Type | Default | Description | | --------------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_SECURITY_POLICY_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage the security policy from environment variables. When `true`, n8n applies the security policy variables on every startup and locks the matching UI controls. | | `N8N_MFA_ENFORCED_ENABLED` | Boolean | `false` | Whether to enforce two-factor authentication for all users (`true`) or not (`false`). | | `N8N_PERSONAL_SPACE_PUBLISHING_ENABLED` | Boolean | `true` | Whether users can publish from their personal space (`true`) or not (`false`). | | `N8N_PERSONAL_SPACE_SHARING_ENABLED` | Boolean | `true` | Whether users can share resources from their personal space (`true`) or not (`false`). | # Sustainable Use License Proprietary licenses for Enterprise Proprietary licenses are available for enterprise customers. [Get in touch](mailto:license@n8n.io) for more information. 
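As a sketch of how the variables in the table above fit together on a self-hosted instance, the whole security policy can be pinned from the environment. The values shown are illustrative choices, not recommendations:

```shell
# Hand security policy management over to the environment.
# While this is true, the matching toggles under Settings > Security are read-only.
export N8N_SECURITY_POLICY_MANAGED_BY_ENV=true

# Require two-factor authentication for all email/password users.
export N8N_MFA_ENFORCED_ENABLED=true

# Lock down personal spaces: no publishing or sharing of personal resources.
export N8N_PERSONAL_SPACE_PUBLISHING_ENABLED=false
export N8N_PERSONAL_SPACE_SHARING_ENABLED=false
```

Because n8n applies these variables on every startup, the policy can't drift through the UI between restarts. The same keys can equally be set in a container manifest or `.env` file.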
n8n's [Sustainable Use License](https://github.com/n8n-io/n8n/blob/master/LICENSE.md) and [n8n Enterprise License](https://github.com/n8n-io/n8n/blob/master/LICENSE_EE.md) are based on the [fair-code](https://faircode.io/) model. ## License FAQs ### What license do you use? n8n uses the [Sustainable Use License](https://github.com/n8n-io/n8n/blob/master/LICENSE.md) and [n8n Enterprise License](https://github.com/n8n-io/n8n/blob/master/LICENSE_EE.md). These licenses are based on the [fair-code](https://faircode.io/) model. ### What source code is covered by the Sustainable Use License? The [Sustainable Use License](https://github.com/n8n-io/n8n/blob/master/LICENSE.md) applies to all our source code hosted in our [main GitHub repository](https://github.com/n8n-io/n8n) except: - Content of branches other than master. - Source code files that contain `.ee.` in their file name. These are licensed under the [n8n Enterprise License](https://github.com/n8n-io/n8n/blob/master/LICENSE_EE.md). ### What is the Sustainable Use License? The Sustainable Use License is a fair-code software license created by n8n in 2022. You can read more about why we did this [here](#why-did-you-create-a-license). The license allows you to freely use, modify, create derivative works, and redistribute the software, with three limitations: - You may use or modify the software only for your own internal business purposes or for non-commercial or personal use. - You may distribute the software or provide it to others only if you do so free of charge for non-commercial purposes. - You may not alter, remove, or obscure any licensing, copyright, or other notices of the licensor in the software. Any use of the licensor's trademarks is subject to applicable law. We encourage anyone who wants to use the Sustainable Use License to do so. If you are building something out in the open, it makes sense to think about licensing early to avoid problems later. 
Contact us at [license@n8n.io](mailto:license@n8n.io) if you would like to ask any questions about it. ### What is and isn't allowed under the license in the context of n8n's product? Our license restricts use to "internal business purposes". In practice this means all use is allowed unless you are selling a product, service, or module in which the value derives entirely or substantially from n8n functionality. Here are some examples that wouldn't be allowed: - White-labeling n8n and offering it to your customers for money. - Hosting n8n and charging people money to access it. All of the following examples are allowed under our license: - Using n8n to sync the data you control as a company, for example from a CRM to an internal database. - Creating an n8n node for your product or any other integration between your product and n8n. - Providing consulting services related to n8n, for example building workflows, custom features closely connected to n8n, or code that gets executed by n8n. - Supporting n8n, for example by setting it up or maintaining it on an internal company server. #### Can I use n8n to act as the back-end to power a feature in my app? Usually yes, as long as the back-end process doesn't use users' own credentials to access their data. Here are two examples to clarify: ##### Example 1: Sync ACME app with HubSpot Bob sets up n8n to collect a user's HubSpot credentials to sync data in the ACME app with data in HubSpot. **NOT ALLOWED** under the Sustainable Use License. This use case collects the user's own HubSpot credentials to pull information to feed into the ACME app. ##### Example 2: Embed AI chatbot in ACME app Bob sets up n8n to embed an AI chatbot within the ACME app. The AI chatbot's credentials in n8n use Bob's company credentials. ACME app end-users only enter their questions or queries to the chatbot. **ALLOWED** under the Sustainable Use License. No user credentials are being collected. 
### What if I want to use n8n for something that's not permitted by the license? You must sign a separate commercial agreement with us. We actively encourage software creators to build with n8n in their products; we just ask them to sign an agreement laying out the terms of use, and the fees owed to n8n for using the product in this way. [Contact us](mailto:license@n8n.io) to learn more. If you are unsure whether the use case you have in mind constitutes an internal business purpose or not, take a look at [the examples](#what-is-and-isnt-allowed-under-the-license-in-the-context-of-n8ns-product), and if you're still unclear, email us at [license@n8n.io](mailto:license@n8n.io). ### Why don't you use an open source license? n8n's mission is to give everyone who uses a computer technical superpowers. We've decided the best way for us to achieve this mission is to make n8n as widely and freely available as possible for users, while ensuring we can build a sustainable, viable business. By making our product free to use, easy to distribute, and source-available, we help everyone access the product. By operating as a business, we can continue to release features, fix bugs, and provide reliable software at scale long-term. ### Why did you create a license? Creating a license was our least favorite option. We only went down this path after reviewing the possible existing licenses and deciding nothing fit our specific needs. There are two ways in which we try to mitigate the pain and friction of using a proprietary license: 1. By using plain English, and keeping it as short as possible. 1. By promoting [fair-code](https://faircode.io/) with the goal of making it a well-known umbrella term to describe software models like ours. Our goals when we created the Sustainable Use License were: 1. To be as permissive as possible. 1. To safeguard our ability to build a business. 1. To be as clear as possible about what use is and isn't permitted. 
### My company has a policy against using code that restricts commercial use – can I still use n8n? Provided you are using n8n for internal business purposes, and not making n8n available to your customers for them to connect their accounts and build workflows, you should be able to use n8n. If you are unsure whether the use case you have in mind constitutes an internal business purpose or not, take a look at [the examples](#what-is-and-isnt-allowed-under-the-license-in-the-context-of-n8ns-product), and if you're still unclear, email us at [license@n8n.io](mailto:license@n8n.io). ### What happens to the code I contribute to n8n in light of the Sustainable Use License? Any code you contribute on GitHub is subject to GitHub's [terms of use](https://docs.github.com/en/site-policy/github-terms/github-terms-of-service#d-user-generated-content). In simple terms, this means you own, and are responsible for, anything you contribute, but that you grant other GitHub users certain rights to use this code. When you contribute code to a repository containing notice of a license, you license the code under the same terms. n8n asks every contributor to sign our [Contributor License Agreement](https://github.com/n8n-io/n8n/blob/master/CONTRIBUTOR_LICENSE_AGREEMENT.md). In addition to the above, this gives n8n the ability to change its license without seeking additional permission. It also means you aren't liable for your contributions (e.g. in case they cause damage to someone else's business). It's easy to get started contributing code to n8n [here](https://github.com/n8n-io), and we've listed broader ways of participating in our community [here](../help-community/contributing/). ### Why did you switch to the Sustainable Use License from your previous license arrangement (Apache 2.0 with Commons Clause)? n8n was licensed under Apache 2.0 with Commons Clause until 17 March 2022. 
Commons Clause was initiated by various software companies wanting to protect their rights against cloud providers. The concept involved adding a commercial restriction on top of an existing open source license. However, the use of the Commons Clause as an additional condition to an open source license, as well as the use of wording that's open to interpretation, created some confusion and uncertainty regarding the terms of use. The Commons Clause also restricted people's ability to offer consulting and support services: we realized these services are critical in enabling people to get value from n8n, so we wanted to remove this restriction. We created the Sustainable Use License to be more permissive and clearer about what use is allowed, while continuing to ensure n8n gets the funding needed to build and improve our product. ### What are the main differences between the Sustainable Use License and your previous license arrangement (Apache 2.0 with Commons Clause)? There are two main differences between the Sustainable Use License and our previous license arrangement. The first is that we have tightened the definition of how you can use the software. Previously, the Commons Clause restricted users' ability to "sell" the software; we have redefined this to restrict use to internal business purposes. The second difference is that our previous license restricted people's ability to charge fees for consulting or support services related to the software: we have lifted that restriction altogether. That means you are now free to offer commercial consulting or support services (e.g. building n8n workflows) without the need for a separate license agreement with us. If you are interested in joining our community of n8n experts providing these services, you can learn more here. ### Is n8n open source? 
Although n8n's source code is available under the Sustainable Use License, according to the [Open Source Initiative](https://opensource.org/) (OSI), open source licenses can't include limitations on use, so we do not call ourselves open source. In practice, n8n offers most users many of the same benefits as OSI-approved open source. We coined the term ['fair-code'](https://faircode.io/) as a way of describing our licensing model, and the model of other companies who are source-available, but restrict commercial use of their source code. ### What is fair-code, and how does the Sustainable Use License relate to it? Fair-code isn't a software license. It describes a software model where software: - Is generally free to use and can be distributed by anybody. - Has its source code openly available. - Can be extended by anybody in public and private communities. - Is commercially restricted by its authors. The Sustainable Use License is a fair-code license. You can read more about it and see other examples of fair-code licenses [here](https://faircode.io/). We're always excited to talk about software licenses, fair-code, and other principles around sharing code with interested parties. To get in touch to chat, email [license@n8n.io](mailto:license@n8n.io). ### Can I use n8n's Sustainable Use License for my own project? Yes! We're excited to see more software use the Sustainable Use License. We'd love to hear about your project if you're using our license: [license@n8n.io](mailto:license@n8n.io). # Video courses n8n provides two video courses on YouTube. For support, join the [Forum](https://community.n8n.io/). 
## Beginner The [Beginner](https://www.youtube.com/playlist?list=PLlET0GsrLUL59YbxstZE71WszP3pVnZfI) course covers the basics of n8n: - [Introduction and workflows](https://youtu.be/4BVTkqbn_tY?si=g2A5eD8kAoia5k6y) - [APIs and Webhooks](https://youtu.be/y_cpFMF1pzk?si=zi3wM4W7nx8Jkcw3) - [Nodes](https://youtu.be/rCPXBkeBWCQ?si=-T2iUsydwS5ym6yI) - [Data in n8n](https://youtu.be/2YfWuNziPE4?si=4jB-fubG1_T0HXYx) - [Core workflow concepts](https://youtu.be/kkrA7tGHYNo?si=mLVbuV98ohL5YVnm) - [Useful nodes](https://youtu.be/Rmi-ckbMOQE?si=H_dF77uf5KJU7RtH) - [Error handling](https://youtu.be/XEUVl3bbMhI?si=nUyaME5kyxe6daGO) - [Debugging](https://youtu.be/Gxe_RfCRH-o?si=F-pAviLTIeL3-X13) - [Collaboration](https://youtu.be/pI0W-0Qcwmo?si=X7sALFXo2e-cY9FQ) ## Advanced The [Advanced](https://www.youtube.com/playlist?list=PLlET0GsrLUL5bxmx5c1H1Ms_OtOPYZIEG) course covers more complex workflows, more technical nodes, and enterprise features: - [Introduction and complex data flows](https://youtu.be/TFTLMQLozCI?si=vX0ooIH1RmbsgAkC) - [Advanced technical nodes](https://youtu.be/JM4jqYs4Fxo?si=YSNMeSay3C29C8HS) - [Pinning and editing output data](https://youtu.be/zcNB8L4_9mA?si=LZJ9DlYDQQxL7eeP) - [Sub-workflows](https://youtu.be/xr05Ie_Hkyg?si=rqqP8llttZPBjBeD) - [Error workflows](https://youtu.be/77Ewdaby47M?si=6YRlC4nMgG4hVQPV) - [Building a full example](https://youtu.be/wOKLEfeJLVE?si=YMW5t-PzPq7QKbPY) - [Handling files](https://youtu.be/2RAZYNigqOY?si=9x4vLX2Qo08xx8vC) - [Enterprise features](https://youtu.be/fXEubzmVJ_E?si=aK9_fI9tkF6F5CtB) # Advanced AI Build AI functionality using n8n: from creating your own chat bot, to using AI to process documents and data from other sources. Feature availability This feature is available on Cloud and self-hosted n8n, in version 1.19.4 and above. - **Get started** Work through the short tutorial to learn the basics of building AI workflows in n8n. 
[Tutorial](intro-tutorial/) - **Use a Starter Kit** Try n8n's Self-hosted AI Starter Kit to quickly start building AI workflows. [Self-hosted AI Starter Kit](../hosting/starter-kits/ai-starter-kit/) - **Explore examples and concepts** Browse examples and workflow templates to help you build. Includes explanations of important AI concepts. [Examples](examples/introduction/) - **How n8n uses LangChain** Learn more about how n8n builds on LangChain. [LangChain in n8n](langchain/overview/) - **Browse AI templates** Explore a wide range of AI workflow templates on the n8n website. [AI workflows on n8n.io](https://n8n.io/workflows/?categories=25) ## Related resources Related documentation and tools. ### Node types This feature uses [Cluster nodes](../integrations/builtin/cluster-nodes/): groups of [root](../integrations/builtin/cluster-nodes/root-nodes/) and [sub](../integrations/builtin/cluster-nodes/sub-nodes/) nodes that work together. [Cluster nodes](../glossary/#cluster-node-n8n) are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a [root node](../glossary/#root-node-n8n) and one or more [sub-nodes](../glossary/#sub-node-n8n) that extend the functionality of the node. ### Workflow templates You can browse [workflow templates](../glossary/#template-n8n) in-app or on the n8n website [Workflows](https://n8n.io/workflows/?categories=25,26) page. Refer to [Templates](../workflows/templates/) for information on accessing templates in-app. ### Chat trigger Use the [n8n Chat Trigger](../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) to trigger a workflow based on chat interactions. ### Chatbot widget n8n provides a chatbot widget that you can use as a frontend for AI-powered chat workflows. Refer to the [@n8n/chat npm page](https://www.npmjs.com/package/@n8n/chat) for usage information. 
# AI Workflow Builder AI Workflow Builder enables you to create, refine, and debug workflows using natural language descriptions of your goals. It handles the entire workflow construction process, including node selection, placement, and configuration, thereby reducing the time required to build functional workflows. For details of pricing and availability of AI Workflow Builder, see [n8n Plans and Pricing](https://n8n.io/pricing/). ## Working with the builder 1. **Describe your workflow:** Either select an example prompt or describe your requirements in natural language. 1. **Monitor the build:** The builder provides real-time feedback through several phases. 1. **Review and refine the generated workflow:** Review required credentials and other parameters. Refine the workflow using prompts. ### Commands you can run in the builder - `/clear`: Clears the context for the LLM and lets you start from scratch ## Understanding credits ### How credits work Each time you send a message to the builder asking it to create or modify a workflow, that counts as one **interaction**, which is worth one **credit**. ✅ **Counts as an interaction** - Sending a message to create a new workflow - Asking the builder to modify an existing workflow - Clicking the **Execute and refine** button in the builder window after a workflow is built ❌ **Does NOT count as an interaction** - Messages that fail or produce generation errors - Requests you manually stop by clicking the stop button ### Getting more credits If you've used your monthly limit, you can upgrade to a higher plan. For details on plans and pricing, see [n8n Plans and Pricing](https://n8n.io/pricing/). ## AI model and data handling The following data are sent to the LLM: - Text prompts that you provide to create, refine, or debug the workflow - Node definitions, parameters, and connections, along with the current workflow definition 
- Any mock execution data that is loaded when using the builder The following data are not sent: - Details of any credentials you use - Past executions of the workflow # Chat Hub ## Overview Chat Hub is a centralized AI chat interface where you can access multiple AI models, interact with n8n agents, and create your own agents. Chat Hub also introduces Chat user, a role that lets users interact with the chat interface without accessing n8n workflows. ## How to use To use Chat Hub, find the **Chat** option in the navigation bar, select a model, and start a conversation. ### Creating simple personal agents To make AI more reliable for simple, repetitive tasks, you can create Custom Agents with custom instructions. To create a simple personal agent: 1. Click on **Personal Agents** and then **+New Agent**. 1. Define the name, description, system prompt, preferred model, and access to tools. 1. Select **Save**. Once created, you can select the personal agent from the model selector. ### Using n8n workflow agents For more complex scenarios, use n8n workflow agents (built by you or your colleagues) to make your workflows available in Chat Hub. Right now, you can only use workflows that have a **Chat Trigger** with streaming enabled in the **Agent** node. To make your workflow available: 1. Open your selected workflow. 1. Open the **Chat Trigger**. Note Only chat triggers of the newest version will work. To get the newest chat trigger version, delete your existing chat trigger and insert a new one. 1. Enable the **Make Available in n8n Chat** option and set the name and description of the personal agent. 1. Make sure that your AI Agent node has the **Enable Streaming** option enabled. 1. Activate your workflow. Once defined, you can select your workflow from the model selector in Chat Hub. Your colleagues can use the agent only if they have access to the workflow: share it with them, or keep it in a project where they have at least viewer access. 
## Managing permissions You can define which users can perform which actions through n8n's role system. Chat Hub also gives you more ways to control who uses what. ### Chat user role The Chat user is a role for people in your organization who want to use workflows without building them. Chat users only see the chat interface and can't add credentials or workflows by default. Chat users are only available on Starter, Pro, Business and Enterprise plans. ### Provider settings Admins can control which models and providers users can access in Chat Hub. This includes: - Enabling or disabling specific models and providers - Preventing users from adding their own models - Setting default credentials for each provider - Restricting users from adding their own credentials (through n8n's permission system) To manage these settings, go to **Settings > Chat** and edit the providers. ## Considerations and limitations 1. You can't add file knowledge when creating simple personal agents. 1. Tool selection is limited to a few options. 1. Only workflows with [Chat Trigger node](../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) and streaming-enabled [AI Agent node](../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) work as workflow agents. Your workflows must meet specific requirements. # Human-in-the-loop for AI tool calls You can require human approval before an AI Agent executes a specific tool. When a tool requires human review, the workflow pauses and waits for a person to take one of the following actions: - **Approve**: The tool executes with the input specified by the AI. - **Deny**: The action is canceled and doesn't run. This feature allows for selective oversight of tool use within AI workflows, making it easier to apply additional review to tools with higher risk, such as sending messages, modifying records, or deleting data. 
## When to use human review Human-in-the-loop (HITL) review is useful when: - **Tools perform irreversible actions**: Deleting data, sending external communications, or making purchases. - **Compliance requirements exist**: Regulated industries may require human approval for certain automated actions. - **High-value decisions are involved**: Actions that have significant business impact benefit from human oversight. - **You're building trust in AI workflows**: Start with human review enabled, then reduce oversight as confidence grows. HITL can be applied to all tools connected to an AI Agent node, or just selected individual tools, offering more precise control than general output gating. ## How it works 1. The AI Agent determines it needs to use a tool that has human review enabled. 1. The workflow pauses and sends an approval request through your configured channel (such as Slack, Telegram, or the n8n Chat interface). 1. A human reviewer receives the request showing which tool the AI wants to use and with what parameters. 1. The reviewer either approves or denies the request. 1. If approved, the tool executes with the AI-specified input. If denied, the action is canceled and the AI is informed of the rejection. Different approval channels The review step can happen through a different channel than the main interaction. For example, you could have users interact with an AI agent through the n8n Chat interface, but route approval requests to a specific person in Slack. ## Setting up human review for tools ### Step 1: Open the Tools Panel In your workflow, click the **Tools** connector on an AI Agent node to open the Tools Panel. ### Step 2: Add a human review step 1. In the Tools Panel, find the **Human review** section. 1. Select your preferred approval channel from the available options. 1. Configure the approval channel with the appropriate credentials and settings. ### Step 3: Connect tools to the review step 1. 
Add the tools that require approval to the human review step's tool connector. 1. Configure each tool as you normally would. ## Available approval channels You can use any of the following services as your human review channel: | Channel | Description | | ------------------------------------------------------------------------------------------ | ------------------------------------------------- | | [Chat](../../integrations/builtin/core-nodes/n8n-nodes-langchain.chat/) | n8n's built-in chat interface | | [Slack](../../integrations/builtin/app-nodes/n8n-nodes-base.slack/) | Send approval requests to a Slack channel or DM | | [Discord](../../integrations/builtin/app-nodes/n8n-nodes-base.discord/) | Send approval requests to a Discord channel | | [Telegram](../../integrations/builtin/app-nodes/n8n-nodes-base.telegram/) | Send approval requests through Telegram | | [Microsoft Teams](../../integrations/builtin/app-nodes/n8n-nodes-base.microsoftteams/) | Send approval requests to a Teams channel or chat | | [Gmail](../../integrations/builtin/app-nodes/n8n-nodes-base.gmail/) | Send approval requests via email | | [WhatsApp Business Cloud](../../integrations/builtin/app-nodes/n8n-nodes-base.whatsapp/) | Send approval requests through WhatsApp | | [Google Chat](../../integrations/builtin/app-nodes/n8n-nodes-base.googlechat/) | Send approval requests to Google Chat | | [Microsoft Outlook](../../integrations/builtin/app-nodes/n8n-nodes-base.microsoftoutlook/) | Send approval requests via Outlook email | ## Using expressions in human review tools ### The $tool variable When configuring the human review step, you can use the `$tool` variable to construct a message for the reviewer that provides context about what the AI is trying to do. 
This variable has two properties:

| Property | Description |
| --- | --- |
| `$tool.name` | The name of the tool the AI Agent is trying to call. This is the node name as shown on the canvas in n8n. |
| `$tool.parameters` | The parameters the AI Agent is trying to use in the tool call. This includes any fields in the tool's input schema that are configured with `$fromAI()` expressions. |

**Example message configuration:**

```
The AI wants to use {{ $tool.name }} with the following parameters:
{{ JSON.stringify($tool.parameters, null, 2) }}
```

This helps reviewers understand exactly what action the AI is attempting before they approve or deny the request.

### Using $fromAI() in human review tools

The [`$fromAI()` function](../examples/using-the-fromai-function/) works with tools connected to human review steps. This means the AI can dynamically specify tool parameters, and those AI-determined values are what the human reviewer sees and approves.

## System prompt best practices

For the AI Agent to correctly interpret and handle denied tool call attempts, include information about the human review setup in your system prompt.

System prompt configuration required

Make sure you include the tool setup and human review steps in your system prompt. This helps the AI understand which tools require approval and how to respond gracefully when a tool call is denied. Consider including:

- Which tools require human approval
- What happens when approval is denied
- How the AI should respond to rejections (for example, inform the user, suggest alternatives, or ask for clarification)

## Chaining and subagents

When using an AI Agent as a tool for another AI Agent, human review steps in the subagent work correctly.
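As an illustration of the system-prompt guidance above, a prompt for an agent with a human-reviewed email tool might include a passage like the following. The tool name and wording here are examples, not required values:

```
You can send email with the "Send Email" tool. Calls to this tool
require human approval: a reviewer sees the tool name and parameters
and approves or denies the request before it runs. If a call is
denied, do not retry it unchanged. Tell the user the action was not
approved and ask whether they want to adjust the request or try an
alternative.
```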
## Related resources

- [AI Agent node](../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/)
- [Tools Agent](../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/tools-agent/)
- [What is a tool in AI?](../examples/understand-tools/)
- [Let AI specify tool parameters with $fromAI()](../examples/using-the-fromai-function/)

# Build an AI chat agent with n8n

Welcome to the introductory tutorial for building AI workflows with n8n. Whether you have used n8n before or this is your first time, we will show you how the building blocks of AI workflows fit together, and construct a working AI-powered chat agent that you can easily customize for your own purposes. Many people find it easier to take in new information in video format. This tutorial is based on one of n8n's popular videos, linked below. Watch the video, read the steps here, or both!

### What you will need

- **n8n**: For this tutorial we recommend using the [n8n cloud](../../manage-cloud/overview/) service - there is a free trial for new users! For a self-hosted service, refer to the [installation pages](../../hosting/installation/docker/).
- **Credentials for a chat model**: This tutorial uses OpenAI, but you can easily use DeepSeek, Google Gemini, Groq, Azure, and others (see the [sub-nodes documentation](../../integrations/builtin/cluster-nodes/sub-nodes/) for more).

### What you will learn

- AI concepts in n8n
- How to use the AI Agent node
- Working with Chat input
- Connecting with AI models
- Customizing input
- Observing the conversation
- Adding persistence

## AI concepts in n8n

If you're already familiar with AI, feel free to skip this section. This is a basic introduction to AI concepts and how they can be used in n8n workflows.

An [AI agent](../../glossary/#ai-agent) builds on [Large Language Models (LLMs)](../../glossary/#large-language-model-llm). LLMs generate text based on input by predicting the next word.
They can be used to select the best tool to achieve a task, or even simulate complex decision-making, but they can't act on decisions or use tools themselves. AI agents add goal-oriented functionality. They can use [tools](../../glossary/#ai-tool), act on their outputs, complete tasks and solve problems. In n8n, the AI agent is represented as a node with some extra connections.

| Feature | LLM | AI Agent |
| --- | --- | --- |
| Core Capability | Text generation | Goal-oriented task completion |
| Decision-Making | Simulates choices in text | Selects and executes actions |
| Uses Tools/APIs | No | Yes |
| Workflow Complexity | Single-step | Multi-step |
| Scope | Generates language | Performs complex, real-world tasks |
| Example | LLM generating a paragraph | An agent scheduling an appointment |

By incorporating the AI agent as a node, n8n can combine AI-driven steps with traditional programming for efficient, real-world workflows. For instance, simpler tasks, like validating an email address, don't require AI, whereas complex tasks, like processing the *content* of an email or dealing with multimodal inputs (e.g., images, audio), are excellent uses of an AI agent.

## 1. Create a new workflow

When you open n8n, you'll see either:

- An empty workflow: if you have no workflows and you're logging in for the first time. Use this workflow.
- The **Workflows** list on the **Overview** page. Select the **button** to create a new workflow.

## 2. Add a trigger node

Every workflow needs somewhere to start. In n8n these are called ['trigger nodes'](../../glossary/#trigger-node-n8n). For this workflow, we want to start with a chat node.

1. Select **Add first step** or press `N` to open the node menu.
1. Search for **Chat Trigger**. n8n shows a list of nodes that match the search.
1. Select **Chat Trigger** to add the node to the canvas. n8n opens the node.
1.
Close the node details view (Select **Back to canvas**) to return to the canvas.

More about the Chat Trigger node...

The trigger node generates output when there is an event causing it to trigger. In this case we want to be able to type in text to cause the workflow to run. In production, this trigger can be hooked up to a public chat interface as provided by n8n or embedded into another website. To start this simple workflow we will just use the built-in local chat interface to communicate, so no further setup is required.

[View workflow file](/_workflows//advanced-ai/tutorials/chat_01.json)

## 3. Add an AI Agent Node

The AI Agent node is the core of adding AI to your workflows.

1. Select the **Add node** connector on the trigger node to bring up the node search.
1. Start typing "AI" and choose the **AI agent** node to add it.
1. The editing view of the **AI agent** will now be displayed.
1. There are some fields which can be changed. As we're using the **Chat Trigger** node, the default settings for the source and specification of the prompt don't need to be changed.

[View workflow file](/_workflows//advanced-ai/tutorials/chat_02.json)

## 4. Configure the node

AI agents require a chat model to be attached to process the incoming prompts.

1. Add a chat model by clicking the plus button underneath the **Chat Model** connection on the **AI Agent** node (it's the first connection along the bottom of the node).
1. The search dialog will appear, filtered on 'Language Models'. These are the models with built-in support in n8n. For this tutorial we will use **OpenAI Chat Model**.
1. Selecting the **OpenAI Chat model** from the list will attach it to the **AI Agent** node and open the node editor. One of the parameters which can be changed is the 'Model'. Note that for the basic OpenAI accounts, only the 'gpt-4o-mini' model is allowed.

Which chat model?

As mentioned earlier, the LLM is the component which generates the text according to a prompt it is given.
LLMs have to be created and trained, usually an intensive process. Different LLMs may have different capabilities or specialties, depending on the data they were trained with.

## 5. Add credentials (if needed)

In order for n8n to communicate with the chat model, it will need some [credentials](../../credentials/) (login data giving it access to an account on a different online service). If you already have credentials set up for OpenAI, these should appear by default in the credentials selector. Otherwise you can use the Credentials selector to help you add a new credential.

1. To add a new credential, click on the text which says 'Select credential'. An option to add a new credential will appear.
1. This credential just needs an API key. When adding credentials of any type, check the text to the right-hand side. In this case it has a handy link to take you straight to your OpenAI account to retrieve the API key.
1. The API key is just one long string. That's all you need for this particular credential. Copy it from the OpenAI website and paste it into the **API key** section.

Keeping your credentials safe

Credentials are private pieces of information issued by apps and services to authenticate you as a user and allow you to connect and share information between the app or service and the n8n node. The type of information required varies depending on the app/service concerned. You should be careful about sharing or revealing the credentials outside of n8n.

## 6. Test the node

Now that the node is connected to the **Chat Trigger** and a chat model, we can test this part of the workflow.

1. Click on the 'Chat' button near the bottom of the canvas. This opens up a local chat window on the left and the AI agent logs on the right.
1. Type in a message and press `Enter`. You will now see the response from the chat model appear below your message.
1. The log window displays the inputs to and outputs from the AI Agent.

Accessing the logs...
You can access the logs for the AI node even when you aren't using the chat interface. Open up the **AI Agent** node and click on the **Logs** tab in the right hand panel.

## 7. Changing the prompt

The logs in the previous step reveal some extra data - the system prompt. This is the default message that the **AI Agent** primes the chat model with. From the log you can see this is set to "You are a helpful assistant". We can however change this prompt to alter the behavior of the chat model.

1. Open the **AI Agent** node. At the bottom of the panel is a section labeled 'Options' and a selector labeled 'Add Option'. Use this to select 'System message'.
1. The system message is now displayed. This is the same priming prompt we noticed before in the logs. Change the prompt to something else to prime the chat model in a different way. You could try something like "You are a brilliant poet who always replies in rhyming couplets" for example.
1. Close the node and return to the chat window. Repeat your message and notice how the output has changed.

## 8. Adding persistence

The chat model is now giving us useful output, but there is something wrong with it which will become apparent when you try to have a conversation.

1. Use the chat and tell the chat model your name, for example "Hi there, my name is Nick".
1. Wait for the response, then type the message "What's my name?". The AI will not be able to tell you, however apologetic it may seem. The reason for this is we are not saving the context. The AI Agent has no [memory](../../glossary/#ai-memory).
1. In order to remember what has happened in the conversation, the AI Agent needs to preserve context. We can do this by adding memory to the **AI Agent** node. On the canvas, click on the connector labeled "Memory" on the bottom of the **AI Agent** node.
1. From the panel which appears, select "Simple Memory". This will use the memory from the instance running n8n, and is usually sufficient for simple usage.
The default value of 5 interactions should be sufficient here, but remember where this option is in case you want to change it later.
1. Repeat the exercise of having a conversation above, and see that the AI Agent now remembers your name.

## 9. Saving the workflow

Before we leave the workflow editor, remember to save the workflow or all your changes will be lost.

1. Click on the "Save" button in the top right of the editor window. Your workflow will now be saved and you can return to it later to chat again or add new features.

## Congratulations!

You have taken your first steps in building useful and effective workflows with AI. In this tutorial we have investigated the basic building blocks of an AI workflow, added an **AI Agent** and a chat model, and adjusted the prompt to get the kind of output we wanted. We also added memory so the chat could retain context between messages.

[View workflow file](/_workflows//advanced-ai/tutorials/chat_complete.json)

## Next steps

Now that you have seen how to create a basic AI workflow, there are plenty of resources to build on that knowledge and plenty of examples to give you ideas of where to go next:

- Learn more about AI concepts and view examples in [Examples and concepts](../examples/introduction/).
- Browse AI [Workflow templates](https://n8n.io/workflows/?categories=25).
- Find out how to [enhance the AI agent with tools](../examples/understand-tools/).

# RAG in n8n

## What is RAG

[Retrieval-Augmented Generation (RAG)](../../glossary/#ai-retrieval-augmented-generation-rag) is a technique that improves AI responses by combining language models with external data sources. Instead of relying solely on the model's internal training data, RAG systems retrieve relevant documents to [ground](../../glossary/#ai-groundedness) responses in up-to-date, domain-specific, or proprietary knowledge. RAG workflows typically rely on vector stores to manage and search this external data efficiently.

## What is a vector store?
A [vector store](../../glossary/#ai-vector-store) is a special database designed to store and search high-dimensional vectors: numerical representations of text, images, or other data. When you upload a document, the vector store splits it into chunks and converts each chunk into a vector using an [embedding model](../../glossary/#ai-embedding). You can query these vectors using similarity searches, which rank results by *semantic meaning* rather than keyword matches. This makes vector stores a powerful foundation for RAG and other AI systems that need to retrieve and reason over large sets of knowledge.

## How to use RAG in n8n

Start with a RAG template

👉 Try out RAG in n8n with the [RAG Starter Template](https://n8n.io/workflows/5010-rag-starter-template-using-simple-vector-stores-form-trigger-and-openai). The template includes two ready-made workflows: one for uploading files and one for querying them.

### Inserting data into your vector store

Before your agent can access custom knowledge, you need to upload that data to a vector store:

1. Add the nodes needed to fetch your source data.
1. Insert a **Vector Store** node (e.g. the [Simple Vector Store](../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreinmemory/)) and choose the **Insert Documents** operation.
1. Select an **embedding model**, which converts your text into vector embeddings. Consult the FAQ for more information on [choosing the right embedding model](#how-do-i-choose-the-right-embedding-model).
1. Add a [Default Data Loader](../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) node, which splits your content into chunks. You can use the default settings or define your own chunking strategy:
- **Character Text Splitter:** splits by character length.
- **Recursive Character Text Splitter:** recursively splits by Markdown, HTML, code blocks or simple characters (recommended for most use cases).
- **Token Text Splitter:** splits by token count.
1. (Optional) Add **metadata** to each chunk to enrich the context and allow better filtering later.

### Querying your data

You can query the data in two main ways: using an agent or directly through a node.

### Using agents

1. Add an [agent](../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) to your workflow.
1. Add the vector store as a **tool** and give it a **description** to help the agent understand when to use it:
   - Set the **limit** to define how many chunks to return.
   - Enable **Include Metadata** to provide extra context for each chunk.
1. Add the same **embedding model** you used when inserting the data.

Pro tip

To save tokens on an expensive model, you can first use the [Vector Store Question Answer tool](../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) to retrieve relevant data, and only then pass the result to the Agent. To see this in action, check out [this template](https://n8n.io/workflows/5011-save-costs-in-rag-workflows-using-the-qanda-tool-with-multiple-models).

### Using the node directly

1. Add your vector store node to the canvas and choose the **Get Many** operation.
1. Enter a query or prompt:
   - Set a **limit** for how many chunks to return.
   - Enable **Include Metadata** if needed.

## FAQs

### How do I choose the right embedding model?

The right embedding model differs from case to case. In general, smaller models (for example, `text-embedding-ada-002`) are faster and cheaper and thus ideal for short, general-purpose documents or lightweight RAG workflows. Larger models (for example, `text-embedding-3-large`) offer better semantic understanding. These are best for long documents, complex topics, or when accuracy is critical.

### What is the best text splitting for my use case?

This again depends a lot on your data:

- Small chunks (for example, 200 to 500 tokens) are good for fine-grained retrieval.
- Large chunks may carry more context but can become diluted or noisy.

Using the right overlap size is important for the AI to understand the context of the chunk. That's also why splitting by Markdown or code blocks can often produce better chunks. Another good approach is to add more context to each chunk (for example, about the document where the chunk came from). If you want to read more about this, you can check out [this great article from Anthropic](https://www.anthropic.com/news/contextual-retrieval).

# Light evaluations

Available on registered community and paid plans

Light evaluations are available to registered community users and on all paid plans.

## What are light evaluations?

When building your workflow, you often want to test it with a handful of examples to get a sense of how it performs and make improvements. At this stage of workflow development, looking over workflow outputs for each example is often enough. The benefits of setting up more [formal scoring or metrics](../metric-based-evaluations/) don't yet justify the effort. Light evaluation allows you to run the examples in a test dataset through your workflow one-by-one, writing the outputs back to your dataset. You can then examine those outputs next to each other, and visually compare them to the expected outputs (if you have them).

## How it works

Credentials for Google Sheets

Evaluations use data tables or Google Sheets to store the test dataset. To use Google Sheets as a dataset source, configure a [Google Sheets credential](../../../integrations/builtin/credentials/google/).

Light evaluations take place in the 'Editor' tab of your workflow, although you'll find instructions on how to set it up in the 'Evaluations' tab.

Steps:

1. Create a dataset
1. Wire the dataset up to the workflow
1. Write workflow outputs back to dataset
1. Run evaluation

The following explanation will use a sample workflow that assigns a category and priority to incoming support tickets.

### 1.
Create a dataset

Create a data table or Google Sheet with a handful of examples for your workflow. Your dataset should contain columns for:

- The workflow input
- (Optional) The expected or correct workflow output
- The actual output

Leave the actual output column or columns blank, since you'll be filling them during the evaluation.

A [sample dataset](https://docs.google.com/spreadsheets/d/1uuPS5cHtSNZ6HNLOi75A2m8nVWZrdBZ_Ivf58osDAS8/edit?gid=294497137#gid=294497137) for the support ticket classification workflow.

### 2. Wire the dataset up to your workflow

#### Insert an evaluation trigger to pull in your dataset

Each time the [evaluation trigger](../../../integrations/builtin/core-nodes/n8n-nodes-base.evaluationtrigger/) runs, it will output a single item representing one row of your dataset. Clicking the 'Evaluate all' button to the left of the evaluation trigger will run your workflow multiple times in sequence, once for each row in your dataset. This is a special behavior of the evaluation trigger.

While wiring the trigger up, you often only want to run it once. You can do this by either:

- Setting the trigger's 'Max rows to process' to 1
- Clicking on the 'Execute node' button on the trigger (rather than the 'Evaluate all' button)

#### Wire the trigger up to your workflow

You can now connect the evaluation trigger to the rest of your workflow and reference the data that it outputs. At a minimum, you need to use the dataset's input column(s) later in the workflow. If you have multiple triggers in your workflow you will need to [merge their branches together](../tips-and-common-issues/#combining-multiple-triggers).

The support ticket classification workflow with the evaluation trigger added in and wired up.

### 3.
Write workflow outputs back to dataset

To populate the output column(s) of your dataset when the evaluation runs:

- Insert the 'Set outputs' action of the [evaluation node](../../../integrations/builtin/core-nodes/n8n-nodes-base.evaluation/)
- Wire it up to your workflow at a point after it has produced the outputs you're evaluating
- In the node's parameters, map the workflow outputs into the correct dataset column

The support ticket classification workflow with the 'set outputs' node added in and wired up.

### 4. Run evaluation

Click on the **Execute workflow** button to the left of the evaluation trigger. The workflow will execute multiple times, once for each row of the dataset:

Review the outputs of each execution in the data table or Google Sheet, and examine the execution details using the workflow's 'executions' tab if you need to. Once your dataset grows past a handful of examples, consider [metric-based evaluation](../metric-based-evaluations/) to get a numerical view of performance. See also [tips and common issues](../tips-and-common-issues/).

# Metric-based evaluations

Available on Pro and Enterprise plans

Metric-based evaluation is available on Pro and Enterprise plans. Registered community and Starter plan users can also use it for a single workflow.

### What are metric-based evaluations?

Once your workflow is ready for deployment, you often want to test it on more examples than [when you were building it](../light-evaluations/). For example, when production executions start to turn up edge cases, you want to add them to your test dataset so that you can make sure they're covered. For large datasets like the ones built from production data, it can be hard to get a sense of performance just by eyeballing the results. Instead, you must measure performance. Metric-based evaluations can assign one or more scores to each test run, which you can compare to previous runs. Individual scores get rolled up to measure performance on the whole dataset.
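The roll-up described here is simple in principle: each test case receives a score per metric, and the run-level score for a metric is an aggregate such as the mean. A plain-JavaScript sketch (not n8n code; the test cases, metric names, and scores below are made up for illustration):

```javascript
// Sketch: per-test-case metric scores rolled up into run-level summaries.
// The test cases and scores are illustrative, not real n8n output.
const testCases = [
  { input: "Printer won't connect", correctness: 5, stringSimilarity: 0.9 },
  { input: "Refund request",        correctness: 3, stringSimilarity: 0.4 },
  { input: "Password reset loop",   correctness: 4, stringSimilarity: 0.8 },
];

// Run-level score for one metric: the mean of the per-case scores.
function rollUp(cases, metric) {
  const total = cases.reduce((sum, c) => sum + c[metric], 0);
  return total / cases.length;
}

const summary = {
  correctness: rollUp(testCases, "correctness"), // 4
  stringSimilarity: rollUp(testCases, "stringSimilarity"),
};
```

Comparing `summary` across evaluation runs is what lets you spot regressions: a drop in a run-level score tells you which metric to drill into.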
This feature allows you to run evaluations that calculate metrics, track how those metrics change between runs, and drill down into the reasons for those changes. Metrics can be deterministic functions (such as the distance between two strings) or you can calculate them using AI. Metrics often involve checking how far away the output is from a *reference output* (also called ground truth). To do so, the dataset must contain that reference output. Some evaluations don't need this reference output though (for example, checking text for sentiment or toxicity).

## How it works

Credentials for Google Sheets

Evaluations use data tables or Google Sheets to store the test dataset. To use Google Sheets as a dataset source, configure a [Google Sheets credential](../../../integrations/builtin/credentials/google/).

1. Set up [light evaluation](../light-evaluations/)
1. Add metrics to workflow
1. Run evaluation and view results

### 1. Set up light evaluation

Follow the [setup instructions](../light-evaluations/) to create a dataset and wire it up to your workflow, writing outputs back to the dataset. The following steps use the same support ticket classification workflow from the light evaluation docs:

### 2. Add metrics to workflow

Metrics are dimensions used to score the output of your workflow. They often compare the actual workflow output with a reference output. It's common to use AI to calculate metrics, although it's sometimes possible to just use code. In n8n, metrics are always numbers. You need to add the logic to calculate the metrics for your workflow, at a point after it has produced the outputs. You can add any reference outputs your metric uses as a column in your dataset. This makes sure they will be available in the workflow, since they are output by the evaluation trigger.

Use the **Set Metrics** operation to calculate:

- **Correctness (AI-based)**: Whether the answer's meaning is consistent with a supplied reference answer.
Uses a scale of 1 to 5, with 5 being the best.
- **Helpfulness (AI-based)**: Whether the response answers the given query. Uses a scale of 1 to 5, with 5 being the best.
- **String Similarity**: How close the answer is to the reference answer, measured character-by-character (edit distance). Returns a score between 0 and 1.
- **Categorization**: Whether the answer is an exact match with the reference answer. Returns 1 when matching and 0 otherwise.
- **Tools Used**: Whether the execution used tools or not. Returns a score between 0 and 1.

You can also add custom metrics. Just calculate the metrics within the workflow and then map them into an Evaluation node. Use the **Set Metrics** operation and choose **Custom Metrics** as the Metric. You can then set the names and values for the metrics you want to return. For example:

- [RAG document relevance](https://n8n.io/workflows/4273): when working with a vector database, whether the documents retrieved are relevant to the question.

Calculating metrics can add latency and cost, so you may only want to do it when running an evaluation and avoid it when making a production execution. You can do this by putting the metric logic after a ['check if evaluating' operation](../../../integrations/builtin/core-nodes/n8n-nodes-base.evaluation/#check-if-evaluating).

### 3. Run evaluation and view results

Switch to the **Evaluations** tab on your workflow and click the **Run evaluation** button. An evaluation will start. Once the evaluation has finished, it will display a summary score for each metric. You can see the results for each test case by clicking on the test run row. Clicking on an individual test case will open the execution that produced it (in a new tab).

# Overview

## What are evaluations?

Evaluation is a crucial technique for checking that your AI workflow is reliable. It can be the difference between a flaky proof of concept and a solid production workflow.
It's important both in the building phase and after deploying to production. The foundation of evaluation is running a test dataset through your workflow. This dataset contains multiple test cases. Each test case contains a sample input for your workflow, and often includes the expected output(s) too.

Evaluation allows you to:

- **Test your workflow over a range of inputs** so you know how it performs on edge cases
- **Make changes with confidence** without inadvertently making things worse elsewhere
- **Compare performance** across different models or prompts

The following video explains what evaluations are, why they're useful, and how they work:

## Why is evaluation needed?

AI models are fundamentally different from code. Code is deterministic and you can reason about it. This is difficult to do with LLMs, since they're black boxes. Instead, you must *measure* LLM output by running data through the model and observing the output. You can only build confidence that your model performs reliably after you have run it over multiple inputs that accurately reflect all the edge cases that it will have to deal with in production.

## Two types of evaluation

### Light evaluation (pre-deployment)

Building a clean, comprehensive dataset is hard. In the initial building phase, it often makes sense to generate just a handful of examples. These can be enough to iterate the workflow to a releasable state (or a proof of concept). You can visually compare the results to get a sense of the workflow's quality, without setting up formal metrics.

### Metric-based evaluation (post-deployment)

Once you deploy your workflow, it's easier to build a bigger, more representative dataset from production executions. When you discover a bug, you can add the input that caused it to the dataset.
When fixing the bug, it's important to run the whole dataset over the workflow again as a [regression test](https://en.wikipedia.org/wiki/Regression_testing) to check that the fix hasn't inadvertently made something else worse. Since there are too many test cases to check individually, evaluations measure the quality of the outputs using a metric, a numeric value representing a particular characteristic. This also allows you to track quality changes between runs.

### Comparison of evaluation types

| | Light evaluation (pre-deployment) | Metric-based evaluation (post-deployment) |
| --- | --- | --- |
| **Performance improvements with each iteration** | Large | Small |
| **Dataset size** | Small | Large |
| **Dataset sources** | Hand-generated, AI-generated, Other | Production executions, AI-generated, Other |
| **Actual outputs** | Required | Required |
| **Expected outputs** | Optional | Required (usually) |
| **Evaluation metric** | Optional | Required |

## Learn more

- [Light evaluations](../light-evaluations/): Perfect for evaluating your AI workflows against hand-selected test cases during development.
- [Metric-based evaluations](../metric-based-evaluations/): Advanced evaluations to maintain performance and correctness in production by using scoring and metrics with large datasets.
- [Tips and common issues](../tips-and-common-issues/): Learn how to set up specific evaluation use cases and work around common issues.

# Tips and common issues

## Combining multiple triggers

If you have another trigger in the workflow already, you have two potential starting points: that trigger and the [evaluation trigger](../../../integrations/builtin/core-nodes/n8n-nodes-base.evaluationtrigger/). To make sure your workflow works as expected no matter which trigger executes, you will need to merge these branches together.
Logic to merge two trigger branches together so that they have the same data format and can be referenced from a single node.

To do so:

1. **Get the data format of the other trigger**:
   - Execute the other trigger.
   - Open it and navigate to the JSON view of its output pane.
   - Click the **copy** button on the right.
1. **Re-shape the evaluation trigger data to match**:
   - Insert an [Edit Fields (Set) node](../../../integrations/builtin/core-nodes/n8n-nodes-base.set/) after the evaluation trigger and connect them together.
   - Change its mode to **JSON**.
   - Paste your data into the 'JSON' field, removing the `[` and `]` on the first and last lines.
   - Switch the field type to **Expression**.
   - Map in the data from the trigger by dragging it from the input pane.
   - For strings, make sure to replace the entire value (including the quotes) and add `.toJsonString()` to the end of the expression.
1. **Merge the branches using a 'No-op' node**: Insert a [No-op node](../../../integrations/builtin/core-nodes/n8n-nodes-base.noop/) and wire both the other trigger and the Set node up to it. The 'No-op' node just outputs whatever input it receives.
1. **Reference the 'No-op' node outputs in the rest of the workflow**: Since both paths will flow through this node with the same format, you can be sure that your input data will always be there.

## Avoiding evaluation breaking the chat

n8n's internal chat reads the output data of the last executed node in the workflow. After adding an evaluation node with the ['set outputs' operation](../../../integrations/builtin/core-nodes/n8n-nodes-base.evaluation/#set-outputs), this data may not be in the expected format, or even contain the chat response. The solution is to add an extra branch coming out of your agent. [Lower branches execute later](../../../flow-logic/execution-order/) in n8n, which means any node you attach to this branch will execute last. You can use a no-op node here since it only needs to pass the agent output through.
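To illustrate the re-shaping step in 'Combining multiple triggers' above: suppose the other trigger is a chat trigger that outputs a `sessionId` and a `chatInput` string, and your dataset has a `query` column. After pasting the copied format and mapping in the trigger data, the Edit Fields (Set) node's JSON field might look something like this (the field and column names are illustrative, not required values):

```
{
  "sessionId": "evaluation-run",
  "chatInput": {{ $json.query.toJsonString() }}
}
```

Note that the expression replaces the whole value, quotes included; `.toJsonString()` adds the quotes back and escapes any special characters inside the string, so the resulting JSON stays valid.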
## Accessing tool data when calculating metrics Sometimes you need to know what happened in executed sub-nodes of an agent, for example to check whether it executed a tool. You can't reference these nodes directly with expressions, but you can enable the **Return intermediate steps** option in the agent. This will add an extra output field called `intermediateSteps` which you can use in later nodes: ## Multiple evaluations in the same workflow You can only have one evaluation set up per workflow. In other words, you can only have one evaluation trigger per workflow. Even so, you can still test different parts of your workflow with different evaluations by putting those parts in [sub-workflows](../../../flow-logic/subworkflows/) and evaluating each sub-workflow. ## Dealing with inconsistent results Metrics can often have noise: they may be different across evaluation runs of the exact same workflow. This is because the workflow itself may return different results, or any LLM-based metrics might have natural variation in them. You can compensate for this by duplicating the rows of your dataset, so that each row appears more than once in the dataset. Since this means that each input will effectively be running multiple times, it will smooth out any variations. # Demonstration of key differences between agents and chains In this workflow you can choose whether your chat query goes to an [agent](../../../glossary/#ai-agent) or [chain](../../../glossary/#ai-chain). It shows some of the ways that agents are more powerful than chains. [View workflow file](/_workflows/advanced-ai/examples/agents_vs_chains.json) ## Key features This workflow uses: - [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/): start your workflow and respond to user chat interactions. The node provides a customizable chat interface. 
- [Switch node](../../../integrations/builtin/core-nodes/n8n-nodes-base.switch/): directs your query to either the agent or chain, depending on which you specify in your query. If you say "agent" it sends it to the agent. If you say "chain" it sends it to the chain. - [Agent](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/): the Agent node interacts with other components of the workflow and makes decisions about what [tools](../../../glossary/#ai-tool) to use. - [Basic LLM Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainllm/): the Basic LLM Chain node supports chatting with a connected LLM, but doesn't support [memory](../../../glossary/#ai-memory) or tools. ## Using the example To load the template into your n8n instance: 1. Download the workflow JSON file. 1. Open a new workflow in your n8n instance. 1. Copy in the JSON, or select **Workflow menu** > **Import from file...**. The example workflows use Sticky Notes to guide you: - Yellow: notes and information. - Green: instructions to run the workflow. - Orange: you need to change something to make the workflow work. - Blue: draws attention to a key feature of the example. # Call an API to fetch data Use n8n to bring data from any [API](../../../glossary/#api) to your AI. This workflow uses the [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) to provide the chat interface, and the [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) to call a second workflow that calls the API. The second workflow uses AI functionality to refine the API request based on the user's query. [View workflow file](/_workflows/advanced-ai/examples/let_your_ai_call_an_api.json) ## Key features This workflow uses: - [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/): start your workflow and respond to user chat interactions. 
The node provides a customizable chat interface. - [Agent](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/): the key piece of the AI workflow. The Agent interacts with other components of the workflow and makes decisions about what tools to use. - [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/): plug in n8n workflows as custom tools. In AI, a tool is an interface the AI can use to interact with the world (in this case, the data provided by your workflow). The AI model uses the tool to access information beyond its built-in dataset. - A [Basic LLM Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainllm/) with an [Auto-fixing Output Parser](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparserautofixing/) and [Structured Output Parser](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparserstructured/) to read the user's query and set parameters for the API call based on the user input. ## Using the example To load the template into your n8n instance: 1. Download the workflow JSON file. 1. Open a new workflow in your n8n instance. 1. Copy in the JSON, or select **Workflow menu** > **Import from file...**. The example workflows use Sticky Notes to guide you: - Yellow: notes and information. - Green: instructions to run the workflow. - Orange: you need to change something to make the workflow work. - Blue: draws attention to a key feature of the example. # Chat with a Google Sheet using AI Use n8n to bring your own data to AI. This workflow uses the [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) to provide the chat interface, and the [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) to call a second workflow that queries Google Sheets. 
[View workflow file](/_workflows/advanced-ai/examples/chat_with_google_sheets_docs_version.json) ## Key features This workflow uses: - [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/): start your workflow and respond to user chat interactions. The node provides a customizable chat interface. - [Agent](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/): the key piece of the AI workflow. The Agent interacts with other components of the workflow and makes decisions about what tools to use. - [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/): plug in n8n workflows as custom tools. In AI, a tool is an interface the AI can use to interact with the world (in this case, the data provided by your workflow). The AI model uses the tool to access information beyond its built-in dataset. ## Using the example To load the template into your n8n instance: 1. Download the workflow JSON file. 1. Open a new workflow in your n8n instance. 1. Copy in the JSON, or select **Workflow menu** > **Import from file...**. The example workflows use Sticky Notes to guide you: - Yellow: notes and information. - Green: instructions to run the workflow. - Orange: you need to change something to make the workflow work. - Blue: draws attention to a key feature of the example. # Have a human fallback for AI workflows This is a workflow that tries to answer user queries using the standard GPT-4 model. If it can't answer, it sends a message to Slack to ask for human help. It prompts the user to supply an email address. 
This workflow uses the [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) to provide the chat interface, and the [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) to call a second workflow that handles checking for email addresses and sending the Slack message. [View workflow file](/_workflows/advanced-ai/examples/ask_a_human.json) ## Key features This workflow uses: - [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/): start your workflow and respond to user chat interactions. The node provides a customizable chat interface. - [Agent](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/): the key piece of the AI workflow. The Agent interacts with other components of the workflow and makes decisions about what tools to use. - [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/): plug in n8n workflows as custom tools. In AI, a tool is an interface the AI can use to interact with the world (in this case, the data provided by your workflow). It allows the AI model to access information beyond its built-in dataset. ## Using the example To load the template into your n8n instance: 1. Download the workflow JSON file. 1. Open a new workflow in your n8n instance. 1. Copy in the JSON, or select **Workflow menu** > **Import from file...**. The example workflows use Sticky Notes to guide you: - Yellow: notes and information. - Green: instructions to run the workflow. - Orange: you need to change something to make the workflow work. - Blue: draws attention to a key feature of the example. # Advanced AI examples and concepts This section provides explanations of important AI concepts, and workflow templates that highlight those concepts, with explanations and configuration guides. 
The examples cover common use cases and highlight different features of advanced AI in n8n. - **Agents and chains** Learn about [agents](../../../glossary/#ai-agent) and [chains](../../../glossary/#ai-chain) in AI, and explore the key differences between them using the example workflow. [What's a chain in AI?](../understand-chains/)\ [What's an agent in AI?](../understand-agents/)\ [Demonstration of key differences between agents and chains](../agent-chain-comparison/) - **Call n8n Workflow Tool** Learn about [tools](../../../glossary/#ai-tool) in AI, then explore examples that use n8n workflows as custom tools to give your AI workflow access to more data. [What's a tool in AI?](../understand-tools/)\ [Chat with Google Sheets](../data-google-sheets/)\ [Call an API to fetch data](../api-workflow-tool/)\ [Set up a human fallback](../human-fallback/)\ [Let AI specify tool parameters with `$fromAI()`](../using-the-fromai-function/) - **Vector databases** Learn about [vector databases](../../../glossary/#ai-vector-store) in AI, along with related concepts including [embeddings](../../../glossary/#ai-embedding) and retrievers. [What's a vector database?](../understand-vector-databases/)\ [Populate a Pinecone vector database from a website](../vector-store-website/) - **Memory** Learn about [memory](../../../glossary/#ai-memory) in AI. [What's memory in AI?](../understand-memory/) - **AI workflow templates** You can browse AI templates, including community contributions, on the n8n website. [Browse all AI templates](https://n8n.io/workflows/?categories=25) # What's an agent in AI? One way to think of an [agent](../../../glossary/#ai-agent) is as a [chain](../understand-chains/) that knows how to make decisions. Where a chain follows a predetermined sequence of calls to different AI components, an agent uses a language model to determine which actions to take. Agents are the part of AI that act as decision-makers.
They can interact with other agents and [tools](../../../glossary/#ai-tool). When you send a query to an agent, it tries to choose the best tools to use to answer. Agents adapt to your specific queries, as well as the prompts that configure their behavior. ## Agents in n8n n8n provides one Agent node, which can act as different types of agent depending on the settings you choose. Refer to the [Agent node documentation](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) for details on the available agent types. When you execute a workflow containing an agent, the agent runs multiple times. For example, it may do an initial setup, followed by a run to call a tool, then another run to evaluate the tool response and respond to the user. # What's a chain in AI? [Chains](../../../glossary/#ai-chain) bring together different components of AI to create a cohesive system. They set up a sequence of calls between the components. These components can include models and [memory](../../../glossary/#ai-memory) (though note that in n8n chains can't use memory). ## Chains in n8n n8n provides three chain nodes: - [Basic LLM Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainllm/): use to interact with an LLM, without any additional components. - [Question and Answer Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/): can connect to a [vector store](../../../glossary/#ai-vector-store) using a retriever, or to an n8n workflow using the Workflow Retriever node. Use this if you want to create a workflow that supports asking questions about specific documents. - [Summarization Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainsummarization/): takes an input and returns a summary. There's an important difference between chains in n8n and in other tools such as LangChain: none of the chain nodes support memory. 
This means they can't remember previous user queries. If you use LangChain to code an AI application, you can give your application memory. In n8n, if you need your workflow to support memory, use an agent. This is essential if you want users to be able to have a natural ongoing conversation with your app. # What's memory in AI? Memory is a key part of AI chat services. The [memory](../../../glossary/#ai-memory) keeps a history of previous messages, allowing for an ongoing conversation with the AI, rather than every interaction starting fresh. ## AI memory in n8n To add memory to your AI workflow you can use either: - [Simple Memory](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorybufferwindow/): stores a customizable length of chat history for the current session. This is the easiest to get started with. - One of the memory services that n8n provides nodes for. These include: - [Motorhead](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymotorhead/) - [Redis Chat Memory](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memoryredischat/) - [Postgres Chat Memory](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorypostgreschat/) - [Xata](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memoryxata/) - [Zep](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memoryzep/) If you need to do advanced AI memory management in your workflows, use the [Chat Memory Manager](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymanager/) node. This node is useful when you: - Can't add a memory node directly. - Need to do more complex memory management, beyond what the memory nodes offer. For example, you can add this node to check the memory size of the Agent node's response, and reduce it if needed. 
- Want to inject messages to the AI that look like user messages, to give the AI more context. # What's a tool in AI? In AI, 'tools' has a specific meaning. Tools act like addons that your AI can use to access extra context or resources. Here are a couple of other ways of expressing it: > Tools are interfaces that an agent can use to interact with the world ([source](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/)) > We can think of these tools as being almost like functions that your AI model can call ([source](https://www.udemy.com/course/chatgpt-and-langchain-the-complete-developers-masterclass/)) ## AI tools in n8n n8n provides tool [sub-nodes](../../../glossary/#sub-node-n8n) that you can connect to your [AI agent](../../../glossary/#ai-agent). As well as providing some popular tools, such as [Wikipedia](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolwikipedia/) and [SerpAPI](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/), n8n provides three especially powerful tools: - [Call n8n Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/): use this to load any n8n workflow as a tool. - [Custom Code Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolcode/): write code that your agent can run. - [HTTP Request Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolhttprequest/): make calls to fetch a website or data from an API. The next three examples highlight the Call n8n Workflow Tool: - [Chat with Google Sheets](../data-google-sheets/) - [Call an API to fetch data](../api-workflow-tool/) - [Set up a human fallback](../human-fallback/) You can also learn how to [let AI dynamically specify parameters for tools with the `$fromAI()` function](../using-the-fromai-function/). # What are vector databases? 
Vector databases store information as numbers: > A vector database is a type of database that stores data as high-dimensional vectors, which are mathematical representations of features or attributes. ([source](https://learn.microsoft.com/en-us/semantic-kernel/memories/vector-db)) This enables fast and accurate similarity searches. With a vector database, instead of using conventional database queries, you can search for relevant data based on semantic and contextual meaning. ## A simplified example A vector database could store the sentence "n8n is a source-available automation tool that you can self-host", but instead of storing it as text, the vector database stores an array of dimensions (numbers between 0 and 1) that represent its features. This doesn't mean turning each letter in the sentence into a number. Instead, the vectors in the vector database describe the sentence. Suppose that in a vector store `0.1` represents `automation tool`, `0.2` represents `source available`, and `0.3` represents `can be self-hosted`. You could end up with the following vectors: | Sentence | Vector (array of dimensions) | | ---------------------------------------------------------------- | ---------------------------- | | n8n is a source-available automation tool that you can self-host | [0.1, 0.2, 0.3] | | Zapier is an automation tool | [0.1] | | Make is an automation tool | [0.1] | | Confluence is a wiki tool that you can self-host | [0.3] | This example is very simplified. In practice, vectors are far more complex. A vector can range in size from tens to thousands of dimensions. The dimensions don't have a one-to-one relationship to a single feature, so you can't translate individual dimensions directly into single concepts. This example gives an approximate mental model, not a true technical understanding. ## Demonstrating the power of similarity search Qdrant provides [vector search demos](https://qdrant.tech/demo/) to help users understand the power of vector databases.
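To make similarity search a little more concrete, here's a minimal JavaScript sketch (illustrative only, not how a real vector database works internally) that scores the simplified vectors above using cosine similarity. It slightly adapts the table by writing each sentence's features into fixed positions of a three-dimensional vector so the comparison is computable:

```javascript
// Illustrative cosine similarity search over the simplified vectors above.
// Feature positions: [automation tool, source available, can be self-hosted].
function cosineSimilarity(a, b) {
  let dot = 0, magA = 0, magB = 0;
  for (let i = 0; i < Math.max(a.length, b.length); i++) {
    const x = a[i] ?? 0, y = b[i] ?? 0; // zero-pad shorter vectors
    dot += x * y;
    magA += x * x;
    magB += y * y;
  }
  return dot / (Math.sqrt(magA) * Math.sqrt(magB));
}

const sentences = {
  "n8n": [0.1, 0.2, 0.3],    // automation tool, source available, self-hostable
  "Zapier": [0.1, 0, 0],     // automation tool only
  "Confluence": [0, 0, 0.3], // self-hostable wiki
};

// A query vector identical to n8n's ranks n8n highest,
// then Confluence (shares the heavily weighted third feature), then Zapier.
const query = [0.1, 0.2, 0.3];
for (const [name, vector] of Object.entries(sentences)) {
  console.log(name, cosineSimilarity(query, vector).toFixed(2));
}
```

Real vector stores use the same idea (a distance or similarity function over embeddings) but with specialized indexes that make the search fast across millions of high-dimensional vectors.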
The [food discovery demo](https://food-discovery.qdrant.tech/) shows how a vector store can help match pictures based on visual similarities. > This demo uses data from Delivery Service. Users may like or dislike the photo of a dish, and the app will recommend more similar meals based on how they look. It's also possible to choose to view results from the restaurants within the delivery radius. ([source](https://qdrant.tech/demo/)) For full technical details, refer to the [Qdrant demo-food-discovery GitHub repository](https://github.com/qdrant/demo-food-discovery). ## Embeddings, retrievers, text splitters, and document loaders Vector databases require other tools to function: - Document loaders and text splitters: document loaders pull in documents and data, and prepare them for [embedding](../../../glossary/#ai-embedding). Document loaders can use text splitters to break documents into chunks. - Embeddings: these are the tools that turn the data (text, images, and so on) into vectors, and back into raw data. Note that n8n only supports text embeddings. - Retrievers: retrievers fetch documents from vector databases. You need to pair them with an embedding to translate the vectors back into data. # Let AI specify the tool parameters When configuring [tools](../../../glossary/#ai-tool) connected to the Tools Agent, many parameters can be filled in by the AI model itself. The AI model will use the context from the task and information from other connected tools to fill in the appropriate details. There are two ways to do this, and you can switch between them. ## Let the model fill in the parameter Each appropriate parameter field in the tool's editing dialog has an extra button at the end: On activating this button, the [AI Agent](../../../glossary/#ai-agent) will fill in the expression for you, with no need for any further user input. The field itself is filled in with a message indicating that the parameter has been defined automatically by the model. 
If you want to define the parameter yourself, click on the 'X' in this box to revert to user-defined values. Note that the 'expression' field will now contain the expression generated by this feature; you can edit it further to add extra details as described in the following section. Warning: activating this feature will overwrite any manual definition you may have already added. ## Use the `$fromAI()` function The `$fromAI()` function uses AI to dynamically fill in parameters for tools connected to the [Tools AI agent](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/tools-agent/). Only for tools: the `$fromAI()` function is only available for tools connected to the AI Agent node. The `$fromAI()` function doesn't work with the [Code](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolcode/) tool or with [other non-tool cluster sub-nodes](../../../integrations/builtin/cluster-nodes/sub-nodes/). To use the `$fromAI()` function, call it with the required `key` parameter: ``` {{ $fromAI('email') }} ``` The `key` parameter and other arguments to the `$fromAI()` function aren't references to existing values. Instead, think of these arguments as hints that the AI model will use to populate the right data. For instance, if you choose a key called `email`, the AI model will look for an email address in its context, other tools, and input data. In chat workflows, it may ask the user for an email address if it can't find one elsewhere. You can optionally pass other parameters like `description` to give extra context to the AI model. ### Parameters The `$fromAI()` function accepts the following parameters: | Parameter | Type | Required?
| Description | | -------------- | ------ | --------- | ----------- | | `key` | string | Yes | A string representing the key or name of the argument. This must be between 1 and 64 characters in length and can only contain lowercase letters, uppercase letters, numbers, underscores, and hyphens. | | `description` | string | No | A string describing the argument. | | `type` | string | No | A string specifying the data type. Can be string, number, boolean, or json (defaults to string). | | `defaultValue` | any | No | The default value to use for the argument. | ### Examples As an example, you could use the following `$fromAI()` expression to dynamically populate a field with a name: ``` $fromAI("name", "The commenter's name", "string", "Jane Doe") ``` If you don't need the optional parameters, you could simplify this as: ``` $fromAI("name") ``` To dynamically populate the number of items you have in stock, you could use a `$fromAI()` expression like this: ``` $fromAI("numItemsInStock", "Number of items in stock", "number", 5) ``` If you only want to fill in parts of a field with a dynamic value from the model, you can use it in a normal expression as well.
For example, if you want the model to fill out the `subject` parameter for an email, but always prefix the generated value with the string 'Generated by AI:', you could use the following expression: ``` Generated by AI: {{ $fromAI("subject") }} ``` ### Templates You can see the `$fromAI()` function in action in the following [templates](../../../glossary/#template-n8n): - [Angie, Personal AI Assistant with Telegram Voice and Text](https://n8n.io/workflows/2462-angie-personal-ai-assistant-with-telegram-voice-and-text/) - [Automate Customer Support Issue Resolution using AI Text Classifier](https://n8n.io/workflows/2468-automate-customer-support-issue-resolution-using-ai-text-classifier/) - [Scale Deal Flow with a Pitch Deck AI Vision, Chatbot and QDrant Vector Store](https://n8n.io/workflows/2464-scale-deal-flow-with-a-pitch-deck-ai-vision-chatbot-and-qdrant-vector-store/) # Populate a Pinecone vector database from a website Use n8n to scrape a website, load the data into Pinecone, then query it using a chat workflow. This workflow uses the [HTTP node](../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) to get website data, extracts the relevant content using the [HTML node](../../../integrations/builtin/core-nodes/n8n-nodes-base.html/), then uses the [Pinecone Vector Store node](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstorepinecone/) to send it to Pinecone. [View workflow file](/_workflows/advanced-ai/examples/populate_a_pinecone_vector_database_from_a_website.json) ## Key features This workflow uses: - [HTTP node](../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/): fetches website data. - [HTML node](../../../integrations/builtin/core-nodes/n8n-nodes-base.html/): simplifies the data by extracting the main content from the page.
- [Pinecone Vector Store node](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstorepinecone/) and [Embeddings OpenAI](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/): transform the data into vectors and store it in Pinecone. - [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) and [Question and Answer Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) to query the vector database. ## Using the example To load the template into your n8n instance: 1. Download the workflow JSON file. 1. Open a new workflow in your n8n instance. 1. Copy in the JSON, or select **Workflow menu** > **Import from file...**. The example workflows use Sticky Notes to guide you: - Yellow: notes and information. - Green: instructions to run the workflow. - Orange: you need to change something to make the workflow work. - Blue: draws attention to a key feature of the example. # LangChain learning resources You don't need to know details about LangChain to use n8n, but it can be helpful to learn a few concepts. This page lists some learning resources that people at n8n have found helpful. The [LangChain documentation](https://docs.langchain.com/docs/) includes introductions to key concepts and possible use cases. Choose the [LangChain | Python](https://python.langchain.com/docs/get_started/introduction) or [LangChain | JavaScript](https://js.langchain.com/docs/get_started/introduction/) documentation for quickstarts, code examples, and API documentation. LangChain also provides [code templates](https://github.com/langchain-ai/langchain/tree/master/cookbook) (Python only), offering ideas for potential use cases and common patterns. [What Product People Need To Know About LangChain](https://www.commandbar.com/blog/langchain-guide) provides a list of terminology and concepts, explained with helpful metaphors.
It's aimed at a wide audience. If you prefer video, this [YouTube series by Greg Kamradt](https://youtu.be/_v_fgW2SkkQ?si=8Z2tfAoXnN3lXU9s) works through the LangChain documentation, providing code examples as it goes. n8n offers space to discuss LangChain on the [Discord](https://discord.gg/bAt54txhHg). Join to share your projects and discuss ideas with the community. # LangChain concepts in n8n This page explains how LangChain concepts and features map to n8n nodes, and lists the LangChain-focused nodes available in n8n. You can use any n8n node in a workflow where you interact with LangChain, to link LangChain to other services. These LangChain features use n8n's [Cluster nodes](../../../integrations/builtin/cluster-nodes/). n8n implements LangChain JS: this feature is n8n's implementation of [LangChain's JavaScript framework](https://js.langchain.com/docs/get_started/introduction). ## Trigger nodes [Chat Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-langchain.chattrigger/) ## Cluster nodes [Cluster nodes](../../../glossary/#cluster-node-n8n) are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a [root node](../../../glossary/#root-node-n8n) and one or more [sub-nodes](../../../glossary/#sub-node-n8n) that extend the functionality of the node. ### Root nodes Each cluster starts with one [root node](../../../glossary/#root-node-n8n). #### Chains A [chain](../../../glossary/#ai-chain) is a series of LLMs, and related tools, linked together to support functionality that can't be provided by a single LLM alone.
Available nodes: - [Basic LLM Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainllm/) - [Retrieval Q&A Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainretrievalqa/) - [Summarization Chain](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.chainsummarization/) - [Sentiment Analysis](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.sentimentanalysis/) - [Text Classifier](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.text-classifier/) Learn more about [chaining in LangChain](https://js.langchain.com/docs/concepts/lcel). #### Agents > An [agent](../../../glossary/#ai-agent) has access to a suite of tools, and determines which ones to use depending on the user input. Agents can use multiple tools, and use the output of one tool as the input to the next. [Source](https://github.com/langchain-ai/langchainjs/blob/def3a26c054575e1ed40b9062087e8c0a8899633/docs/core_docs/docs/modules/agents/index.mdx) Available nodes: - [Agent](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.agent/) Learn more about [Agents in LangChain](https://js.langchain.com/docs/concepts/agents). #### Vector stores [Vector stores](../../../glossary/#ai-vector-store) store embedded data, and perform vector searches on it. 
- [Simple Vector Store](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreinmemory/) - [PGVector Vector Store](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstorepgvector/) - [Pinecone Vector Store](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstorepinecone/) - [Qdrant Vector Store](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoreqdrant/) - [Supabase Vector Store](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstoresupabase/) - [Zep Vector Store](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.vectorstorezep/) Learn more about [Vector stores in LangChain](https://js.langchain.com/docs/concepts/vectorstores/). #### Miscellaneous Utility nodes. [LangChain Code](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.code/): import LangChain code directly. If you need functionality that n8n hasn't created a node for, you can still use it. ### Sub-nodes Each root node can have one or more [sub-nodes](../../../glossary/#sub-node-n8n) attached to it. #### Document loaders Document loaders add data to your chain as documents. The data source can be a file or web service. Available nodes: - [Default Document Loader](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) - [GitHub Document Loader](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.documentgithubloader/) Learn more about [Document loaders in LangChain](https://js.langchain.com/docs/concepts/document_loaders). #### Language models [LLMs (large language models)](../../../glossary/#large-language-model-llm) are machine learning models trained on large datasets to understand and generate natural language. They're the key element of working with AI.
Available nodes: - [Anthropic Chat Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatanthropic/) - [AWS Bedrock Chat Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatawsbedrock/) - [Cohere Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmcohere/) - [Hugging Face Inference Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmopenhuggingfaceinference/) - [Mistral Cloud Chat Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatmistralcloud/) - [Ollama Chat Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatollama/) - [Ollama Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmollama/) - [OpenAI Chat Model](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.lmchatopenai/) Learn more about [Language models in LangChain](https://js.langchain.com/docs/concepts/chat_models). #### Memory [Memory](../../../glossary/#ai-memory) retains information about previous queries in a series of queries. For example, when a user interacts with a chat model, it's useful if your application can remember and call on the full conversation, not just the most recent query entered by the user. 
Available nodes: - [Motorhead](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymotorhead/) - [Redis Chat Memory](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memoryredischat/) - [Postgres Chat Memory](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorypostgreschat/) - [Simple Memory](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorybufferwindow/) - [Xata](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memoryxata/) - [Zep](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memoryzep/) Learn more about [Memory in LangChain](https://langchain-ai.github.io/langgraphjs/concepts/memory/). #### Output parsers Output parsers take the text generated by an LLM and format it to match the structure you require. Available nodes: - [Auto-fixing Output Parser](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.outputparserstructured/) Learn more about [Output parsers in LangChain](https://js.langchain.com/docs/concepts/output_parsers/). 
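To illustrate what an output parser does, here is a minimal, self-contained sketch (not n8n's or LangChain's implementation — the function name and validation logic are illustrative): it pulls a JSON object out of raw model text and checks that the required fields are present, which is the essence of structured output parsing.

```python
import json
import re

def parse_structured_output(llm_text: str, required_keys: list[str]) -> dict:
    """Toy structured-output parser: extract the first JSON object
    from raw LLM text and verify it has the expected keys."""
    match = re.search(r"\{.*\}", llm_text, re.DOTALL)
    if match is None:
        raise ValueError("no JSON object found in model output")
    data = json.loads(match.group(0))
    missing = [key for key in required_keys if key not in data]
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

# Models often wrap JSON in prose; the parser recovers just the payload.
raw = 'Sure! Here is the result:\n{"sentiment": "positive", "score": 0.9}'
print(parse_structured_output(raw, ["sentiment", "score"]))
# → {'sentiment': 'positive', 'score': 0.9}
```

An auto-fixing parser adds one more step on top of this: when validation fails, it sends the text and the error back to the model and retries.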
#### Retrievers

- [Contextual Compression Retriever](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievercontextualcompression/)
- [MultiQuery Retriever](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievermultiquery/)
- [Vector Store Retriever](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrievervectorstore/)
- [Workflow Retriever](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.retrieverworkflow/)

#### Text splitters

Text splitters break down data (documents), making it easier for the LLM to process the information and return accurate results.

Available nodes:

- [Character Text Splitter](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplittercharactertextsplitter/)
- [Recursive Character Text Splitter](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/)
- [Token Splitter](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.textsplittertokensplitter/)

n8n's text splitter nodes implement parts of [LangChain's text_splitter API](https://js.langchain.com/docs/concepts/text_splitters/).

#### Tools

Utility [tools](../../../glossary/#ai-tool).
- [Calculator](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolcalculator/) - [Code Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolcode/) - [SerpAPI](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolserpapi/) - [Think Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolthink/) - [Vector Store Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolvectorstore/) - [Wikipedia](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolwikipedia/) - [Wolfram|Alpha](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolwolframalpha/) - [Workflow Tool](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) #### Embeddings > [Embeddings](../../../glossary/#ai-embedding) capture the "relatedness" of text, images, video, or other types of information. ([source](https://supabase.com/docs/guides/ai/concepts)) Available nodes: - [Embeddings AWS Bedrock](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsawsbedrock/) - [Embeddings Cohere](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingscohere/) - [Embeddings Google PaLM](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsgooglepalm/) - [Embeddings Hugging Face Inference](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingshuggingfaceinference/) - [Embeddings Mistral Cloud](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsmistralcloud/) - [Embeddings Ollama](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsollama/) - [Embeddings OpenAI](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.embeddingsopenai/) Learn more about [Text embeddings in 
LangChain](https://js.langchain.com/docs/concepts/embedding_models/). #### Miscellaneous - [Chat Memory Manager](../../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.memorymanager/) # Use LangSmith with n8n [LangSmith](https://www.langchain.com/langsmith) is a developer platform created by the LangChain team. You can connect your n8n instance to LangSmith to record and monitor runs in n8n, just as you can in a LangChain application. Feature availability Self-hosted n8n only. ## Connect your n8n instance to LangSmith 1. [Log in to LangSmith](https://smith.langchain.com/settings) and get your API key. 1. Set the LangSmith environment variables: | Variable | Value | | -------------------------------- | ----------------------------------------------- | | `LANGCHAIN_ENDPOINT` | `"https://api.smith.langchain.com"` | | `LANGCHAIN_TRACING_V2` | `true` | | `LANGCHAIN_API_KEY` | Set this to your API key | | `LANGCHAIN_PROJECT` | Optional project name (defaults to `"default"`) | | `LANGCHAIN_CALLBACKS_BACKGROUND` | `true` (asynchronous trace upload) | Note If you just created your LangSmith account, you will see a project named **"default"** only after the first trace is sent from n8n.\ All traces go to this project unless you set `LANGCHAIN_PROJECT` to a different name. Note Traces may appear with a short delay because `LANGCHAIN_CALLBACKS_BACKGROUND` defaults to asynchronous submission.\ Set it to `false` if you prefer synchronous uploads for debugging. Set the variables so that they're available globally in the environment where you host your n8n instance. You can do this in the same way as the rest of your general configuration. 1. Restart n8n. For information on using LangSmith, refer to [LangSmith's documentation](https://docs.smith.langchain.com/). # LangChain in n8n n8n provides a collection of nodes that implement LangChain's functionality. The LangChain nodes are configurable, meaning you can choose your preferred agent, LLM, memory, and so on. 
Alongside the LangChain nodes, you can connect any n8n node as normal: this means you can integrate your LangChain logic with other data sources and services. - [Learning resources](../langchain-learning-resources/): n8n's documentation for LangChain assumes you're familiar with AI and LangChain concepts. This page provides links to learning resources. - [LangChain concepts and features in n8n](../langchain-n8n/): how n8n represents LangChain concepts and features. # Accessing and using n8n MCP server Connect supported MCP clients to your n8n workflows through n8n's built-in MCP server. The server allows clients such as Lovable or Claude Desktop to connect securely to an n8n instance. Once connected, these clients can: - Search for your workflows - Interact with workflows marked as available in MCP - Trigger and test exposed workflows - Create and edit workflows and data tables ## Difference between instance-level MCP access and MCP Server Trigger node Instance-level MCP access lets you create one connection per n8n instance, use centralized authentication, and choose which workflows to enable for access. Enabled workflows are easy to find and run without extra setup for each workflow. In comparison, you configure an MCP Server Trigger node inside a single workflow. This node exposes tools only from that workflow, a useful approach when you want to craft a specific MCP server behavior within one workflow. ### Key considerations when using instance-level MCP access - MCP supports two types of workflow interactions: running existing workflows with the workflow execution tools, and building or editing workflows (v2.13 onward). - It doesn’t provide blanket exposure to all workflows in your instance. You must enable MCP at the instance level and then enable each workflow individually. 
The only exception is the `search_workflows` tool, which can access all workflows the current user has access to, but it only surfaces previews, not the full workflow data.
- It's not scoped to each MCP client. All clients you connect (for example, Claude Desktop and ChatGPT) can see all workflows you've enabled for MCP access. You can't restrict specific workflows to specific clients. On a user level, visibility remains user-scoped: users can only see MCP-enabled workflows they have access to.
- Most MCP tools work on unpublished workflows. The exception is `execute_workflow`, which defaults to production mode and runs the published version of a workflow. It also supports a `manual` execution mode to run the current (unpublished) version.

## Enabling MCP access

### For Cloud and self-hosted instances

1. Navigate to **Settings > Instance-level MCP**.
1. Toggle **Enable MCP access** (requires instance owner or admin permissions).

Once enabled, you'll see:

1. List of workflows exposed to MCP clients
1. List of connected OAuth clients
1. Main MCP toggle to enable/disable instance-level access
1. *Connection details* button that shows detailed instructions for connecting MCP clients

**To disable:** Toggle the main MCP toggle off.

Environment variables (self-hosted only)

On self-hosted instances, you can also manage MCP settings using environment variables. See [Manage instance settings using environment variables](../../../hosting/configuration/settings-env-vars/#mcp).

### For self-hosted: Complete disablement

To remove the feature entirely, set the environment variable: `N8N_DISABLED_MODULES=mcp`

This removes the MCP endpoints and hides all related UI elements.

## Setting up MCP authentication

The **Connection details** popup menu provides two authentication options for MCP clients:

- **OAuth2**
- **Access Token**

### Using OAuth2

Copy your instance server URL from the **OAuth** tab and use it to configure your MCP client.
After connecting, the client will redirect you to n8n to authorize access. #### Revoking client access To revoke access for connected MCP clients: 1. Navigate to **Settings > Instance-level MCP**. 1. Switch to the **Connected clients** tab. You should see a table of connected OAuth clients. 1. Use the action menu in each client's row to revoke access for specific clients. ### Using Access Token Use your instance server URL and your personal MCP Access Token from the **Access Token** tab on the *Connection details* menu. When you first visit the **MCP Access page**, n8n automatically generates a personal MCP Access Token tied to your user account. Info Copy your token right away. On future visits, you'll only see a redacted value and the copy button will be disabled. #### Rotating your token If you lose your token or need to rotate it: 1. Navigate to **Settings > Instance-level MCP**. 1. Open the *Connection details* menu by clicking the button in the top-right corner. 1. Switch to the **Access Token** tab. 1. Generate a new token using the button next to the redacted token value. n8n revokes the previous token when you generate a new one. 1. Update all connected MCP clients with the new value. ## Exposing workflows to MCP clients By default, no workflows are visible to MCP clients. You must explicitly enable MCP access for each workflow you want to expose. ### Enabling access #### Option 1: From MCP settings page (available from n8n v2.2.0) 1. Click the **Enable workflows** button (in the workflows table header or in the table's empty state) 1. Search for the desired workflow (by name or description) and select it from the list 1. Click **Enable** button to confirm #### Option 2: From the workflow editor 1. Open the workflow. 1. Click the main workflow menu (`...`) in the top-right corner. 1. Select **Settings**. 1. Toggle **Available in MCP**. #### Option 3: From the workflows list 1. Go to **Workflows**. 1. Open the menu on a workflow card. 1. 
Select **Enable MCP access**. ### Managing access The **Instance-level MCP** settings page shows all workflows available to MCP clients. From this list you can: - Open a workflow, its home project or parent folder directly - Revoke access using the action menu (or use **Disable MCP access** from the workflow card menu) - Update workflow description using the action menu (or use the menu in the workflow editor) - Enable access for more workflows using the **Enable workflows** button (available from n8n v2.2.0) ### Workflow descriptions To help MCP clients identify workflows, you can add free-text descriptions as follows: 1. Option 1: From the **Instance-level MCP** page 1. Navigate to **Settings > Instance-level MCP**. 1. Make sure you are on the **Workflows** tab. 1. Use the action menu in the desired workflow's row and select the **Edit description** action. 1. Alternatively, click the description text directly to open the edit dialog. 1. Option 2: From the workflow editor 1. Open the workflow. 1. Click the main workflow menu (`...`) in the top-right corner. 1. Select **Edit description**. ## Tools and resources Tip Consider using coding agents (such as Claude Code or Google ADK agents) instead of chat clients as your MCP clients. Coding agents are optimized for generating and validating TypeScript code, making them ideal for building workflows programmatically. The n8n MCP server exposes tools for workflow management, workflow building, and data tables. For a complete list of available tools and their parameters, refer to the [MCP server tools reference](../mcp_tools_reference/). ## Examples #### Connecting Lovable to n8n MCP server 1. Configure MCP Server in Lovable (OAuth). - Navigate to your workspace  **Settings > Integrations**. - In the **MCP Servers** section, find **n8n** and click **Connect**. - Enter your n8n server URL (shown on the **MCP Access** page). - Save the connection. If successful, n8n redirects you to authorize Lovable. 1. 
Verify connectivity.
- Once connected, Lovable can query for workflows with MCP access enabled.
- **Example:** Asking Lovable to build a workflow UI that lists users and allows deleting them.

#### Connecting Claude Desktop to n8n MCP server

##### Using OAuth2

1. Navigate to **Settings** > **Connectors** in Claude Desktop.
1. Click on **Add custom connector**.
1. Enter the following details:
   - **Name:** n8n MCP
   - **Remote MCP Server URL**: Your n8n base URL (shown on the **Instance-level MCP** page)
1. Save the connector.
1. When prompted, authorize Claude Desktop to access your n8n instance.

##### Using Access Token

Add the following entry to your `claude_desktop_config.json` file:

```
"mcpServers": {
  "n8n-local": {
    "type": "http",
    "url": "https://<your-n8n-url>/mcp-server/http",
    "headers": {
      "Authorization": "Bearer <your-mcp-token>"
    }
  }
}
```

Here, replace:

- `<your-n8n-url>`: Your n8n base URL (shown on the **Instance-level MCP** page)
- `<your-mcp-token>`: Your generated token

#### Connecting Claude Code to n8n MCP server

Use the following CLI command:

```
claude mcp add --transport http n8n-mcp https://<your-n8n-url>/mcp-server/http \
  --header "Authorization: Bearer <your-mcp-token>"
```

Alternatively, add the following entry to your `claude.json` file:

```
{
  "mcpServers": {
    "n8n-local": {
      "type": "http",
      "url": "https://<your-n8n-url>/mcp-server/http",
      "headers": {
        "Authorization": "Bearer <your-mcp-token>"
      }
    }
  }
}
```

Here, replace:

- `<your-n8n-url>`: Your n8n base URL (shown on the **Instance-level MCP** page)
- `<your-mcp-token>`: Your generated token

#### Connecting Codex CLI to n8n MCP server

Add the following entry to your `~/.codex/config.toml` file:

```
[mcp_servers.n8n_mcp]
url = "https://<your-n8n-url>/mcp-server/http"
http_headers = { "authorization" = "Bearer <your-mcp-token>" }
```

Here, replace:

- `<your-n8n-url>`: Your n8n base URL (shown on the **Instance-level MCP** page)
- `<your-mcp-token>`: Your generated token

#### Connecting Google ADK agent to n8n MCP server

Here's sample code to create an agent that connects to a remote n8n MCP server:

```
from google.adk.agents import Agent
from google.adk.tools.mcp_tool import McpToolset
from google.adk.tools.mcp_tool.mcp_session_manager import StreamableHTTPServerParams

N8N_INSTANCE_URL = "https://localhost:5678"
N8N_MCP_TOKEN = "YOUR_N8N_MCP_TOKEN"

root_agent = Agent(
    model="gemini-2.5-pro",
    name="n8n_agent",
    instruction="Help users manage and execute workflows in n8n",
    tools=[
        McpToolset(
            connection_params=StreamableHTTPServerParams(
                url=f"{N8N_INSTANCE_URL}/mcp-server/http",
                headers={
                    "Authorization": f"Bearer {N8N_MCP_TOKEN}",
                },
            ),
        )
    ],
)
```

For more details, see [Connect ADK agent to n8n](https://google.github.io/adk-docs/tools/third-party/n8n/).

## Troubleshooting

If you encounter issues connecting MCP clients to your n8n instance, consider the following:

- Ensure that your n8n instance is publicly accessible if you are using cloud-based MCP clients.
- Verify that MCP access is enabled in n8n settings.
- Check that the workflows you want to access are marked as available in MCP.
- Confirm that the authentication method (OAuth2 or Access Token) is correctly configured in your MCP client.
- Review n8n server logs for any error messages related to MCP connections.

# n8n MCP server tools reference

This page describes all tools exposed by the instance-level MCP server.

______________________________________________________________________

## Workflow management

### search_workflows

Search for workflows with optional filters. Returns a preview of each workflow.
#### Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `query` | `string` | No | Filter by name or description |
| `projectId` | `string` | No | Filter by project ID |
| `limit` | `integer` | No | Limit the number of results (max 200) |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `data` | `array` | List of workflow previews |
| `data[].id` | `string` | The unique identifier of the workflow |
| `data[].name` | `string \| null` | The name of the workflow |
| `data[].description` | `string \| null` | The description of the workflow |
| `data[].active` | `boolean \| null` | Whether the workflow is active |
| `data[].createdAt` | `string \| null` | Creation timestamp |
| `data[].updatedAt` | `string \| null` | Last update timestamp |
| `data[].triggerCount` | `number \| null` | Number of triggers |
| `data[].scopes` | `string[]` | User permissions for this workflow |
| `data[].canExecute` | `boolean` | Whether the user has permission to execute this workflow |
| `data[].availableInMCP` | `boolean` | Whether the workflow is visible to MCP tools |
| `count` | `integer` | Total number of workflows that match the filters |

#### Notes

- Maximum result limit is 200.
- Includes user permission scopes for each workflow so MCP clients know what they can do with it.
- **IMPORTANT**: This tool lists all workflows a user has access to, regardless of their `Available in MCP` setting.

______________________________________________________________________

### get_workflow_details

Get detailed information about a specific workflow including trigger details.
#### Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `workflowId` | `string` | Yes | The ID of the workflow to retrieve |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `workflow` | `object` | Sanitized workflow data safe for MCP consumption |
| `workflow.id` | `string` | Workflow ID |
| `workflow.name` | `string \| null` | Workflow name |
| `workflow.active` | `boolean` | Whether the workflow is active |
| `workflow.isArchived` | `boolean` | Whether the workflow is archived |
| `workflow.versionId` | `string` | The current workflow version ID |
| `workflow.activeVersionId` | `string \| null` | ID of the published (active) version, if any |
| `workflow.triggerCount` | `number` | Number of triggers |
| `workflow.createdAt` | `string \| null` | Creation timestamp |
| `workflow.updatedAt` | `string \| null` | Last update timestamp |
| `workflow.settings` | `object \| null` | Workflow settings |
| `workflow.connections` | `object` | Workflow connections graph |
| `workflow.nodes` | `array` | List of nodes (credentials stripped) |
| `workflow.activeVersion` | `object \| null` | Details of the active version, if published |
| `workflow.tags` | `array` | Tags with `id` and `name` |
| `workflow.meta` | `object \| null` | Workflow metadata |
| `workflow.parentFolderId` | `string \| null` | ID of the parent folder, if any |
| `workflow.description` | `string` | The description of the workflow |
| `workflow.scopes` | `string[]` | User permissions for this workflow |
| `workflow.canExecute` | `boolean` | Whether the user has permission to execute this workflow |
| `triggerInfo` | `string` | Human-readable instructions describing how to trigger the workflow |

#### Notes

- Sensitive credential data is stripped from nodes before returning.
- Includes active version details if the workflow is published.

______________________________________________________________________

### execute_workflow

Execute a workflow by ID by mapping data from the user prompt to trigger inputs. Returns execution ID and status.
This performs a full workflow execution, without mocking or skipping any nodes.

#### Parameters

| Name | Type | Required | Default | Description |
| --- | --- | --- | --- | --- |
| `workflowId` | `string` | Yes | | The ID of the workflow to execute |
| `executionMode` | `"manual" \| "production"` | No | `"production"` | Whether to run the published version (`"production"`) or the current draft (`"manual"`) |
| `inputs` | `object` | No | | Inputs to provide to the workflow (discriminated union, see below) |

**`inputs` variants (discriminated by `type`):**

| Type | Fields | Description |
| --- | --- | --- |
| `chat` | `chatInput: string` | Input for chat-based workflows |
| `form` | `formData: Record` | Input data for form-based workflows |
| `webhook` | `webhookData: { method?, query?, body?, headers? }` | Input data for webhook-based workflows |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `executionId` | `string \| null` | The ID of the execution |
| `status` | `string` | The status of the execution. One of: `"success"`, `"error"`, `"running"`, `"waiting"`, `"canceled"`, `"crashed"`, `"new"`, `"unknown"` |
| `error` | `string` | Error message if the execution failed |

#### Notes

- Only supports workflows with specific trigger node types: Webhook, Chat Trigger, Form Trigger, Manual Trigger, Schedule Trigger.
- When `executionMode` is `"production"`, the workflow must have a published (active) version.
- If a workflow has multiple supported triggers, the workflow execution tools may only be able to use the first of them to trigger the workflow (not applicable to AI Workflow builder workflows).
- Executing workflows with multi-step forms or any kind of human-in-the-loop interactions isn't supported.

______________________________________________________________________

### get_execution

Available from n8n v2.12.0

Get execution details by execution ID and workflow ID. By default returns metadata only.

#### Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `workflowId` | `string` | Yes | The ID of the workflow the execution belongs to |
| `executionId` | `string` | Yes | The ID of the execution to retrieve |
| `includeData` | `boolean` | No | Whether to include full execution result data. Defaults to false (metadata only). |
| `nodeNames` | `string[]` | No | When `includeData` is true, return data only for these nodes. If omitted, data for all nodes is included. |
| `truncateData` | `integer` | No | When `includeData` is true, limit the number of data items returned per node output. |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `execution` | `object \| null` | Execution metadata |
| `execution.id` | `string` | Execution ID |
| `execution.workflowId` | `string` | Workflow ID |
| `execution.mode` | `string` | Execution mode |
| `execution.status` | `string` | Execution status |
| `execution.startedAt` | `string \| null` | Start timestamp |
| `execution.stoppedAt` | `string \| null` | Stop timestamp |
| `execution.retryOf` | `string \| null` | ID of the execution this one is a retry of |
| `execution.retrySuccessId` | `string \| null` | ID of the successful retry, if any |
| `execution.waitTill` | `string \| null` | Timestamp until which the execution is waiting |
| `data` | `unknown` | Execution result data (only present when `includeData` is true) |
| `error` | `string` | Error message if the request failed |

#### Notes

- Use lightweight metadata queries (default) when full execution data isn't needed.
- Filtering by `nodeNames` and truncating via `truncateData` helps manage large result sets.

______________________________________________________________________

### test_workflow

Available from n8n v2.15.0

Test a workflow using pin data to bypass external services. Trigger nodes, nodes with credentials, and HTTP Request nodes are pinned (use simulated data). Other nodes (Set, If, Code, etc.) execute normally, including credential-free I/O nodes like Execute Command or file read/write nodes.

#### Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `workflowId` | `string` | Yes | The ID of the workflow to test |
| `pinData` | `Record` | Yes | Pin data for all workflow nodes. |
| `triggerNodeName` | `string` | No | Optional name of the trigger node to start execution from. Defaults to the first trigger node. |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `executionId` | `string \| null` | The ID of the test execution |
| `status` | `string` | The status of the test execution. One of: `"success"`, `"error"`, `"running"`, `"waiting"`, `"canceled"`, `"crashed"`, `"new"`, `"unknown"` |
| `error` | `string` | Error message if the execution failed |

#### Notes

- Can be used to test workflow logic without setting up credentials or hitting external services.
- This tool executes workflows synchronously (waits for execution to finish).
- Has an enforced MCP execution timeout (5 minutes).

______________________________________________________________________

### prepare_test_pin_data

Available from n8n v2.15.0

Prepare test pin data for a workflow. Trigger nodes, nodes with credentials, and HTTP Request nodes need pin data. Logic nodes (Set, If, Code, etc.)
and credential-free I/O nodes (Execute Command, file read/write) execute normally without pin data. Returns JSON Schemas describing the expected output shape for each node that needs pin data. #### Parameters | Name | Type | Required | Description | | ------------ | -------- | -------- | ---------------------------------------------------- | | `workflowId` | `string` | Yes | The ID of the workflow to generate test pin data for | #### Output | Field | Type | Description | | ----------------------------------- | ---------------------------- | ------------------------------------------------------------------------------------------------------------------- | | `nodeSchemasToGenerate` | `Record` | Nodes that need pin data. Keys are node names, values are JSON Schema objects describing the expected output shape. | | `nodesWithoutSchema` | `string[]` | Node names that need pin data but have no output schema. Use empty defaults `[{"json": {}}]` for each. | | `nodesSkipped` | `string[]` | Nodes that don't need pin data and will execute normally during the test. | | `coverage` | `object` | Coverage statistics | | `coverage.withSchemaFromExecution` | `number` | Nodes with schemas inferred from last successful execution output | | `coverage.withSchemaFromDefinition` | `number` | Nodes with schemas from node type definitions | | `coverage.withoutSchema` | `number` | Nodes with no data or schema | | `coverage.skipped` | `number` | Nodes that will execute normally (no pin data needed) | | `coverage.total` | `number` | Total number of enabled nodes | #### Notes - Schemas should be used to generate realistic sample data for `test_workflow`. ______________________________________________________________________ ### publish_workflow Available from n8n v2.12.0 Publish (activate) a workflow to make it available for production execution. This creates an active version from the current draft. 
#### Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `workflowId` | `string` | Yes | The ID of the workflow to publish |
| `versionId` | `string` | No | Optional version ID to publish. If not provided, publishes the current draft version. |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `success` | `boolean` | Whether publishing succeeded |
| `workflowId` | `string` | The workflow ID |
| `activeVersionId` | `string \| null` | ID of the now-active version |
| `error` | `string` | Error message if publishing failed |

______________________________________________________________________

### unpublish_workflow

Available from n8n v2.12.0

Unpublish (deactivate) a workflow to stop it from being available for production execution.

#### Parameters

| Name | Type | Required | Description |
| --- | --- | --- | --- |
| `workflowId` | `string` | Yes | The ID of the workflow to unpublish |

#### Output

| Field | Type | Description |
| --- | --- | --- |
| `success` | `boolean` | Whether unpublishing succeeded |
| `workflowId` | `string` | The workflow ID |
| `error` | `string` | Error message if unpublishing failed |

______________________________________________________________________

### search_projects

Available from n8n v2.14.0

Search for projects accessible to the current user.
#### Parameters | Name | Type | Required | Description | | ------- | ---------------------- | -------- | -------------------------------------------------------- | | `query` | `string` | No | Filter projects by name (case-insensitive partial match) | | `type` | `"personal" \| "team"` | No | Filter by project type | | `limit` | `integer` | No | Limit the number of results (max 100) | #### Output | Field | Type | Description | | ------------- | ---------------------- | ------------------------------------ | | `data` | `array` | List of matching projects | | `data[].id` | `string` | The unique identifier of the project | | `data[].name` | `string` | The name of the project | | `data[].type` | `"personal" \| "team"` | The project type | | `count` | `integer` | Total number of matching projects | #### Notes - Maximum result limit is 100. - This tool enables MCP clients to create workflows and data tables in a specific project. ______________________________________________________________________ ### search_folders Available from n8n v2.14.0 Search for folders within a project. #### Parameters | Name | Type | Required | Description | | ----------- | --------- | -------- | ------------------------------------------------------- | | `projectId` | `string` | Yes | The ID of the project to search folders in | | `query` | `string` | No | Filter folders by name (case-insensitive partial match) | | `limit` | `integer` | No | Limit the number of results (max 100) | #### Output | Field | Type | Description | | ----------------------- | ---------------- | ------------------------------------------------ | | `data` | `array` | List of matching folders | | `data[].id` | `string` | The unique identifier of the folder | | `data[].name` | `string` | The name of the folder | | `data[].parentFolderId` | `string \| null` | The parent folder ID, or `null` for a root folder | | `count` | `integer` | Total number of matching folders | #### Notes - Maximum result limit is 100. - This tool enables MCP clients to create workflows in a specific folder.
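Both search tools use the same filter semantics. As a minimal sketch of the documented behavior (case-insensitive partial name match, 100-result cap), here is a local Python illustration — the `search_projects` function below is a stand-in written for this example, not the MCP tool itself:

```python
from typing import Optional


def search_projects(projects: list[dict], query: Optional[str] = None,
                    type: Optional[str] = None, limit: int = 100) -> dict:
    """Local illustration of the documented filter semantics."""
    matches = [
        p for p in projects
        # Partial, case-insensitive name match; optional type filter.
        if (query is None or query.lower() in p["name"].lower())
        and (type is None or p["type"] == type)
    ]
    # Maximum result limit is 100.
    return {"data": matches[:min(limit, 100)], "count": len(matches)}


projects = [
    {"id": "1", "name": "Personal", "type": "personal"},
    {"id": "2", "name": "Marketing Team", "type": "team"},
]
result = search_projects(projects, query="market")
print(result["count"])  # → 1 (only "Marketing Team" matches)
```

The returned ID can then be passed as `projectId` to tools such as `search_folders` or `create_data_table`.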
______________________________________________________________________ ## Workflow builder ### get_sdk_reference Available from n8n v2.12.0 Get the n8n Workflow SDK reference documentation including patterns, expression syntax, and functions. #### Parameters | Name | Type | Required | Default | Description | | --------- | -------- | -------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------- | | `section` | `string` | No | `"all"` | Documentation section to retrieve. One of: `"patterns"`, `"expressions"`, `"functions"`, `"rules"`, `"import"`, `"guidelines"`, `"design"`, `"all"` | #### Output | Field | Type | Description | | ----------- | -------- | ------------------------------------------------------------- | | `reference` | `string` | SDK reference documentation content for the requested section | #### Notes - Should be called first before building any workflows. - Sections cover patterns, expression syntax, built-in functions, coding rules, import syntax, naming guidelines, and design guidance. ______________________________________________________________________ ### search_nodes Available from n8n v2.12.0 Search for n8n nodes by service name, trigger type, or utility function. Returns node IDs, discriminators (resource/operation/mode), and related nodes needed for `get_node_types` tool. 
#### Parameters | Name | Type | Required | Description | | --------- | ---------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `queries` | `string[]` | Yes (min 1) | Search queries -- service names (for example `"gmail"`, `"slack"`), trigger types (for example `"schedule trigger"`, `"webhook"`), or utility nodes (for example `"set"`, `"if"`, `"merge"`, `"code"`) | #### Output | Field | Type | Description | | --------- | -------- | ------------------------------------------------------------------------ | | `results` | `string` | Search results with matching node IDs, discriminators, and related nodes | ______________________________________________________________________ ### get_node_types Available from n8n v2.12.0 Get TypeScript type definitions for n8n nodes. Returns exact parameter names and structures. #### Parameters | Name | Type | Required | Description | | --------- | ------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------ | | `nodeIds` | `array` | Yes (min 1) | Array of node IDs. Each element can be a plain string (for example `"n8n-nodes-base.gmail"`) or an object with discriminators (see below). 
| **Node ID object format:** | Field | Type | Required | Description | | ----------- | -------- | -------- | ------------------------------------------------------- | | `nodeId` | `string` | Yes | The node type ID (for example `"n8n-nodes-base.gmail"`) | | `version` | `string` | No | Specific version (for example `"2.1"`) | | `resource` | `string` | No | Resource discriminator (for example `"message"`) | | `operation` | `string` | No | Operation discriminator (for example `"send"`) | | `mode` | `string` | No | Mode discriminator | #### Output | Field | Type | Description | | ------------- | -------- | --------------------------------------------------- | | `definitions` | `string` | TypeScript type definitions for the requested nodes | #### Notes - Critical for correct node configuration - MCP clients should always call before writing workflow code. - Supports both simple string node IDs and objects with discriminators for multi-variant nodes. ______________________________________________________________________ ### get_suggested_nodes Available from n8n v2.12.0 Get curated node recommendations for workflow technique categories. Returns recommended nodes with pattern hints and configuration guidance. Use after analyzing what kind of workflow to build. #### Parameters | Name | Type | Required | Description | | ------------ | ---------- | ----------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `categories` | `string[]` | Yes (min 1) | Workflow technique categories. 
Available values: `chatbot`, `notification`, `scheduling`, `data_transformation`, `data_persistence`, `data_extraction`, `document_processing`, `form_input`, `content_generation`, `triage`, `find_research` | #### Output | Field | Type | Description | | ------------- | -------- | -------------------------------------------------------------------------- | | `suggestions` | `string` | Curated node recommendations with pattern hints and configuration guidance | ______________________________________________________________________ ### validate_workflow Available from n8n v2.12.0 Validate n8n Workflow SDK code. Parses the code into a workflow and checks for errors. Returns the workflow JSON if valid, or detailed error messages to fix. Always validate before creating a workflow. #### Parameters | Name | Type | Required | Description | | ------ | -------- | -------- | ------------------------------------------------------------------------------------------------------ | | `code` | `string` | Yes | Full TypeScript/JavaScript workflow code using the n8n Workflow SDK. Must include the workflow export. | #### Output | Field | Type | Description | | -------------------------- | ---------- | ------------------------------------------------ | | `valid` | `boolean` | Whether the workflow code is valid | | `nodeCount` | `number` | The number of nodes in the workflow (if valid) | | `warnings` | `array` | Validation warnings (if any) | | `warnings[].code` | `string` | The warning code identifying the type of warning | | `warnings[].message` | `string` | The warning message | | `warnings[].nodeName` | `string` | The node that triggered the warning | | `warnings[].parameterPath` | `string` | The parameter path that triggered the warning | | `errors` | `string[]` | Validation errors (if invalid) | #### Notes - Must be called before `create_workflow_from_code` or `update_workflow`. - Warnings may be present even when the code is valid. 
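Since `validate_workflow` can succeed with warnings or fail with errors, a client needs to branch on both fields. A sketch of how an MCP client might act on the result, following the output shape above (the result dicts and warning codes here are hand-written examples, not real tool output):

```python
def ready_to_create(result: dict) -> bool:
    """Return True when the code can be passed to create_workflow_from_code."""
    if not result["valid"]:
        for error in result["errors"]:  # errors: string[] when invalid
            print(f"fix before creating: {error}")
        return False
    # Warnings may be present even when the code is valid.
    for w in result.get("warnings", []):
        print(f"{w['code']} ({w.get('nodeName', '?')}): {w['message']}")
    return True


valid_result = {
    "valid": True,
    "nodeCount": 3,
    "warnings": [{"code": "UNUSED_OUTPUT", "message": "Output not connected",
                  "nodeName": "Set"}],  # example warning, not a documented code
}
invalid_result = {"valid": False, "errors": ["Missing workflow export"]}

print(ready_to_create(valid_result))    # → True
print(ready_to_create(invalid_result))  # → False
```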
______________________________________________________________________ ### create_workflow_from_code Available from n8n v2.12.0 Create a workflow in n8n from validated SDK code. Parses the code into a workflow and saves it. #### Parameters | Name | Type | Required | Description | | ------------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------------------- | | `code` | `string` | Yes | Full TypeScript/JavaScript workflow code using the n8n Workflow SDK. Must be validated first with `validate_workflow`. | | `name` | `string` | No | Optional workflow name (max 128 chars). If not provided, uses the name from the code. | | `description` | `string` | No | Short workflow description (max 255 chars, 1-2 sentences). | | `projectId` | `string` | No | Project ID to create the workflow in. Defaults to the user's personal project. | | `folderId` | `string` | No | Folder ID to create the workflow in. Requires `projectId` to be set. | #### Output | Field | Type | Description | | ------------------------------------------ | -------- | ---------------------------------------------------------------------------------------------------------- | | `workflowId` | `string` | The ID of the created workflow | | `name` | `string` | The name of the created workflow | | `nodeCount` | `number` | The number of nodes in the workflow | | `url` | `string` | The URL to open the workflow in n8n | | `autoAssignedCredentials` | `array` | List of credentials that were automatically assigned to nodes | | `autoAssignedCredentials[].nodeName` | `string` | The name of the node that had credentials auto-assigned | | `autoAssignedCredentials[].credentialName` | `string` | The name of the credential that was auto-assigned | | `note` | `string` | Additional notes about the workflow creation (for example nodes skipped during credential auto-assignment) | #### Notes - Automatically assigns available credentials to nodes. 
- HTTP Request nodes are skipped during credential auto-assignment and must be configured manually. - Sets `availableInMCP` flag to true on the created workflow. - Marks the workflow with `aiBuilderAssisted` metadata. - Resolves webhook node IDs automatically. - `folderId` requires `projectId` to also be provided. - MCP clients should generate short descriptions for all new workflows. ______________________________________________________________________ ### update_workflow Available from n8n v2.12.0 Update an existing workflow in n8n from validated SDK code. Parses the code into a workflow and saves the changes. #### Parameters | Name | Type | Required | Description | | ------------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------------------- | | `workflowId` | `string` | Yes | The ID of the workflow to update | | `code` | `string` | Yes | Full TypeScript/JavaScript workflow code using the n8n Workflow SDK. Must be validated first with `validate_workflow`. | | `name` | `string` | No | Optional workflow name (max 128 chars). If not provided, uses the name from the code. | | `description` | `string` | No | Short workflow description (max 255 chars, 1-2 sentences). 
| #### Output | Field | Type | Description | | ------------------------------------------ | -------- | -------------------------------------------------------------------------------------------------------- | | `workflowId` | `string` | The ID of the updated workflow | | `name` | `string` | The name of the updated workflow | | `nodeCount` | `number` | The number of nodes in the workflow | | `url` | `string` | The URL to open the workflow in n8n | | `autoAssignedCredentials` | `array` | List of credentials that were automatically assigned to nodes | | `autoAssignedCredentials[].nodeName` | `string` | The name of the node that had credentials auto-assigned | | `autoAssignedCredentials[].credentialName` | `string` | The name of the credential that was auto-assigned | | `note` | `string` | Additional notes about the workflow update (for example nodes skipped during credential auto-assignment) | #### Notes - Preserves user-configured credentials from the existing workflow by matching nodes by name and type. - Marks the workflow with `aiBuilderAssisted` metadata. - MCP clients should (re)generate short descriptions for all modified workflows. ______________________________________________________________________ ### archive_workflow Available from n8n v2.12.0 Archive a workflow in n8n by its ID. #### Parameters | Name | Type | Required | Description | | ------------ | -------- | -------- | --------------------------------- | | `workflowId` | `string` | Yes | The ID of the workflow to archive | #### Output | Field | Type | Description | | ------------ | --------- | --------------------------------- | | `archived` | `boolean` | Whether the workflow was archived | | `workflowId` | `string` | The ID of the archived workflow | | `name` | `string` | The name of the archived workflow | #### Notes - Idempotent - skips already-archived workflows. 
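Taken together, the notes above imply a fixed calling order for the workflow-builder tools: fetch the SDK reference first, validate, create, then publish. A minimal sketch of that sequence, assuming a generic `call_tool` dispatcher (a placeholder for a real MCP client, with stubbed return values):

```python
def call_tool(name: str, **params) -> dict:
    """Placeholder dispatcher; a real MCP client would send a tools/call here."""
    print(f"-> {name}")
    # Stubbed responses for illustration only.
    return {"valid": True, "workflowId": "wf_123", "success": True}


def build_and_publish(code: str) -> str:
    call_tool("get_sdk_reference", section="all")       # call first, per the notes
    result = call_tool("validate_workflow", code=code)  # must precede create/update
    if not result["valid"]:
        raise ValueError("validation failed; fix errors and retry")
    created = call_tool("create_workflow_from_code", code=code,
                        description="Example workflow")  # short description recommended
    call_tool("publish_workflow", workflowId=created["workflowId"])
    return created["workflowId"]


workflow_id = build_and_publish("/* validated SDK code */")
print(workflow_id)  # wf_123 (stub value)
```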
______________________________________________________________________ ## Data tables ### search_data_tables Available from n8n v2.16.0 Search for data tables accessible to the current user. Use this to find a data table ID before modifying or adding data to it. #### Parameters | Name | Type | Required | Description | | ----------- | --------- | -------- | ----------------------------------------------------------- | | `query` | `string` | No | Filter data tables by name (case-insensitive partial match) | | `projectId` | `string` | No | Filter by project ID | | `limit` | `integer` | No | Limit the number of results (max 100) | #### Output | Field | Type | Description | | ------------------------ | --------- | ----------------------------------------------------------------------- | | `data` | `array` | List of data tables matching the query | | `data[].id` | `string` | Unique identifier of the data table | | `data[].name` | `string` | The name of the data table | | `data[].projectId` | `string` | The project this data table belongs to | | `data[].createdAt` | `string` | ISO timestamp when the data table was created | | `data[].updatedAt` | `string` | ISO timestamp when the data table was last updated | | `data[].columns` | `array` | The columns defined in this data table | | `data[].columns[].id` | `string` | Column unique identifier | | `data[].columns[].name` | `string` | Column name | | `data[].columns[].type` | `string` | Column data type. One of: `"string"`, `"number"`, `"boolean"`, `"date"` | | `data[].columns[].index` | `integer` | Column position in the table | | `count` | `integer` | Total number of matching data tables | #### Notes - Maximum result limit is 100. ______________________________________________________________________ ### create_data_table Available from n8n v2.16.0 Create a new data table with the specified columns. 
#### Parameters | Name | Type | Required | Description | | ---------------- | -------- | ----------- | ----------------------------------------------------------------------------------------------------- | | `projectId` | `string` | Yes | The project ID where the data table will be created | | `name` | `string` | Yes | The name of the data table (min 1, max 128 chars, must be unique within the project) | | `columns` | `array` | Yes (min 1) | The columns to create in the data table | | `columns[].name` | `string` | Yes | Column name. Must start with a letter, contain only letters, numbers, and underscores (max 63 chars). | | `columns[].type` | `string` | Yes | The data type of the column. One of: `"string"`, `"number"`, `"boolean"`, `"date"` | #### Output | Field | Type | Description | | ----------- | -------- | ----------------------------------------------- | | `id` | `string` | The unique identifier of the created data table | | `name` | `string` | The name of the created data table | | `projectId` | `string` | The project ID of the created data table | #### Notes - At least one column is required. - Table name must be unique within the project. - Column names must match the pattern: `^[a-zA-Z][a-zA-Z0-9_]*$` (max 63 chars). ______________________________________________________________________ ### add_data_table_column Available from n8n v2.16.0 Add a new column to an existing data table. #### Parameters | Name | Type | Required | Description | | ------------- | -------- | -------- | ----------------------------------------------------------------------------------------------------- | | `dataTableId` | `string` | Yes | The ID of the data table to add a column to | | `projectId` | `string` | Yes | The project ID the data table belongs to | | `name` | `string` | Yes | Column name. Must start with a letter, contain only letters, numbers, and underscores (max 63 chars). | | `type` | `string` | Yes | The data type of the new column. 
One of: `"string"`, `"number"`, `"boolean"`, `"date"` | #### Output | Field | Type | Description | | ------------- | --------- | ------------------------------- | | `success` | `boolean` | Whether the operation succeeded | | `message` | `string` | Description of the result | | `column` | `object` | The created column | | `column.id` | `string` | Column unique identifier | | `column.name` | `string` | Column name | | `column.type` | `string` | Column data type | #### Notes - Column names must match the pattern: `^[a-zA-Z][a-zA-Z0-9_]*$` (max 63 chars). - Column type is immutable (through MCP) after creation. ______________________________________________________________________ ### rename_data_table_column Available from n8n v2.16.0 Rename a column in a data table. #### Parameters | Name | Type | Required | Description | | ------------- | -------- | -------- | ----------------------------------------------------- | | `dataTableId` | `string` | Yes | The ID of the data table containing the column | | `projectId` | `string` | Yes | The project ID the data table belongs to | | `columnId` | `string` | Yes | The ID of the column to rename | | `name` | `string` | Yes | The new column name. Must follow column naming rules. | #### Output | Field | Type | Description | | ------------- | --------- | ------------------------------- | | `success` | `boolean` | Whether the operation succeeded | | `message` | `string` | Description of the result | | `column` | `object` | The renamed column | | `column.id` | `string` | Column unique identifier | | `column.name` | `string` | New column name | | `column.type` | `string` | Column data type | #### Notes - New name must follow column naming rules: `^[a-zA-Z][a-zA-Z0-9_]*$` (max 63 chars). ______________________________________________________________________ ### delete_data_table_column Available from n8n v2.16.0 Delete a column from a data table. This permanently removes the column and all its data.
#### Parameters | Name | Type | Required | Description | | ------------- | -------- | -------- | ---------------------------------------------- | | `dataTableId` | `string` | Yes | The ID of the data table containing the column | | `projectId` | `string` | Yes | The project ID the data table belongs to | | `columnId` | `string` | Yes | The ID of the column to delete | #### Output | Field | Type | Description | | --------- | --------- | ------------------------------- | | `success` | `boolean` | Whether the operation succeeded | | `message` | `string` | Description of the result | #### Notes - Deleting a column through MCP can't be undone. ______________________________________________________________________ ### rename_data_table Available from n8n v2.16.0 Rename an existing data table. #### Parameters | Name | Type | Required | Description | | ------------- | -------- | -------- | ------------------------------------------------------ | | `dataTableId` | `string` | Yes | The ID of the data table to rename | | `projectId` | `string` | Yes | The project ID the data table belongs to | | `name` | `string` | Yes | The new name for the data table (min 1, max 128 chars) | #### Output | Field | Type | Description | | --------- | --------- | ------------------------------- | | `success` | `boolean` | Whether the operation succeeded | | `message` | `string` | Description of the result | #### Notes - Name must be unique within the project. ______________________________________________________________________ ### add_data_table_rows Available from n8n v2.16.0 Insert rows into an existing data table. Each row is an object mapping column names to values. 
#### Parameters | Name | Type | Required | Description | | ------------- | -------- | --------------------- | --------------------------------------------------------------------------------------------------------- | | `dataTableId` | `string` | Yes | The ID of the data table to insert rows into | | `projectId` | `string` | Yes | The project ID the data table belongs to | | `rows` | `array` | Yes (min 1, max 1000) | Array of row objects. Each object maps column names to values (`string`, `number`, `boolean`, or `null`). | #### Output | Field | Type | Description | | --------------- | --------- | -------------------------------------- | | `success` | `boolean` | Whether the insert operation succeeded | | `insertedCount` | `integer` | Number of rows successfully inserted | #### Notes - Maximum 1000 rows per call. - Row values must be `string`, `number`, `boolean`, or `null`. - Column names in row objects must match existing column names in the data table. # n8n public REST API Feature availability The n8n API isn't available during the free trial. Please upgrade to access this feature. Using n8n's public [API](../glossary/#api), you can programmatically perform many of the same tasks as you can in the GUI. This section introduces n8n's REST API, including: - How to [authenticate](authentication/) - [Paginating](pagination/) results - Using the [built-in API playground](using-api-playground/) (self-hosted n8n only) - [Endpoint reference](api-reference/) n8n provides an [n8n API node](../integrations/builtin/core-nodes/n8n-nodes-base.n8n/) to access the API in your workflows. ## Choose your interaction method ### REST API (This section) Interact with n8n directly using HTTP requests. Ideal for: - Custom integrations and applications - Language-agnostic HTTP calls - Direct REST API usage in workflows ### n8n CLI (Recommended for developers) Use [n8n CLI](n8n-cli/) for a command-line experience. 
It wraps the public API and is optimized for: - Command-line automation and scripting - CI/CD pipeline integration - AI agent integration (Claude Code, Cursor, etc.) - Developers who prefer CLI tools ## Learn about REST APIs The API documentation assumes you are familiar with REST APIs. If you're not, these resources may be helpful: - [KnowledgeOwl's guide to working with APIs](https://support.knowledgeowl.com/help/working-with-apis): a basic introduction, including examples of how to call REST APIs. - [IBM Cloud Learn Hub - What is an Application Programming Interface (API)](https://www.ibm.com/cloud/learn/api): this gives a general, but technical, introduction to APIs. - [IBM Cloud Learn Hub - What is a REST API?](https://www.ibm.com/cloud/learn/rest-apis): more detailed information about REST APIs. - [MDN web docs - An overview of HTTP](https://developer.mozilla.org/en-US/docs/Web/HTTP/Overview): REST APIs work over HTTP and use HTTP verbs, or methods, to specify the action to perform. Use the API playground Trying out the API in the [playground](using-api-playground/) can help you understand how APIs work. If you're worried about changing live data, consider setting up a test workflow, or test n8n instance, to explore safely. # API authentication n8n uses API keys to authenticate API calls. Feature availability The n8n API isn't available during the free trial. Please upgrade to access this feature. ## API Scopes Users of [enterprise instances](https://n8n.io/enterprise/) can limit which resources and actions a key can access with scopes. API key scopes allow you to specify the exact level of access a key needs for its intended purpose. Non-enterprise API keys have full access to all the account's resources and capabilities. ## Create an API key 1. Log in to n8n. 1. Go to **Settings** > **n8n API**. 1. Select **Create an API key**. 1. Choose a **Label** and set an **Expiration** time for the key. 1. If on an enterprise plan, choose the **Scopes** to give the key.
1. Copy **My API Key** and use this key to authenticate your calls. ## Call the API using your key Send the API key in your API call as a header named `X-N8N-API-KEY`. For example, say you want to get all active workflows. Your curl request will look like this:

```
# For a self-hosted n8n instance
curl -X 'GET' \
  '<your-instance-url>/api/v1/workflows?active=true' \
  -H 'accept: application/json' \
  -H 'X-N8N-API-KEY: <your-api-key>'

# For n8n Cloud
curl -X 'GET' \
  'https://<your-cloud-instance>.app.n8n.cloud/api/v1/workflows?active=true' \
  -H 'accept: application/json' \
  -H 'X-N8N-API-KEY: <your-api-key>'
```

## Delete an API key 1. Log in to n8n. 1. Go to **Settings** > **n8n API**. 1. Select **Delete** next to the key you want to delete. 1. Confirm the delete by selecting **Delete Forever**. # API pagination The default page size is 100 results. You can change the page size limit. The maximum permitted size is 250. When a response contains more than one page, it includes a cursor, which you can use to request the next pages. For example, say you want to get all active workflows, 150 at a time. Get the first page:

```
# For a self-hosted n8n instance
curl -X 'GET' \
  '<your-instance-url>/api/v1/workflows?active=true&limit=150' \
  -H 'accept: application/json' \
  -H 'X-N8N-API-KEY: <your-api-key>'

# For n8n Cloud
curl -X 'GET' \
  'https://<your-cloud-instance>.app.n8n.cloud/api/v1/workflows?active=true&limit=150' \
  -H 'accept: application/json' \
  -H 'X-N8N-API-KEY: <your-api-key>'
```

The response is in JSON format, and includes a `nextCursor` value. This is an example response.
```
{
  "data": [
    // The response contains an object for each workflow
    {
      // Workflow data
    }
  ],
  "nextCursor": "MTIzZTQ1NjctZTg5Yi0xMmQzLWE0NTYtNDI2NjE0MTc0MDA"
}
```

Then to request the next page:

```
# For a self-hosted n8n instance
curl -X 'GET' \
  '<your-instance-url>/api/v1/workflows?active=true&limit=150&cursor=MTIzZTQ1NjctZTg5Yi0xMmQzLWE0NTYtNDI2NjE0MTc0MDA' \
  -H 'accept: application/json'

# For n8n Cloud
curl -X 'GET' \
  'https://<your-cloud-instance>.app.n8n.cloud/api/v1/workflows?active=true&limit=150&cursor=MTIzZTQ1NjctZTg5Yi0xMmQzLWE0NTYtNDI2NjE0MTc0MDA' \
  -H 'accept: application/json'
```

# Using an API playground This documentation site provides a playground to test out calls. Self-hosted users also have access to a built-in playground hosted as part of their instance. ## Documentation playground You can test API calls from this site's [API reference](../api-reference/). You need to set your server's base URL and instance name, and add an API key. n8n uses [Scalar's](https://github.com/scalar/scalar) open source API platform to power this functionality. Exposed API key and data Use a test API key with limited scopes and test data when using a playground. All calls from the playground are routed through Scalar's proxy servers. Real data You have access to your live data. This is useful for trying out requests. Be aware you can change or delete real data. ## Built-in playground Feature availability The API playground isn't available on Cloud. It's available for all self-hosted pricing tiers. The n8n API comes with a built-in Swagger UI playground in self-hosted versions. This provides interactive documentation, where you can try out requests. The path to access the playground depends on your hosting. n8n constructs the path from values set in your environment variables:

```
N8N_HOST:N8N_PORT/N8N_PATH/api/v1/docs
```

The API version number is `1`. There may be multiple versions available in the future. Real data If you select **Authorize** and enter your API key in the API playground, you have access to your live data.
This is useful for trying out requests. Be aware you can change or delete real data. The API includes built-in documentation about credential formats. This is available using the `credentials` endpoint:

```
N8N_HOST:N8N_PORT/N8N_PATH/api/v1/credentials/schema/{credentialTypeName}
```

How to find `credentialTypeName` To find the type, download your workflow as JSON and examine it. For example, for a Google Drive node the `{credentialTypeName}` is `googleDriveOAuth2Api`:

```
{
  ...,
  "credentials": {
    "googleDriveOAuth2Api": {
      "id": "9",
      "name": "Google Drive"
    }
  }
}
```

# Getting started with n8n CLI **n8n CLI** is a lightweight command-line client that communicates with a running n8n instance through the [n8n API](../). It can run from any machine with network access and authenticates using an API key. n8n CLI is in beta. Use it only for experimenting, local development, and personal projects, not for production workflows. Use n8n CLI to: - List and inspect workflows - Create a workflow from JSON - Check recent executions - Create a credential - Manage projects All operations respect the permissions of the user and the scope of the API key. ## n8n CLI versus server CLI If you need to manage your n8n instance (backups, license management, emergency resets), see the [Server CLI](../../hosting/cli-commands/), a built-in tool that runs on the same machine as n8n.
| Aspect | n8n CLI | Server CLI | | ------------------------ | -------------------------------------- | ---------------------------------------- | | **Runs from** | Any machine with network access | Same machine as n8n | | **Authentication** | API key | Direct database access | | **Requires n8n running** | Yes | No (not required for most operations) | | **Best for** | Developers, integrations, AI agents | Instance operators, backups, emergencies | | **Permissions** | Respects user roles and API key scopes | Bypasses access control | ## Install n8n-cli

```
# Use directly with npx (zero install)
npx @n8n/cli workflow list

# Or install globally
npm install -g @n8n/cli
```

## Connect to your instance

```
n8n-cli config set-url https://your-instance.n8n.cloud
n8n-cli config set-api-key YOUR_API_KEY
n8n-cli config show
```

- The configuration is saved to `~/.n8n-cli/config.json` with restricted file permissions (`0600`). - Get your API key from **n8n > Settings > n8n API**. Alternatively, skip the configuration file and use environment variables:

```
export N8N_URL=https://your-instance.n8n.cloud
export N8N_API_KEY=your_api_key
```

## Inline flags

```
n8n-cli --url=https://my-n8n.app.n8n.cloud --api-key=n8n_api_xxxxx workflow list
```

### Resolution order 1. Command-line flags (`--url`, `--api-key`) 1. Environment variables (`N8N_URL`, `N8N_API_KEY`) 1. Config file (`~/.n8n-cli/config.json`) ## Commands Every command supports `--help` for detailed usage.
| Topic | Commands | | ------------------ | -------------------------------------------------------------------------------------------------- | | `workflow` | `list`, `get`, `create`, `update`, `delete`, `activate`, `deactivate`, `tags`, `transfer` | | `execution` | `list`, `get`, `retry`, `stop`, `delete` | | `credential` | `list`, `get`, `schema`, `create`, `delete`, `transfer` | | `project` | `list`, `get`, `create`, `update`, `delete`, `members`, `add-member`, `remove-member` | | `tag` | `list`, `create`, `update`, `delete` | | `variable` | `list`, `create`, `update`, `delete` | | `data-table` | `list`, `get`, `create`, `delete`, `rows`, `add-rows`, `update-rows`, `upsert-rows`, `delete-rows` | | `user` | `list`, `get` | | `config` | `set-url`, `set-api-key`, `show` | | `source-control` | `pull` | | `skill` | `install` | | `audit` | (top-level) | | `login` / `logout` | (top-level) | ## Output formats All commands support three output formats via `--format`: | Format | Flag | Use when | | ------- | -------------------------- | --------------------------------------- | | Table | `--format=table` (default) | You want human-readable terminal output | | JSON | `--format=json` | Piping to jq, programmatic use | | ID-only | `--format=id-only` | Piping to xargs, scripting | ### Examples - Human-readable table

```
n8n-cli workflow list
```

- JSON for scripts

```
n8n-cli workflow list --format=json | jq '.[] | select(.active) | .id'
```

- Pipe IDs into another command

```
n8n-cli workflow list --format=id-only | xargs -I{} n8n-cli workflow deactivate {}
```

## Use as skill with Claude Code Install the skill so Claude always knows how to use n8n-cli:

```
n8n-cli skill install --global
```

Then in Claude Code, type `/n8n-cli` to load it. Claude can now create, update, and manage workflows on your behalf without requiring an MCP.
## Examples ### List and inspect workflows

```
n8n-cli workflow list
n8n-cli workflow get
```

### Create a workflow from JSON

```
cat workflow.json | n8n-cli workflow create --stdin
```

### Check recent executions

```
n8n-cli execution list --status=error --limit=10
```

### Create a credential

```
n8n-cli credential schema gmailOAuth2 # see required fields first
n8n-cli credential create --type=gmailOAuth2 --name='My Gmail' --file=cred.json
```

### Manage projects

```
n8n-cli project create --name="My Project"
n8n-cli workflow transfer --project=
```

# Code in n8n n8n is a low-code tool. This means you can do a lot without code, then add code when needed. ## Code in your workflows There are two places in your workflows where you can use code: - **Expressions** Use [expressions](../glossary/#expression-n8n) to transform [data](../data/) in your nodes. You can use JavaScript in expressions, as well as n8n's [Built-in methods and variables](builtin/overview/). [Expressions](../data/expressions/) - **Code node** Use the Code node to add JavaScript or Python to your workflow. [Code node](code-node/) ## Other technical resources These are features that are relevant to technical users. ### Technical nodes n8n provides core nodes, which simplify adding key functionality such as API requests, webhooks, scheduling, and file handling. - **Write a backend** The [HTTP Request](../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/), [Webhook](../integrations/builtin/core-nodes/n8n-nodes-base.webhook/), and [Code](code-node/) nodes help you make API calls, respond to webhooks, and write any JavaScript in your workflow. Use this to do things like [Create an API endpoint](https://n8n.io/workflows/1750-creating-an-api-endpoint/).
[Core nodes](../integrations/builtin/core-nodes/) - **Represent complex logic** You can build complex flows using nodes like [If](../integrations/builtin/core-nodes/n8n-nodes-base.if/), [Switch](../integrations/builtin/core-nodes/n8n-nodes-base.switch/), and [Merge](../integrations/builtin/core-nodes/n8n-nodes-base.merge/) nodes. [Flow logic](../flow-logic/) ### Other developer resources - **The n8n API** n8n provides an API where you can programmatically perform many of the same tasks as you can in the GUI. There's an [n8n API node](../integrations/builtin/core-nodes/n8n-nodes-base.n8n/) to access the API in your workflows. [API](../api/) - **Self-host** You can self-host n8n. This keeps your data on your own infrastructure. [Hosting](../hosting/) - **Build your own nodes** You can build custom nodes, install them on your n8n instance, and publish them to [npm](https://www.npmjs.com/). [Creating nodes](../integrations/creating-nodes/overview/) # AI coding with GPT Not available on self-hosted. Python isn't supported. ## Use AI in the Code node Feature availability AI assistance in the Code node is available to Cloud users. It isn't available in self-hosted n8n. AI-generated code overwrites your code If you've already written some code on the **Code** tab, the AI-generated code will replace it. n8n recommends using AI as a starting point to create your initial code, then editing it as needed. To use ChatGPT to generate code in the Code node: 1. In the Code node, set **Language** to **JavaScript**. 1. Select the **Ask AI** tab. 1. Write your query. 1. Select **Generate Code**. n8n sends your query to ChatGPT, then displays the result in the **Code** tab. ## Usage limits During the trial phase there are no usage limits. If n8n makes the feature permanent, there may be usage limits as part of your pricing tier. ## Feature limits The ChatGPT implementation in n8n has the following limitations: - The AI writes code that manipulates data from the n8n workflow.
You can't ask it to pull in data from other sources. - The AI doesn't know your data, just the schema, so you need to tell it things like how to find the data you want to extract, or how to check for null. - Nodes before the Code node must execute and deliver data to the Code node before you run your AI query. - Doesn't work with large incoming data schemas. - May have issues if there are a lot of nodes before the Code node. ## Writing good prompts Writing good prompts increases the chance of getting useful code back. Some general tips: - Provide examples: if possible, give a sample expected output. This helps the AI to better understand the transformation or logic you’re aiming for. - Describe the processing steps: if there are specific processing steps or logic that should apply to the data, list them in sequence. For example: "First, filter out all users under 18. Then, sort the remaining users by their last name." - Avoid ambiguities: while the AI understands various instructions, being clear and direct ensures you get the most accurate code. Instead of saying "Get the older users," you might say "Filter users who are 60 years and above." - Be clear about what you expect as the output. Do you want the data transformed, filtered, aggregated, or sorted? Provide as much detail as possible. And some n8n-specific guidance: - Think about the input data: make sure ChatGPT knows which pieces of the data you want to access, and what the incoming data represents. You may need to tell ChatGPT about the availability of n8n's built-in methods and variables. - Declare interactions between nodes: if your logic involves data from multiple nodes, specify how they should interact. "Merge the output of 'Node A' with 'Node B' based on the 'userID' property". If you prefer data to come from certain nodes or to ignore others, be clear: "Only consider data from the 'Purchases' node and ignore the 'Refunds' node." - Ensure the output is compatible with n8n.
Refer to [Data structure](../../data/data-structure/) for more information on the data structure n8n requires. ### Example prompts These examples show a range of possible prompts and tasks. #### Example 1: Find a piece of data inside a second dataset To try the example yourself, [download the example workflow](../../_workflows/ai-code/find-a-piece-of-data.json) and import it into n8n. In the third Code node, enter this prompt: > The slack data contains only one item. The input data represents all Notion users. Sometimes the person property that holds the email can be null. I want to find the notionId of the Slack user and return it. Take a look at the code the AI generates. This is the JavaScript you need:

```
const slackUser = $("Mock Slack").all()[0];
const notionUsers = $input.all();
const slackUserEmail = slackUser.json.email;

const notionUser = notionUsers.find(
  (user) => user.json.person && user.json.person.email === slackUserEmail
);

return notionUser ? [{ json: { notionId: notionUser.json.id } }] : [];
```

#### Example 2: Data transformation To try the example yourself, [download the example workflow](../../_workflows/ai-code/data-transformation.json) and import it into n8n. In the **Join items** Code node, enter this prompt: > Return a single line of text that has all usernames listed with a comma. Each username should be enquoted with a double quotation mark. Take a look at the code the AI generates. This is the JavaScript you need:

```
const items = $input.all();
const usernames = items.map((item) => `"${item.json.username}"`);
const result = usernames.join(", ");
return [{ json: { usernames: result } }];
```

#### Example 3: Summarize data and create a Slack message To try the example yourself, [download the example workflow](../../_workflows/ai-code/summarize-data.json) and import it into n8n. In the **Summarize** Code node, enter this prompt: > Create a markdown text for Slack that counts how many ideas, features and bugs have been submitted.
The type of submission is saved in the property_type field. A feature has the property "Feature", a bug has the property "Bug" and an idea has the property "Idea". Also, list the five top submissions by vote in that message. Use "" as markdown for links. Take a look at the code the AI generates. This is the JavaScript you need:

```
const submissions = $input.all();

// Count the number of ideas, features, and bugs
let ideaCount = 0;
let featureCount = 0;
let bugCount = 0;
submissions.forEach((submission) => {
  switch (submission.json.property_type[0]) {
    case "Idea":
      ideaCount++;
      break;
    case "Feature":
      featureCount++;
      break;
    case "Bug":
      bugCount++;
      break;
  }
});

// Sort submissions by votes and take the top 5
const topSubmissions = submissions
  .sort((a, b) => b.json.property_votes - a.json.property_votes)
  .slice(0, 5);

let topSubmissionText = "";
topSubmissions.forEach((submission) => {
  topSubmissionText += `<${submission.json.url}|${submission.json.name}> with ${submission.json.property_votes} votes\n`;
});

// Construct the Slack message
const slackMessage = `*Summary of Submissions*\n Ideas: ${ideaCount}\n Features: ${featureCount}\n Bugs: ${bugCount}\n Top 5 Submissions:\n ${topSubmissionText}`;

return [{ json: { slackMessage } }];
```

### Reference incoming node data explicitly If your incoming data contains nested fields, using dot notation to reference them can help the AI understand what data you want. To try the example yourself, [download the example workflow](../../_workflows/ai-code/reference-incoming-data-explicitly.json) and import it into n8n. In the second Code node, enter this prompt: > The data in "Mock data" represents a list of people. For each person, return a new item containing personal_info.first_name and work_info.job_title.
This is the JavaScript you need:

```
const items = $input.all();

const newItems = items.map((item) => {
  const firstName = item.json.personal_info.first_name;
  const jobTitle = item.json.work_info.job_title;

  return {
    json: {
      firstName,
      jobTitle,
    },
  };
});

return newItems;
```

### Related resources Pluralsight offers a short guide on [How to use ChatGPT to write code](https://www.pluralsight.com/blog/software-development/how-use-chatgpt-programming-coding), which includes example prompts. ## Fixing the code The AI-generated code may work without any changes, but you may have to edit it. You need to be aware of n8n's [Data structure](../../data/data-structure/). You may also find n8n's built-in methods and variables useful. # Using the Code node Use the Code node to write custom JavaScript or Python and run it as a step in your workflow. Coding in n8n This page gives usage information about the Code node. For more guidance on coding in n8n, refer to the [Code](../) section. It includes: - Reference documentation on [Built-in methods and variables](../builtin/overview/) - Guidance on [Handling dates](../../data/specific-data-types/luxon/) and [Querying JSON](../../data/specific-data-types/jmespath/) - A growing collection of examples in the [Cookbook](../cookbook/code-node/) Examples and templates For usage examples and templates to help you get started, refer to n8n's [Code integrations](https://n8n.io/integrations/code/) page. Function and Function Item nodes The Code node replaces the Function and Function Item nodes from version 0.198.0. If you're using an older version of n8n, you can still view the [Function node documentation](https://github.com/n8n-io/n8n-docs/blob/67935ad2528e2e30d7984ea917e4af2910a096ec/docs/integrations/builtin/core-nodes/n8n-nodes-base.function.md) and [Function Item node documentation](https://github.com/n8n-io/n8n-docs/blob/67935ad2528e2e30d7984ea917e4af2910a096ec/docs/integrations/builtin/core-nodes/n8n-nodes-base.functionItem.md).
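For a quick orientation before the usage details that follow: Code node scripts receive items as objects with a `json` key, and must return the same shape. A minimal sketch of the default all-items mode, using a mocked input array in place of the real `$input.all()` call (the field names are illustrative):

```javascript
// Mocked input items; inside an actual Code node you'd call $input.all() instead.
const items = [{ json: { name: "Ada" } }, { json: { name: "Grace" } }];

// "Run Once for All Items" mode: the script sees every incoming item at once.
const output = items.map((item) => ({
  json: { ...item.json, greeting: `Hello, ${item.json.name}` },
}));

// A Code node must return an array of objects, each with a `json` property.
console.log(output);
```

In a real Code node you would end the script with `return output;` rather than logging it.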
## Usage How to use the Code node. ### Choose a mode There are two modes: - **Run Once for All Items**: this is the default. When your workflow runs, the code in the code node executes once, regardless of how many input items there are. - **Run Once for Each Item**: choose this if you want your code to run for every input item. ## JavaScript The Code node supports Node.js. ### Supported JavaScript features The Code node supports: - Promises. Instead of returning the items directly, you can return a promise which resolves accordingly. - Writing to your browser console using `console.log`. This is useful for debugging and troubleshooting your workflows. ### External libraries If you self-host n8n, you can import and use built-in and external npm modules in the Code node. To learn how to enable external modules, refer to the [Enable modules in Code node](../../hosting/configuration/configuration-examples/modules-in-code-node/) guide. If you use n8n Cloud, you can't import external npm modules. n8n makes two modules available for you: - [crypto Node.js module](https://nodejs.org/docs/latest-v18.x/api/crypto.html) - [moment npm package](https://www.npmjs.com/package/moment) ### Built-in methods and variables n8n provides built-in methods and variables for working with data and accessing n8n data. Refer to [Built-in methods and variables](../builtin/overview/) for more information. The syntax to use the built-in methods and variables is `$variableName` or `$methodName()`. Type `$` in the Code node or expressions editor to see a list of suggested methods and variables. ### Keyboard shortcuts The Code node editing environment supports time-saving and useful keyboard shortcuts for a range of operations, from autocompletion to code folding and multiple cursors. See the full list of [keyboard shortcuts](../../integrations/builtin/core-nodes/n8n-nodes-base.code/keyboard-shortcuts/). ## Python (Pyodide - legacy) Pyodide is a legacy feature.
n8n v2 no longer supports this feature. n8n added Python support in version 1.0. It doesn't include a Python executable. Instead, n8n provides Python support using [Pyodide](https://pyodide.org/en/stable/), which is a port of CPython to WebAssembly. This limits the available Python packages to the [Packages included with Pyodide](https://pyodide.org/en/stable/usage/packages-in-pyodide.html#packages-in-pyodide). n8n downloads the package automatically the first time you use it. Slower than JavaScript The Code node takes longer to process Python than JavaScript. This is due to the extra compilation steps. ### Built-in methods and variables n8n provides built-in methods and variables for working with data and accessing n8n data. Refer to [Built-in methods and variables](../builtin/overview/) for more information. The syntax to use the built-in methods and variables is `_variableName` or `_methodName()`. Type `_` in the Code node to see a list of suggested methods and variables. ### Keyboard shortcuts The Code node editing environment supports time-saving and useful keyboard shortcuts for a range of operations, from autocompletion to code folding and multiple cursors. See the full list of [keyboard shortcuts](../../integrations/builtin/core-nodes/n8n-nodes-base.code/keyboard-shortcuts/). ## File system and HTTP requests You can't access the file system or make HTTP requests. Use the following nodes instead: - [Read/Write File From Disk](../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/) - [HTTP Request](../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) ## Python (Native) n8n added native Python support using task runners in version 1.111.0. This feature is stable as of n8n v2. Main differences from Pyodide: - Native Python supports only `_items` in all-items mode and `_item` in per-item mode. It doesn't support other n8n built-in methods and variables.
- On self-hosted, native Python supports importing native Python modules from the standard library and from third parties, if the `n8nio/runners` image includes them and explicitly allowlists them. See [adding extra dependencies for task runners](../../hosting/configuration/task-runners/#adding-extra-dependencies) for more details. - Native Python denies insecure built-ins by default. See [task runners environment variables](../../hosting/configuration/environment-variables/task-runners/) for more details. - Unlike Pyodide, which accepts dot access notation, for example, `item.json.myNewField`, native Python only accepts bracket access notation, for example, `item["json"]["my_new_field"]`. There may be other minor syntax differences where Pyodide accepts constructs that aren't legal in native Python. - On n8n Cloud, the Python option for the Code node doesn't allow users to import any Python libraries, whether from the standard library or third-party packages. Self-hosting users can find setup instructions to include external libraries [here](https://docs.n8n.io/hosting/configuration/task-runners/#adding-extra-dependencies). In the long term, the n8n team is committed to allowing users to securely execute arbitrary Python code with any first- and third-party libraries using task runners. Upgrading to native Python is a breaking change, so you may need to adjust your Python scripts to use the native Python runner. ## Coding in n8n There are two places where you can use code in n8n: the Code node and the expressions editor. When using either area, there are some key concepts you need to know, as well as some built-in methods and variables to help with common tasks. ### Key concepts When working with the Code node, you need to understand the following concepts: - [Data structure](../../data/data-structure/): understand the data you receive in the Code node, and requirements for outputting data from the node.
- [Item linking](../../data/data-mapping/data-item-linking/): learn how data items work, and how to link to items from previous nodes. You need to handle item linking in your code when the number of input and output items doesn't match. ### Built-in methods and variables n8n includes built-in methods and variables. These provide support for: - Accessing specific item data - Accessing data about workflows, executions, and your n8n environment - Convenience variables to help with date and time Refer to [Built-in methods and variables](../builtin/overview/) for more information. # Custom variables Feature availability - Available on Self-hosted Enterprise and Pro Cloud plans. - Only instance owners and admins can create variables. Custom variables are read-only variables that you can use to store and reuse values in n8n workflows. Variable scope and availability - **Global variables** are available to everyone on your n8n instance, across all projects. - **Project-scoped variables** are available only within the specific project they're created in. - Project-scoped variables are available in 1.118.0 and above. Previous versions only support global variables accessible from the left side menu. ## Create variables You can access the **Variables** tab from either the overview page or a specific project. To create a new variable: 1.
On the **Variables** tab, select **Add Variable**. 1. Enter a **Key** and **Value**. The maximum key length is 50 characters, and the maximum value length is 1000 characters. n8n limits the characters you can use in the key and value to lowercase and uppercase letters, numbers, and underscores (`A-Z`, `a-z`, `0-9`, `_`). 1. Select the **Scope** (only available when creating from the overview page): - **Global**: The variable is available across all projects in the n8n instance. - **Project**: The variable is available only within a specific project (you can select which project). - When creating from a project page, the scope is automatically set to that project. 1. Select **Save**. The variable is now available for use in workflows according to its scope. ## Edit and delete variables To edit or delete a variable: 1. On the **Variables** tab, hover over the variable you want to change. 1. Select **Edit** or **Delete**. ## Use variables in workflows You can access variables in the Code node and in [expressions](../../glossary/#expression-n8n):

```
// Access a variable
$vars.<variable-name>
```

All variables are strings. During workflow execution, n8n replaces the variables with the variable value. If the variable has no value, n8n treats its value as `undefined`. Workflows don't automatically fail in this case. Variable precedence When a project-scoped variable has the same key as a global variable, the project-scoped variable value takes precedence and overrides the global variable value within that project's workflows. Variables are read-only. You must use the UI to change the values. If you need to set and access custom data within your workflow, use [Workflow static data](../cookbook/builtin/get-workflow-static-data/). # HTTP node variables Variables for working with HTTP node requests and responses when using pagination.
Refer to [HTTP Request](../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) for guidance on using the HTTP node, including configuring pagination. Refer to [HTTP Request node cookbook | Pagination](../../cookbook/http-node/pagination/) for example pagination configurations. HTTP node only These variables are for use in expressions in the HTTP node. You can't use them in other nodes. | Variable | Description | | ------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `$pageCount` | The pagination count. Tracks how many pages the node has fetched. | | `$request` | The request object sent by the HTTP node. | | `$response` | The response object from the HTTP call. Includes `$response.body`, `$response.headers`, and `$response.statusCode`. The contents of `body` and `headers` depend on the data sent by the API. | # JMESPath method This is an n8n-provided method for working with the [JMESPath](../../../data/specific-data-types/jmespath/) library. Python support You can use Python in the Code node. It isn't available in expressions. | Method | Description | Available in Code node? | | ------------- | ------------------------------------------------- | ----------------------- | | `$jmespath()` | Perform a search on a JSON object using JMESPath. | | | Method | Description | | ------------- | ------------------------------------------------- | | `_jmespath()` | Perform a search on a JSON object using JMESPath. | # LangChain Code node methods n8n provides these methods to make it easier to perform common tasks in the [LangChain Code node](../../../integrations/builtin/cluster-nodes/root-nodes/n8n-nodes-langchain.code/). LangChain Code node only These variables are for use in expressions in the LangChain Code node. You can't use them in other nodes. 
| Method | Description | | ---------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------ | | `this.addInputData(inputName, data)` | Populate the data of a specified non-main input. Useful for mocking data. - `inputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `data` contains the data you want to add. Refer to [Data structure](../../../data/data-structure/) for information on the data structure expected by n8n. | | `this.addOutputData(outputName, data)` | Populate the data of a specified non-main output. Useful for mocking data. - `outputName` is the output connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `data` contains the data you want to add. Refer to [Data structure](../../../data/data-structure/) for information on the data structure expected by n8n. | | `this.getInputConnectionData(inputName, itemIndex, inputIndex?)` | Get data from a specified non-main input.
- `inputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `itemIndex` should always be `0` (this parameter will be used in upcoming functionality) - Use `inputIndex` if there is more than one node connected to the specified input. | | `this.getInputData(inputIndex?, inputName?)` | Get data from the main input. | | `this.getNode()` | Get the current node. | | `this.getNodeOutputs()` | Get the outputs of the current node. | | `this.getExecutionCancelSignal()` | Use this to stop the execution of a function when the workflow stops. In most cases n8n handles this, but you may need to use it if building your own chains or agents. It replaces the [Cancelling a running LLMChain](https://js.langchain.com/docs/modules/chains/foundational/llm_chain#cancelling-a-running-llmchain) code that you'd use if building a LangChain application normally. | # n8n metadata Methods for working with n8n metadata. This includes: - Access to n8n environment variables for self-hosted n8n. - Metadata about workflows, executions, and nodes. - Information about instance [Variables](../../variables/) and [External secrets](../../../external-secrets/). Python support You can use Python in the Code node. It isn't available in expressions. | Method | Description | Available in Code node? | | ------------------------------ | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- | | `$env` | Contains n8n instance configuration [environment variables](../../../hosting/configuration/environment-variables/). | | | `$execution.customData` | Set and get custom execution data. 
Refer to [Custom executions data](../../../workflows/executions/custom-executions-data/) for more information. | | | `$execution.id` | The unique ID of the current workflow execution. | | | `$execution.mode` | Whether the execution was triggered automatically, or by manually running the workflow. Possible values are `test` and `production`. | | | `$execution.resumeUrl` | The webhook URL to call to resume a workflow waiting at a [Wait node](../../../integrations/builtin/core-nodes/n8n-nodes-base.wait/). | | | `$getWorkflowStaticData(type)` | Gives access to the static workflow data. Static data doesn't persist when testing workflows. The workflow must be active and called by a trigger or webhook to save static data. View an [example](../../cookbook/builtin/get-workflow-static-data/). | | | `$("<node-name>").isExecuted` | Check whether a node has already executed. | | | `$itemIndex` | The index of an item in a list of items. | | | `$nodeVersion` | Get the version of the current node. | | | `$prevNode.name` | The name of the node that the current input came from. When using the Merge node, note that `$prevNode` always uses the first input connector. | | | `$prevNode.outputIndex` | The index of the output connector that the current input came from. Use this when the previous node had multiple outputs (such as an If or Switch node). When using the Merge node, note that `$prevNode` always uses the first input connector. | | | `$prevNode.runIndex` | The run of the previous node that generated the current input. When using the Merge node, note that `$prevNode` always uses the first input connector. | | | `$runIndex` | How many times n8n has executed the current node. Zero-based (the first run is 0, the second is 1, and so on). | | | `$secrets` | Contains information about your [External secrets](../../../external-secrets/) setup. | | | `$vars` | Contains the [Variables](../../variables/) available in the active environment. | | | `$version` | The node version.
| | | `$workflow.active` | Whether the workflow is active (true) or not (false). | | | `$workflow.id` | The workflow ID. | | | `$workflow.name` | The workflow name. | | | Method | Description | | -------- | --------------------------------------------------------------------- | | `_items` | Contains incoming items in "Run once for all items" mode. | | `_item` | Contains the item being iterated on in "Run once for each item" mode. | | Method | Description | | ------------------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------- | | `_env` | Contains n8n instance configuration [environment variables](../../../hosting/configuration/environment-variables/). | | `_execution.customData` | Set and get custom execution data. Refer to [Custom executions data](../../../workflows/executions/custom-executions-data/) for more information. | | `_execution.id` | The unique ID of the current workflow execution. | | `_execution.mode` | Whether the execution was triggered automatically, or by manually running the workflow. Possible values are `test` and `production`. | | `_execution.resumeUrl` | The webhook URL to call to resume a workflow waiting at a [Wait node](../../../integrations/builtin/core-nodes/n8n-nodes-base.wait/). | | `_getWorkflowStaticData(type)` | Gives access to the static workflow data. Static data doesn't persist when testing workflows. The workflow must be active and called by a trigger or webhook to save static data. View an [example](../../cookbook/builtin/get-workflow-static-data/). | | `_("<node-name>").isExecuted` | Check whether a node has already executed. | | `_nodeVersion` | Get the version of the current node. | | `_prevNode.name` | The name of the node that the current input came from.
When using the Merge node, note that `_prevNode` always uses the first input connector. | | `_prevNode.outputIndex` | The index of the output connector that the current input came from. Use this when the previous node had multiple outputs (such as an If or Switch node). When using the Merge node, note that `_prevNode` always uses the first input connector. | | `_prevNode.runIndex` | The run of the previous node that generated the current input. When using the Merge node, note that `_prevNode` always uses the first input connector. | | `_runIndex` | How many times n8n has executed the current node. Zero-based (the first run is 0, the second is 1, and so on). | | `_secrets` | Contains information about your [External secrets](../../../external-secrets/) setup. | | `_vars` | Contains the [Variables](../../variables/) available in the active environment. | | `_workflow.active` | Whether the workflow is active (true) or not (false). | | `_workflow.id` | The workflow ID. | | `_workflow.name` | The workflow name. | # Built-in methods and variables n8n provides built-in methods and variables for working with data and accessing n8n data. This section provides a reference of available methods and variables for use in [expressions](../../../glossary/#expression-n8n), with a short description. Availability in the expressions editor and the Code node Some methods and variables aren't available in the Code node. These aren't in the documentation. All data transformation functions are only available in the expressions editor. The [Cookbook](../../) contains examples for some common tasks, including some [Code node only](../../cookbook/code-node/) functions. 
- [JMESPath](/code/builtin/jmespath/)
- [HTTP node](/code/builtin/http-node-variables/)
- [LangChain Code node](/code/builtin/langchain-methods/)
- [n8n metadata](/code/builtin/n8n-metadata/)

# Examples using n8n's built-in methods and variables

n8n provides built-in methods and variables for working with data and accessing n8n data. This section provides usage examples.

- [execution](/code/cookbook/builtin/execution/)
- [getWorkflowStaticData](/code/cookbook/builtin/get-workflow-static-data/)
- [(node-name).all](/code/cookbook/builtin/all/)
- [vars](/code/cookbook/builtin/vars/)

## Related resources

- [Built-in methods and variables reference](../../builtin/overview/)
- [Expressions](../../../data/expressions/)
- [Code node](../../code-node/)

# `("<node-name>").all(branchIndex?: number, runIndex?: number)`

This gives access to all the items of the current or parent nodes. If you don't supply any parameters, it returns all the items of the current node.

## Getting items

```
// Returns all the items of the given node and current run
let allItems = $("<node-name>").all();

// Returns all items the node "IF" outputs (index: 0 which is Output "true" of its most recent run)
let allItems = $("IF").all();

// Returns all items the node "IF" outputs (index: 0 which is Output "true" of the same run as current node)
let allItems = $("IF").all(0, $runIndex);

// Returns all items the node "IF" outputs (index: 1 which is Output "false" of run 0 which is the first run)
let allItems = $("IF").all(1, 0);
```

```
# Returns all the items of the given node and current run
allItems = _("<node-name>").all()

# Returns all items the node "IF" outputs (index: 0 which is Output "true" of its most recent run)
allItems = _("IF").all()

# Returns all items the node "IF" outputs (index: 0 which is Output "true" of the same run as current node)
allItems = _("IF").all(0, _runIndex)

# Returns all items the node "IF" outputs (index: 1 which is Output "false" of run 0 which is the first run)
allItems = _("IF").all(1, 0)
```

## Accessing item data

Get all items output by a previous node, and log the data they contain:

```
previousNodeData = $("<node-name>").all();
for (let i = 0; i < previousNodeData.length; i++) {
  console.log(previousNodeData[i].json);
}
```

```
previousNodeData = _("<node-name>").all()
for item in previousNodeData:
    # item is of type <class 'pyodide.ffi.JsProxy'>
    # You need to convert it to a Dict
    itemDict = item.json.to_py()
    print(itemDict)
```

# `execution`

## `execution.id`

Contains the unique ID of the current workflow execution.

```
let executionId = $execution.id;
```

```
executionId = _execution.id
```

## `execution.resumeUrl`

The webhook URL to call to resume a [waiting](../../../../integrations/builtin/core-nodes/n8n-nodes-base.wait/) workflow. See the [Wait > On webhook call](../../../../integrations/builtin/core-nodes/n8n-nodes-base.wait/#on-webhook-call) documentation to learn more.

`execution.resumeUrl` is available in workflows containing a Wait node, along with a node that waits for a webhook response.

## `execution.customData`

This is only available in the Code node.

```
// Set a single piece of custom execution data
$execution.customData.set("key", "value");

// Set the custom execution data object
$execution.customData.setAll({"key1": "value1", "key2": "value2"})

// Access the current state of the object during the execution
var customData = $execution.customData.getAll()

// Access a specific value set during this execution
var customData = $execution.customData.get("key")
```

```
# Set a single piece of custom execution data
_execution.customData.set("key", "value")

# Set the custom execution data object
_execution.customData.setAll({"key1": "value1", "key2": "value2"})

# Access the current state of the object during the execution
customData = _execution.customData.getAll()

# Access a specific value set during this execution
customData = _execution.customData.get("key")
```

Refer to [Custom executions data](../../../../workflows/executions/custom-executions-data/) for more information.
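As a rough illustration of the behavior shown above, here is a plain-JavaScript stand-in for the `customData` interface. The `makeCustomData` factory is a hypothetical mock, not part of n8n; it just mirrors the assumed semantics of a per-execution key/value store:

```javascript
// Hypothetical mock of $execution.customData: a per-execution key/value store.
function makeCustomData() {
  const store = {};
  return {
    set(key, value) { store[key] = value; },
    setAll(values) { Object.assign(store, values); },
    get(key) { return store[key]; },
    getAll() { return { ...store }; },
  };
}

const customData = makeCustomData();
customData.set("key", "value");
customData.setAll({ "key1": "value1", "key2": "value2" });

console.log(customData.get("key")); // "value"
console.log(customData.getAll());   // all three stored keys
```

Note that `getAll()` returns everything set during the execution, including keys added through `set()`, which is why custom data set early in a workflow remains visible to later nodes.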
______________________________________________________________________

# `getWorkflowStaticData(type)`

This gives access to the static workflow data.

Experimental feature

- Static data isn't available when testing workflows. The workflow must be active and called by a [trigger](../../../../glossary/#trigger-node-n8n) or webhook to save static data.
- This feature may behave unreliably under high-frequency workflow executions.

You can save data directly in the workflow. This data should be small. For example, you can save a timestamp of the last item processed from an RSS feed or database.

`getWorkflowStaticData(type)` always returns an object. Properties can then be read, deleted, or set on that object. When the workflow execution succeeds, n8n automatically checks whether the data has changed and saves it if necessary.

There are two types of static data: global and node. Global static data is shared across the whole workflow, and every node in the workflow can access it. Node static data is unique to the node, and only the node that set it can retrieve it again.
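The "timestamp of the last item processed" pattern described above can be sketched in plain JavaScript. Here `staticData` is an ordinary object standing in for the value returned by `$getWorkflowStaticData('global')`, and the feed items are made up for illustration:

```javascript
// Stand-in for $getWorkflowStaticData('global'): in a real active workflow,
// n8n persists this object between production executions.
const staticData = {};

// Made-up feed items, each with a publication timestamp in milliseconds.
const feed = [
  { title: "first post", publishedAt: 100 },
  { title: "second post", publishedAt: 200 },
  { title: "third post", publishedAt: 300 },
];

// Only keep items newer than the timestamp saved by the previous run,
// then remember the newest timestamp seen so far.
function processNewItems(items) {
  const lastSeen = staticData.lastTimestamp ?? 0;
  const fresh = items.filter((item) => item.publishedAt > lastSeen);
  if (fresh.length > 0) {
    staticData.lastTimestamp = Math.max(...fresh.map((i) => i.publishedAt));
  }
  return fresh;
}

const firstRun = processNewItems(feed);  // all three items are new
const secondRun = processNewItems(feed); // nothing new on the second run
console.log(firstRun.length, secondRun.length); // 3 0
```

Because the stored timestamp survives between executions, the second run skips everything it has already seen, which is exactly what static data is for.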
Example with global data:

```
// Get the global workflow static data
const workflowStaticData = $getWorkflowStaticData('global');

// Access its data
const lastExecution = workflowStaticData.lastExecution;

// Update its data
workflowStaticData.lastExecution = new Date().getTime();

// Delete data
delete workflowStaticData.lastExecution;
```

```
import time

# Get the global workflow static data
workflowStaticData = _getWorkflowStaticData('global')

# Access its data
lastExecution = workflowStaticData.lastExecution

# Update its data (Unix timestamp in milliseconds)
workflowStaticData.lastExecution = int(time.time() * 1000)

# Delete data
del workflowStaticData.lastExecution
```

Example with node data:

```
// Get the static data of the node
const nodeStaticData = $getWorkflowStaticData('node');

// Access its data
const lastExecution = nodeStaticData.lastExecution;

// Update its data
nodeStaticData.lastExecution = new Date().getTime();

// Delete data
delete nodeStaticData.lastExecution;
```

```
import time

# Get the static data of the node
nodeStaticData = _getWorkflowStaticData('node')

# Access its data
lastExecution = nodeStaticData.lastExecution

# Update its data (Unix timestamp in milliseconds)
nodeStaticData.lastExecution = int(time.time() * 1000)

# Delete data
del nodeStaticData.lastExecution
```

## Templates and examples

[View template details](https://n8n.io/workflows/2538-demo-workflow-how-to-use-workflowstaticdata/)

# `vars`

Feature availability

- Available on the Self-hosted Enterprise plan, and on the Pro and Enterprise Cloud plans.
- You need access to the n8n instance owner account to create variables.

`vars` contains all [Variables](../../../variables/) for the active environment. It's read-only: you can access variables using `vars`, but must set them using the UI.

```
// Access a variable
$vars.<variable-name>
```

```
# Access a variable
_vars.<variable-name>
```

`vars` and `env`

`vars` gives access to user-created variables. It's part of the [Environments](../../../../source-control-environments/) feature.
`env` gives access to the [configuration environment variables](../../../../hosting/configuration/environment-variables/) for your n8n instance.

# Code node cookbook

This section contains examples and recipes for tasks you can do with the Code node.

- [Get number of items returned by last node](/code/cookbook/code-node/number-items-last-node/)
- [Get the binary data buffer](/code/cookbook/code-node/get-binary-data-buffer/)
- [Output to the browser console](/code/cookbook/code-node/console-log/)

## Related resources

- [Built-in methods and variables reference](../../builtin/overview/)
- [Code node](../../code-node/)

# Output to the browser console with `console.log()` or `print()` in the Code node

You can use `console.log()` or `print()` in the Code node to help when writing and debugging your code. For help opening your browser console, refer to [this guide by Balsamiq](https://balsamiq.com/support/faqs/browserconsole/).

## console.log (JavaScript)

For technical information on `console.log()`, refer to the [MDN developer docs](https://developer.mozilla.org/en-US/docs/Web/API/Console/log).

For example, copy the following code into a Code node, then open your console and run the node:

```
let a = "apple";
console.log(a);
```

## print (Python)

For technical information on `print()`, refer to the [Real Python guide](https://realpython.com/python-print/).

For example, set your Code node **Language** to **Python**, copy the following code into the node, then open your console and run the node:

```
a = "apple"
print(a)
```

### Handling an output of `[object Object]`

If the console displays `[object Object]` when you print, check the data type, then convert it as needed.

To check the data type:

```
print(type(myData))
```

#### JsProxy

If `type()` outputs `<class 'pyodide.ffi.JsProxy'>`, you need to convert the JsProxy to a native Python object using `to_py()`. This occurs when working with data in the n8n node data structure, such as node inputs and outputs.
For example, if you want to print the data from a previous node in the workflow:

```
previousNodeData = _("<node-name>").all()
for item in previousNodeData:
    # item is of type <class 'pyodide.ffi.JsProxy'>
    # You need to convert it to a Dict
    itemDict = item.json.to_py()
    print(itemDict)
```

Refer to the Pyodide documentation on [JsProxy](https://pyodide.org/en/stable/usage/api/python-api/ffi.html#pyodide.ffi.JsProxy) for more information on this class.

# Get the binary data buffer

The binary data buffer contains all the binary file data processed by a workflow. You need to access it if you want to perform operations on the binary data, such as:

- Manipulating the data: for example, adding column headers to a CSV file.
- Using the data in calculations: for example, calculating a hash value based on it.
- Complex HTTP requests: for example, combining file upload with sending other data formats.

Not available in Python

`getBinaryDataBuffer()` isn't supported when using Python.

You can access the buffer using n8n's `getBinaryDataBuffer()` function:

```
/*
 * itemIndex: number. The index of the item in the input data.
 * binaryPropertyName: string. The name of the binary property.
 * The default in the Read/Write File From Disk node is 'data'.
 */
let binaryDataBufferItem = await this.helpers.getBinaryDataBuffer(itemIndex, binaryPropertyName);
```

For example:

```
// Returns the data in the binary buffer for the first input item
let binaryDataBufferItem = await this.helpers.getBinaryDataBuffer(0, 'data');
```

You should always use the `getBinaryDataBuffer()` function, and avoid older methods of accessing the buffer directly, such as targeting it with expressions like `items[0].binary.data.data`.

# Get number of items returned by the previous node

To get the number of items returned by the previous node:

```
if (Object.keys(items[0].json).length === 0) {
  return [
    {
      json: {
        results: 0,
      }
    }
  ];
}
return [
  {
    json: {
      results: items.length,
    }
  }
];
```

The output will be similar to the following.
```
[
  {
    "results": 8
  }
]
```

```
if len(items[0].json) == 0:
    return [
        {
            "json": {
                "results": 0,
            }
        }
    ]
else:
    return [
        {
            "json": {
                "results": len(items),
            }
        }
    ]
```

The output will be similar to the following.

```
[
  {
    "results": 8
  }
]
```

# Examples using n8n's HTTP Request node

The HTTP Request node is one of the most versatile nodes in n8n. Use this node to make HTTP requests to query data from any app or service with a REST API. Refer to [HTTP Request](../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) for information on node settings.

- [Pagination](/code/cookbook/http-node/pagination/)

## Related resources

- [HTTP Request](../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/)
- [Built-in methods and variables reference](../../builtin/overview/)
- [Expressions](../../../data/expressions/)

# Pagination in the HTTP Request node

The HTTP Request node supports pagination. This page provides some example configurations, including using the [HTTP node variables](../../../builtin/http-node-variables/). Refer to [HTTP Request](../../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) for more information on the node.

API differences

Different APIs implement pagination in different ways. Check the documentation for the API you're using for details. You need to find out things like:

- Does the API provide the URL for the next page?
- Are there API-specific limits on page size or page number?
- What's the structure of the data the API returns?

## Enable pagination

In the HTTP Request node, select **Add Option** > **Pagination**.

## Use a URL from the response to get the next page using `$response`

If the API returns the URL of the next page in its response:

1. Set **Pagination Mode** to **Response Contains Next URL**. n8n displays the parameters for this option.
1. In **Next URL**, use an [expression](../../../../glossary/#expression-n8n) to set the URL. The exact expression depends on the data returned by your API.
For example, if the API includes a parameter called `next-page` in the response body:

```
{{ $response.body["next-page"] }}
```

## Get the next page by number using `$pageCount`

If the API you're using supports targeting a specific page by number:

1. Set **Pagination Mode** to **Update a Parameter in Each Request**.
1. Set **Type** to **Query**.
1. Enter the **Name** of the query parameter. This depends on your API and is usually described in its documentation. For example, some APIs use a query parameter named `page` to set the page, so **Name** would be `page`.
1. Hover over **Value** and toggle **Expression** on.
1. Enter `{{ $pageCount + 1 }}`.

`$pageCount` is the number of pages the HTTP Request node has fetched. It starts at zero. Most APIs count pagination from one (the first page is page one). Adding `+ 1` to `$pageCount` means the node fetches page one on its first loop, page two on its second, and so on.

## Navigate pagination through body parameters

If the API you're using allows you to paginate through the body parameters:

1. Set the HTTP Request **Method** to **POST**.
1. Set **Pagination Mode** to **Update a Parameter in Each Request**.
1. Select **Body** in the **Type** parameter.
1. Enter the **Name** of the body parameter. This depends on the API you're using. `page` is a common key name.
1. Hover over **Value** and toggle **Expression** on.
1. Enter `{{ $pageCount + 1 }}`.

## Set the page size in the query

If the API you're using supports choosing the page size in the query:

1. Select **Send Query Parameters** in the main node parameters (these are the parameters you see when you first open the node, not the settings within options).
1. Enter the **Name** of the query parameter. This depends on your API. For example, a lot of APIs use a query parameter named `limit` to set page size, so **Name** would be `limit`.
1. In **Value**, enter your page size.
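To see why the `$pageCount + 1` expression works, here is a plain-JavaScript sketch of the loop the node effectively runs. The `fetchPage` function is a made-up stand-in for a one-based paginated API; pagination stops when a page comes back empty:

```javascript
// Made-up "API": three pages of results, addressed by one-based page number.
const apiPages = {
  1: ["a", "b"],
  2: ["c", "d"],
  3: ["e"],
};

function fetchPage(page) {
  return apiPages[page] ?? []; // an empty page signals the end
}

// pageCount starts at zero, like $pageCount in the HTTP Request node,
// so the request parameter is pageCount + 1: page one on the first loop.
let pageCount = 0;
let results = [];
for (;;) {
  const items = fetchPage(pageCount + 1);
  if (items.length === 0) break;
  results = results.concat(items);
  pageCount += 1;
}
console.log(results); // all five items, in page order
```

The zero-based counter plus the `+ 1` offset is what lets the same expression work on the first request and every request after it.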
# Text courses If you've found your way here, it means you're serious about your interest in automation. Maybe you're tired of manually entering data into the same spreadsheet every day, of clicking through a series of tabs and buttons for that one piece of information you need, of managing tens of different tools and systems. Whatever the reason, one thing is clear: you shouldn't spend precious time doing things that don't spark joy or contribute to your personal and professional growth. These tasks can and should be automated! And you don't need advanced technical knowledge or excellent coding skills to do this–with no-code tools like n8n, automation is for everyone. ## Available courses - [Level 1: Beginner course](level-one/) - [Level 2: Intermediate course](level-two/) # Level one: Introduction Welcome to the **n8n Course Level 1**! ## Is this course right for me? This course introduces you to the fundamental concepts within n8n and develops your low-code automation expertise. This course is for you if you: - Are starting to use n8n for the first time. - Are looking for some extra help creating your first workflow. - Want to automate processes in your personal or working life. This course introduces n8n concepts and demonstrates practical workflow building without assuming any prior familiarity with n8n. If you'd like to get a feel for the basics without as much explanation, consult our [quickstart guide](../../try-it-out/tutorial-first-workflow/). ## What will I learn in this course? We believe in learning by doing. You can expect some theoretical information about the basic concepts and components of n8n, followed by practice of building workflows step by step. By the end of this course you will know: - How to set up n8n and navigate the Editor UI. - How n8n structures data. - How to configure different node parameters and add credentials. - When and how to use conditional logic in workflows. - How to schedule and control workflows. 
- How to import, download, and share workflows with others. You will build two workflows: - A two-node workflow to get articles from Hacker News - A seven-node workflow to help your client get records from a data warehouse, filter them, make calculations, and notify team members about the results ## What do I need to get started? 1. **n8n set up**: You can use [n8n Cloud](../../manage-cloud/overview/) (or the [self-hosted version](../../hosting/installation/docker/) if you have experience hosting services). 1. **A course user ID**: [Sign up here](https://n8n-community.typeform.com/to/PDEMrevI) to get your unique ID and other credentials you will need in this course (Level 1). 1. Basic knowledge of JavaScript and [APIs](https://blog.n8n.io/what-are-apis-how-to-use-them-with-no-code/) would be helpful, but isn't necessary. 1. An [account on the n8n community forum](https://community.n8n.io/) if you wish to receive a profile badge and avatar upon successful completion. ## How long does the course take? Completing the course should take around **two hours**. You don't have to complete it in one go; feel free to take breaks and resume whenever you are ready. ## How do I complete the course? There are two milestones in this course that test your knowledge of what you have learned in the lessons: - Building the [main workflow](chapter-5/chapter-5.1/) - Passing the [quiz](https://n8n-community.typeform.com/to/JMoBXeGA) at the end of the course Check your progress You can always **check your progress** throughout the course by entering your unique ID [here](https://internal.users.n8n.cloud/webhook/course-level-1/verify). If you complete the milestones above, you will get [**a badge and an avatar**](https://community.n8n.io/badges/104/completed-n8n-course-level-1) in your forum profile. You can then share your profile and course verification ID to showcase your n8n skills to others. 
[Let's get started!](chapter-1/)

# Navigating the Editor UI

In this lesson you will learn how to navigate the [Editor UI](../../../glossary/#editor-n8n). We will walk through the [canvas](../../../glossary/#canvas-n8n) and show you what each icon means and where to find things you will need while building workflows in n8n.

n8n version

This course is based on n8n version 1.82.1. In other versions, some user interfaces might look different, but this shouldn't impact the core functionality.

## Getting started

Begin by setting up n8n. We recommend starting with [n8n Cloud](https://app.n8n.cloud/register), a hosted solution that doesn't require installation and includes a free trial.

Alternative setup

If n8n Cloud isn't a good option for you, you can [self-host with Docker](../../../hosting/installation/docker/). This is an advanced option recommended only for technical users familiar with hosting services, Docker, and the command line. For more details on the different ways to set up n8n, see our [platforms documentation](../../../choose-n8n/#platforms).

Once you have n8n running, open the Editor UI in a browser window. Log in to your n8n instance. Select **Overview** and then **Create Workflow** to view the main canvas. It should look like this:

*Editor UI*

## Editor UI settings

The editor UI is the web interface where you build [workflows](../../../workflows/). You can access all your workflows and [credentials](../../../glossary/#credential-n8n), as well as support pages, from the Editor UI.

### Left-side panel

On the left side of the **Editor UI**, there is a panel which contains the core functionalities and settings for managing your workflows. Expand and collapse it by selecting the small arrow icon. The panel contains the following sections:

- **Overview**: Contains all the workflows, credentials, and executions you have access to. During this course, create new workflows here.
- **Personal**: Every user gets a default personal project.
If you don’t create a custom project, your workflows and credentials are stored here. - **Projects**: Projects let you group workflows and credentials together. You can assign [roles](../../../user-management/rbac/role-types/) to users in a project to control what they can do. Projects aren’t available on the Community edition. - **Admin Panel**: n8n Cloud only. Access your n8n instance usage, billing, and version settings. - **Templates**: A collection of pre-made workflows. Great place to get started with common use cases. - **Variables**: Used to store and access fixed data across your workflows. This feature is available on the Pro and Enterprise Plans. - **Insights**: Provides analytics and insights about your workflows. - **Help**: Contains resources around n8n product and community. - **What’s New**: Shows the latest product updates and features. *Editor UI left-side menu* ### Top bar The top bar of the **Editor UI** contains the following information: - **Workflow Name**: By default, n8n names a new workflow as "My workflow", but you can edit the name at any time. - **+ Add Tag**: Tags help you organise your workflows by category, use case, or whatever is relevant for you. Tags are optional. - **Publish**: This button publishes the current workflow. By default, workflows are not published. - **Share**: You can share and collaborate with others on workflows on the Starter, Pro, and Enterprise plans. - **Save**: This button saves the current workflow. - **History**: Once you save your workflow, you can view previous versions here. *Editor UI top bar* ### Canvas The **canvas** is the gray dotted grid background in the Editor UI. It displays several icons and a node with different functionalities: - Buttons to zoom the canvas to fit the screen, zoom in or out of the canvas, reset zoom, and tidy up the nodes on screen. - A button to **Execute workflow** once you add your first node. When you click on it, n8n executes all nodes on the canvas in sequence. 
- A button with a **+** sign inside. This button opens the nodes panel. - A button with a note icon inside. This button adds a [sticky note](../../../workflows/components/sticky-notes/) to the canvas (visible when hovering on the top right + icon). - A button labeled **Ask Assistant** appears on the right side of the canvas. You can ask the AI Assistant for help with building workflows. - A dotted square with the text "Add first step." This is where you add your first node. *Workflow canvas* Moving the canvas You can move the workflow canvas around in three ways: - Select `Ctrl`+`Left Button` on the canvas and move it around. - Select `Middle Button` on the canvas and move it around. - Place two fingers on your touchpad and slide. Don't worry about workflow execution and publishing for now; we'll explain these concepts later on in the course. ## Nodes You can think of nodes as building blocks that serve different functions that, when put together, make up a functioning machine: an automated workflow. Node A node is an individual step in your workflow: one that either (a) loads, (b) processes, or (c) sends data. Based on their function, n8n classifies nodes into four types: - **App** or **Action Nodes** add, remove, and edit data; request and send external data; and trigger events in other systems. Refer to the [Action nodes library](../../../integrations/builtin/app-nodes/) for a full list of these nodes. - **Trigger Nodes** start a workflow and supply the initial data. Refer to the [Trigger nodes library](../../../integrations/builtin/trigger-nodes/) for a list of trigger nodes. - **Core Nodes** can be trigger or app nodes. Whereas most nodes connect to a specific external service, core nodes provide functionality such as logic, scheduling, or generic API calls. Refer to the [Core Nodes library](../../../integrations/builtin/core-nodes/) for a full list of core nodes. 
- **Cluster Nodes** are node groups that work together to provide functionality in a workflow, primarily for AI workflows. Refer to [Cluster nodes](../../../integrations/builtin/cluster-nodes/) for more information. Learn more Refer to [Node types](../../../integrations/builtin/node-types/) for a more detailed explanation of all node types. ### Finding nodes You can find all available nodes in the **nodes panel** on the right side of the Editor UI. There are three ways in which you can open the nodes panel: - Click the **+** icon in the top right corner of the canvas. - Click the **+** icon on the right side of an existing node on the canvas (the node to which you want to add another one). - Press the `N` key on your keyboard. *Nodes panel* In the nodes panel, notice that when adding your first node, you will see the different trigger node categories. After you have added your trigger node, you'll see that the nodes panel changes to show Advanced AI, Actions in an App, Data transformation, Flow, Core, and Human in the loop nodes. If you want to find a specific node, use the search input at the top of the nodes panel. ### Adding nodes There are two ways to add nodes to your canvas: - Select the node you want in the nodes panel. The new node will automatically connect to the selected node on the canvas. - Drag and drop the node from the nodes panel to the canvas. ### Node buttons If you hover on a node, you'll notice that three icons appear on top: - Execute the node (Play icon) - Deactivate/Activate the node (Power icon) - Delete the node (Trash icon) There will also be an ellipsis icon, which opens a context menu containing other [node options](../../../workflows/components/nodes/#node-controls). Moving a workflow To move a workflow around the canvas, select all nodes with your mouse or `Ctrl`+`A`, select and hold on a node, then drag it to any point you want on the canvas. 
## Summary In this lesson you learned how to navigate the Editor UI, what the icons mean, how to access the left-side and node panels, and how to add nodes to the canvas. In the next lesson, you will build a mini-workflow to put into practice what you've learned so far. # Building a Mini-workflow In this lesson, you will build a small [workflow](../../../glossary/#workflow-n8n) that gets 10 articles about automation from Hacker News. The process consists of five steps: 1. [Add a Manual Trigger node](#1-add-a-manual-trigger-node) 1. [Add the Hacker News node](#2-add-the-hacker-news-node) 1. [Configure the Hacker News node](#3-configure-the-hacker-news-node) 1. [Execute the node](#4-execute-the-node) 1. [Save the workflow](#5-save-the-workflow) The finished workflow will look like this: [View workflow file](/_workflows//courses/level-one/chapter-2.json) ## 1. Add a Manual Trigger node Open the nodes panel (reminder: you can open this by selecting the **+** icon in the top right corner of the [canvas](../../../glossary/#canvas-n8n) or pressing `N` on your keyboard). Then: 1. Search for the **Manual Trigger** node. 1. Select it when it appears in the search. This will add the [Manual Trigger](../../../integrations/builtin/core-nodes/n8n-nodes-base.manualworkflowtrigger/) node to your canvas, which allows you to run the workflow at any time by selecting the **Execute workflow** button. Manual triggers For faster workflow creation, you can skip this step in the future. Adding any other node without a trigger will add the Manual Trigger node to the workflow. In a real-world scenario, you would probably want to set up a schedule or some other [trigger](../../../glossary/#trigger-node-n8n) to run the workflow. ## 2. Add the Hacker News node Select the **+** icon to the right of the Manual Trigger node to open the nodes panel. Then: 1. Search for the **Hacker News** node. 1. Select it when it appears in the search. 1. In the **Actions** section, select **Get many items**. 
n8n adds the node to your canvas and the node window opens to display its configuration details. ## 3. Configure the Hacker News node When you add a new node to the Editor UI, the node is automatically activated. The node details will open in a window with several options: - **Parameters**: Adjust parameters to refine and control the node's functionality. - **Settings**: Adjust settings to control the node's design and executions. - **Docs**: Open the n8n documentation for this node in a new window. Parameters vs. Settings - **Parameters** are different for each node, depending on its functionality. - **Settings** are the same for all nodes. ### Parameters We need to configure several parameters for the Hacker News node to make it work: - **Resource**: All\ This resource selects all data records (articles). - **Operation**: Get Many\ This operation fetches all the selected articles. - **Limit**: 10\ This parameter sets a limit to the number of results the Get Many operation returns. - **Additional Fields** > **Add Field** > **Keyword**: automation\ **Additional fields** are options that you can add to certain nodes to make your request more specific or filter the results. For this example, we want to get only articles that include the keyword "automation." The configuration of the parameters for the Hacker News node should now look like this: *Hacker News node parameters* ### Settings The **Settings** section includes several options for node design and executions. In this case, we'll configure only the final two settings, which set the node's appearance in the Editor UI canvas. In the Hacker News node Settings, edit: - **Notes**: Get the 10 latest articles. Node notes It's often helpful to add a short description in the node about what it does. This is helpful for complex or shared workflows in particular! - **Display note in flow?**: toggle to true\ This option will display the Note under the node in the canvas. 
The configuration of the settings for the Hacker News node should now look like this: *Hacker News node settings* Renaming a node You can rename the node with a name that's more descriptive for your use case. There are three ways to do this: - Select the node you want to rename and at the same time press the F2 key on your keyboard. - Double-click on the node to open the node window. Click on the name of the node in the top left corner of the window, rename it as you like, then click **Rename** to save the node under the new name. - Right-click on the node and select the **Rename** option. *Renaming a node from the keyboard* To find the original node name (the type of node), open the node window and select **Settings**. The bottom of the page contains the node type and version. ## 4. Execute the node Select the **Execute step** button in the node details window. You should see 10 results in the Output **Table** view. *Results in Table view for the Hacker News node* ### Node executions Node execution A node execution represents a run of that node to retrieve or process the specified data. If a node executes successfully, a small green checkmark appears on top of the node in the canvas *Successfully executed workflow* If there are no problems with the parameters and everything works fine, the requested data displays in the node window in **Table**, **JSON**, and **Schema** format. You can switch between these views by selecting the one you want from the **Table | JSON | Schema** button at the top of the node window. Table vs JSON views The **Table** view is the default. It displays the requested data in a table, where the rows are the records and the columns are the available attributes of those records. 
Here's our Hacker News output in JSON view: *Results in JSON view for the Hacker News node* The node window displays more information about the node execution: - Next to the **Output** title, notice a small icon (this will be a green checkmark if the node execution succeeded). Beside it, there is an info icon. If you hover over it, you'll get two more pieces of information that can provide insights into the performance of each individual node in a workflow: - **Start Time**: When the node execution started. - **Execution Time**: How long it took for the node to return the results from the moment it started executing. - Just below the **Output** title, you'll notice another piece of information: **10 items**. This field displays the number of items (records) that the node request returned. In this example, it's expected to be 10, since this is the limit we set when configuring the node. But if you don't set a limit, it's useful to see how many records are actually returned. Error in nodes A red warning icon on a node means that the node has errors. This might happen if the node credentials are missing or incorrect, or if the node parameters aren't configured correctly. *Error in nodes* ## 5. Save the workflow Once you're finished editing the node, select **Back to canvas** to return to the main canvas. By default, your workflow is automatically saved as "My workflow." For this lesson, rename the workflow to "Hacker News workflow." Reminder You can rename a workflow by clicking on the workflow's name at the top of the Editor UI. Once you've renamed the workflow, be sure to save it. There are two ways to save a workflow: - From the canvas in the Editor UI, press **Ctrl + S** or **Cmd + S** on your keyboard. - Select the **Save** button in the top right corner of the Editor UI. You may need to leave the node editor first by clicking outside the dialog. If you see a grey **Saved** text instead of the **Save** button, your workflow was automatically saved.
## Summary Congratulations, you just built your first workflow! In this lesson, you learned how to use actions in app nodes, configure their parameters and settings, and save and execute your workflow. In the next lesson, you'll meet your new client, Nathan, who needs to automate his sales reporting work. You will build a more complex workflow for his use case, helping him become more productive at work. # Automating a (Real-world) Use Case Meet Nathan 🙋. Nathan works as an Analytics Manager at ABCorp. His job is to support the ABCorp team with reporting and analytics. Being a true jack of all trades, he also handles several miscellaneous initiatives. Some things that Nathan does are repetitive and mind-numbing. He wants to automate some of these tasks so that he doesn't burn out. As an **Automation Expert**, you are meeting with Nathan today to help him understand how he can offload some of his responsibilities to n8n. ## Understanding the scenario **You 👩‍🔧:** Nice to meet you, Nathan. Glad to be doing this! What's a repetitive task that's error-prone and that you'd like to get off your plate first? **Nathan 🙋:** Thanks for coming in! The most annoying one's gotta be the weekly sales reporting. I have to collect sales data from our legacy data warehouse, which manages data from the main business processes of an organization, such as sales or production. Now, each sales order can have the status Processing or Booked. I have to calculate the sum of all the Booked orders and announce them in the company Discord every Monday. Then I have to create a spreadsheet of all the Processing sales so that the Sales Managers can review them and check if they need to follow up with customers. This manual work is tough and requires high attention to detail to make sure that all the numbers are right. Inevitably, I lose my focus and mistype a number or I don't get it done on time. I've been criticized once by my manager for miscalculating the data. **You 👩‍🔧:** Oh no! 
Doesn't the data warehouse have a way to export the data? **Nathan 🙋:** The data warehouse was written in-house ages ago. It doesn't have a CSV export but they recently added a couple of API endpoints that expose this data, if that helps. **You 👩‍🔧:** Perfect! That's a good start. If you have a generic API, we can add some custom code and a couple of services to make an automated workflow. This gig has n8n written all over it. Let's get started! # Designing the Workflow Now that we know what Nathan wants to automate, let's consider the steps he needs to take to achieve his goals: 1. Get the relevant data (order id, order status, order value, employee name) from the data warehouse 1. Filter the orders by their status (Processing or Booked) 1. Calculate the total value of all the Booked orders 1. Notify the team members about the Booked orders in the company's Discord channel 1. Insert the details about the Processing orders in Airtable for follow-up 1. Schedule this workflow to run every Monday morning Nathan's workflow involves sending data from the company's data warehouse to two external services: - Discord - Airtable Before that, the data has to be wrangled with general functions (conditional filtering, calculation, scheduling). n8n provides integrations for all these steps, so Nathan's workflow in n8n would look like this: [View workflow file](/_workflows//courses/level-one/finished.json) You will build this workflow in eight steps: 1. [Getting data from the data warehouse](../chapter-5/chapter-5.1/) 1. [Inserting data into Airtable](../chapter-5/chapter-5.2/) 1. [Filtering orders](../chapter-5/chapter-5.3/) 1. [Setting values for processing orders](../chapter-5/chapter-5.4/) 1. [Calculating booked orders](../chapter-5/chapter-5.5/) 1. [Notifying the team](../chapter-5/chapter-5.6/) 1. [Scheduling the workflow](../chapter-5/chapter-5.7/) 1. 
[Publishing and examining the workflow](../chapter-5/chapter-5.8/) To build this workflow, you will need the credentials found in the email you received from n8n when you signed up for this course. If you haven't signed up already, you can do it [here](https://n8n-community.typeform.com/to/PDEMrevI?typeform-source=127.0.0.1). If you haven't received a confirmation email after signing up, [contact us](mailto:help@n8n.io). [Start building!](../chapter-5/chapter-5.1/) # Exporting and importing workflows In this chapter, you will learn how to export and import workflows. ## Exporting and importing workflows You can save n8n workflows locally as JSON files. This is useful if you want to share your workflow with someone else or import a workflow from someone else. Sharing credentials Exported workflow JSON files include [credential](../../../glossary/#credential-n8n) names and IDs. While IDs aren't sensitive, the names could be, depending on how you name your credentials. HTTP Request nodes may contain authentication headers when imported from cURL. Remove or anonymize this information from the JSON file before sharing to protect your credentials. *Import & Export workflows menu* You can export and import workflows in three ways: - From the **Editor UI** menu: - Export: From the top navigation bar, select the three dots in the upper right, then select **Download**. This will download your current workflow as a JSON file on your computer. - Import: From the top navigation bar, select the three dots in the upper right, then select **Import from URL** (to import a published workflow) or **Import from File** (to import a workflow as a JSON file). - From the **Editor UI** canvas: - Export: Select all the nodes on the canvas and use `Ctrl`+`C` to copy the workflow JSON. You can paste this into a file or share it directly with other people. - Import: You can paste a copied workflow JSON directly into the canvas with `Ctrl`+`V`. 
- From the command line: - Export: See the [full list of commands](../../../hosting/cli-commands/) for exporting workflows or credentials. - Import: See the [full list of commands](../../../hosting/cli-commands/#import-workflows-and-credentials) for importing workflows or credentials. # Test your knowledge Congratulations, you finished the n8n Course Level 1! You've learned a lot about workflow automation and built your first business workflow. Why not showcase your skills? You can test your knowledge by taking a **quiz**, which consists of questions about the theoretical concepts and workflows covered in this course. - You need to have at least 80% correct answers in each part to pass the quiz. - You can take the quiz as many times as you want. - There's no time limit on answering the quiz questions. [Take the quiz!](https://n8n-community.typeform.com/to/JMoBXeGA) ## What's next? - Create new workflows for your work or personal use and share them with us. Don't have any ideas? Find inspiration on our [blog](https://n8n.io/blog/), [YouTube channel](https://www.youtube.com/c/n8n-io), [community forum](https://community.n8n.io), and [Discord server](https://discord.gg/vWwMVThRta). - Take the n8n [Course Level 2](../../level-two/). # 1. Getting data from the data warehouse In this part of the workflow, you will learn how to get data by making HTTP requests with the [**HTTP Request**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) node. After completing this section, your workflow will look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.1.json) First, let's set the scene for building Nathan's workflow. ## Create new workflow Open your Editor UI and create a new workflow with one of the two possible commands: - Select `Ctrl`+`Alt`+`N` or `Cmd`+`Option`+`N` on your keyboard. - Open the left menu, navigate to **Workflows**, and select **Add workflow**. Name this new workflow "Nathan's workflow." 
The first thing you need to do is get data from ABCorp's old data warehouse. In a previous chapter, you used an action node designed for a specific service (Hacker News). But not all apps or services have dedicated nodes, like the legacy data warehouse from Nathan's company. Though we can't directly export the data, Nathan told us that the data warehouse has a couple of API endpoints. That's all we need to access the data using the [HTTP Request](../../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) node in n8n. No node for that service? The HTTP Request node is one of the most versatile nodes, allowing you to make HTTP requests to query data from apps and services. You can use it to access data from apps or services that don't have a dedicated node in n8n. ## Add an HTTP Request node Now, in your Editor UI, add an HTTP Request node like you learned in the lesson [Adding nodes](../../chapter-1/#adding-nodes). The node window will open, where you need to configure some parameters. *HTTP Request node* This node will use credentials. Credentials [Credentials](../../../../glossary/#credential-n8n) are unique pieces of information that identify a user or a service and allow them to access apps or services (in our case, represented as n8n nodes). A common form of credentials is a username and a password, but they can take other forms depending on the service. In this case, you'll need the credentials for the ABCorp data warehouse API included in the email from n8n you received when you signed up for this course. If you haven't signed up yet, [sign up here](https://n8n-community.typeform.com/to/PDEMrevI). In the **Parameters** of the HTTP Request node, make the following adjustments: - **Method**: This should default to GET. Make sure it's set to GET. - **URL**: Add the **Dataset URL** you received in the email when you signed up for this course. - **Send Headers**: Toggle this control to true. 
In **Specify Headers**, ensure **Using Fields Below** is selected. - **Header Parameters** > **Name**: Enter `unique_id`. - **Header Parameters** > **Value**: The Unique ID you received in the email when you signed up for this course. - **Authentication**: Select **Generic Credential Type**. This option requires credentials before allowing you to access the data. - **Generic Auth Type**: Select **Header Auth**. (This field will appear after you select the Generic Credential Type for the Authentication.) - **Credential for Header Auth**: To add your credentials, select **+ Create new credential**. This will open the Credentials window. - In the Credentials window, set **Name** to be the **Header Auth name** you received in the email when you signed up for this course. - In the Credentials window, set **Value** to be the **Header Auth value** you received in the email when you signed up for this course. - Select the **Save** button in the Credentials window to save your credentials. Your **Credentials Connection** window should look like this: *HTTP Request node credentials* Credentials naming New credential names default to the credential type followed by "account" (for example, "Header Auth account"). You can rename the credentials by clicking on the name, similar to renaming nodes. It's good practice to give them names that identify the app/service, type, and purpose of the credential. A naming convention makes it easier to keep track of and identify your credentials. Once you save, exit the Credentials window to return to the HTTP Request node. ## Get the data Select the **Execute step** button in the HTTP Request node window. The table view of the HTTP request results should look like this: *HTTP Request node output* This view should be familiar to you from the [Building a mini-workflow](../../chapter-2/) page. This is the data from ABCorp's data warehouse that Nathan needs to work with.
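If you're curious what the node does under the hood, it issues a plain authenticated GET request. The sketch below builds the equivalent request options in JavaScript — the URL, `unique_id`, and header-auth values are all placeholders standing in for the real values from your course email:

```javascript
// Sketch of the request the HTTP Request node sends.
// All values passed in below are placeholders; substitute the Dataset
// URL, Unique ID, and Header Auth name/value from your course email.
function buildWarehouseRequest(datasetUrl, uniqueId, authName, authValue) {
  return {
    url: datasetUrl,
    method: "GET",
    headers: {
      unique_id: uniqueId,   // the "Send Headers" parameter
      [authName]: authValue, // the Header Auth credential
    },
  };
}

const request = buildWarehouseRequest(
  "https://example.com/dataset", // placeholder URL
  "YOUR-UNIQUE-ID",
  "YOUR-AUTH-NAME",
  "YOUR-AUTH-VALUE"
);
// Outside n8n, you could reproduce the call with
// fetch(request.url, { method: request.method, headers: request.headers }).
```

This is only to demystify the node's parameters; in the workflow itself, n8n assembles and sends the request for you.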
This data set includes sales information from 30 customers with five columns: - `orderID`: The unique id of each order. - `customerID`: The unique id of each customer. - `employeeName`: The name of Nathan's colleague responsible for the customer. - `orderPrice`: The total price of the customer's order. - `orderStatus`: Whether the customer's order status is `booked` or still in `processing`. ## What's next? **Nathan 🙋**: This is great! You already automated an important part of my job with only one node. Now instead of manually accessing the data every time I need it, I can use the HTTP Request Node to automatically get the information. **You 👩‍🔧**: Exactly! In the next step, I'll help you one step further and insert the data you retrieved into Airtable. # 2. Inserting data into Airtable In this step of the workflow, you will learn how to insert the data received from the HTTP Request node into Airtable using the [Airtable node](../../../../integrations/builtin/app-nodes/n8n-nodes-base.airtable/). Spreadsheet nodes You can replace the Airtable node with another spreadsheet app/service. For example, n8n also has a node for [**Google Sheets**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.googlesheets/). After this step, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.2.json) ## Configure your table If we're going to insert data into Airtable, we first need to set up a table there. To do this: 1. [Create an Airtable account](https://airtable.com/signup). 1. In your Airtable workspace add a new base from scratch and name it, for example, *beginner course*. *Create an Airtable base* 1. In the beginner course base, by default, you have a table called **Table 1** with four fields: `Name`, `Notes`, `Assignee`, and `Status`. These fields aren't relevant for us since they aren't in our "orders" data set. 
This brings us to the next point: the names of the fields in Airtable have to match the names of the columns in the node result. Prepare the table by doing the following: - Rename the table from **Table 1** to **orders** to make it easier to identify. - Delete the 3 blank records created by default. - Delete the `Notes`, `Assignee`, and `Status` fields. - Edit the `Name` field (the primary field) to read `orderID`, with the **Number** field type. - Add the rest of the fields, and their field types, using the table below as a reference: | Field name | Field type | | -------------- | ---------------- | | `orderID` | Number | | `customerID` | Number | | `employeeName` | Single line text | | `orderPrice` | Number | | `orderStatus` | Single line text | Now your table should look like this: *Orders table in Airtable* Now that the table is ready, let's return to the workflow in the n8n Editor UI. ## Add an Airtable node to the HTTP Request node Add an Airtable node connected to the HTTP Request node. Remember You can add a node connected to an existing node by selecting the **+** icon next to the existing node. In the node panel: 1. Search for Airtable. 1. Select **Create a record** from the **Record Actions** search results. This will add the Airtable node to your canvas and open the node details window. In the Airtable node window, configure the following parameters: - **Credential to connect with**: - Select **Create new credential**. - Keep the default option **Connect using: Access Token** selected. - **Access token**: Follow the instructions from the [Airtable credential](../../../../integrations/builtin/credentials/airtable/) page to create your token. Use the recommended scopes and add access to your beginners course base. Save the credential and close the Credential window when you're finished. - **Resource**: Record. - **Operation**: Create. This operation will create new records in the table. 
- **Base**: You can pick your base from a list (for example, beginner course). - **Table**: orders. - **Mapping Column Mode**: Map automatically. In this mode, the incoming data fields must have the same names as the columns in Airtable. ## Test the Airtable node Once you've finished configuring the Airtable node, execute it by selecting **Execute step**. This might take a moment to process, but you can follow the progress by viewing the base in Airtable. Your results should look like this: *Airtable node results* All 30 data records will now appear in the orders table in Airtable: *Imported records in the orders table* ## What's next? **Nathan 🙋**: Wow, this automation is already so useful! But this inserts all collected data from the HTTP Request node into Airtable. Remember that I actually need to insert only processing orders in the table and calculate the price of booked orders? **You 👩‍🔧**: Sure, no problem. As a next step, I'll use a new node to filter the orders based on their status. # 3. Filtering Orders In this step of the workflow, you will learn how to filter data using conditional logic and how to use expressions in nodes using the [If node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.if/). After this step, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.3.json) To insert only processing orders into Airtable, we need to filter our data by `orderStatus`. Basically, we want to tell the program that *if* the `orderStatus` is processing, *then* insert all records with this status into Airtable; *else* (that is, if the `orderStatus` isn't *processing*), calculate the sum of all orders with the other `orderStatus` (`booked`). This if-then-else command is conditional logic. In n8n workflows, you can add conditional logic with the [If node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.if/), which splits a workflow conditionally based on comparison operations. If vs.
Switch If you need to filter data on more than boolean values (true and false), use the [Switch node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.switch/). The Switch node is similar to the If node, but supports multiple output connectors. ## Add If node before the Airtable node First, let's add an If node between the HTTP Request node and the Airtable node: 1. Hover over the arrow connecting the **HTTP Request** node and the **Airtable** node. 1. Select the **+** sign between the HTTP Request node and the Airtable node. ## Configure the If node Selecting the plus removes the connection between the HTTP Request node and the Airtable node. Now, let's add an If node connected to the HTTP Request node: 1. Search for the If node. 1. Select it when it appears in the search. For the If node, we'll use an expression. Expressions An [expression](../../../../glossary/#expression-n8n) is a string of characters and symbols in a programming language that can be evaluated to get a value, often according to its input. In n8n workflows, you can use expressions in a node to refer to another node for input data. In our example, the If node references the data output by the HTTP Request node. In the If node window, configure the parameters: - Set the `value1` placeholder to `{{ $json.orderStatus }}` with the following steps: 1. Hover over the `value1` field. 1. Select the **Expression** tab on the right side of the `value1` field. 1. Next, open the expression editor by selecting the link icon: *Opening the Expression Editor* 1. Use the left-side panel to select **HTTP Request** > **orderStatus** and drag it into the **Expression** field in the center of the window. *Expression Editor in the If node* 1. Once you add the expression, close the **Edit Expression** dialog. - **Operation**: Select **String** > **is equal to**. - Set the `value2` placeholder to `processing`.
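Conceptually, the If node routes each incoming item down the `true` or `false` branch based on the comparison. Here's a plain-JavaScript sketch of that behavior, using invented sample items shaped like Nathan's data:

```javascript
// Sketch of what the If node does with each item.
// Sample items invented; real items come from the HTTP Request node.
const items = [
  { json: { orderID: 1, orderStatus: "processing" } },
  { json: { orderID: 2, orderStatus: "booked" } },
  { json: { orderID: 3, orderStatus: "processing" } },
];

// true branch: orderStatus is equal to "processing"
const trueBranch = items.filter(
  (item) => item.json.orderStatus === "processing"
);
// false branch: everything else (here, the booked orders)
const falseBranch = items.filter(
  (item) => item.json.orderStatus !== "processing"
);
```

Each item is routed independently; in the real node, the left side of the comparison is supplied per item by the `{{ $json.orderStatus }}` expression.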
Data Type Make sure to select the correct data type (boolean, date & time, number, or string) when you select the **Operation**. Select **Execute step** to test the If node. Your results should look like this: *If node output* Note that the orders with a `processing` order status should show for the **True Branch** output, while the orders with a `booked` order status should show in the **False Branch** output. Close the If node detail view when you're finished. ## Insert data into Airtable Next, we want to insert this data into Airtable. Remember what Nathan said at the end of the [Inserting data into Airtable](../chapter-5.2/) lesson? > I actually need to insert only processing orders in the table... Since Nathan only needs the `processing` orders in the table, we'll connect the Airtable node to the If node's `true` connector. In this case, since the Airtable node is already on our canvas, select the **If node** `true` connector and drag it to the Airtable node. It's a good idea at this point to retest the Airtable node. Before you do, open your table in Airtable and delete all existing rows. Then open the Airtable node window in n8n and select **Execute step**. Review your data in Airtable to be sure your workflow only added the correct orders (those with `orderStatus` of `processing`). There should be 14 records now instead of 30. At this stage, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.3.json) ## What's next? **Nathan 🙋**: This If node is so useful for filtering data! Now I have all the information about processing orders. I actually only need the `employeeName` and `orderID`, but I guess I can keep all the other fields just in case. **You 👩‍🔧**: Actually, I wouldn't recommend doing that. Inserting more data requires more computational power, the data transfer is slower and takes longer, and takes up more storage resources in your table. 
In this particular case, 14 records with 5 fields might not seem to make a significant difference, but if your business grows to thousands of records and dozens of fields, things add up, and even one extra column can affect performance. **Nathan 🙋**: Oh, that's good to know. Can you select only two fields from the processing orders? **You 👩‍🔧**: Sure, I'll do that in the next step. # 4. Setting Values for Processing Orders In this step of the workflow, you will learn how to select and set data before transferring it to Airtable using the Edit Fields (Set) node. After this step, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.4.json) The next step in Nathan's workflow is to filter the data to only insert the `employeeName` and `orderID` of all `processing` orders into Airtable. For this, you need to use the [Edit Fields (Set) node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.set/), which allows you to select and set the data you want to transfer from one node to another. Edit Fields node The Edit Fields node can set completely new data as well as overwrite data that already exists. This node is crucial in workflows that expect incoming data from previous nodes, such as when inserting values into spreadsheets or databases. ## Add another node before the Airtable node In your workflow, add another node on the If node's `true` connector, between the **If node** and the **Airtable node**, the same way you added the If node in the [Filtering Orders](../chapter-5.3/#add-if-node-before-the-airtable-node) lesson. Feel free to drag the Airtable node further away if your canvas feels crowded. ## Configure the Edit Fields node Select the **+** sign coming off the If node's `true` connector, then search for the **Edit Fields (Set)** node. With the Edit Fields node window open, configure these parameters: - Ensure **Mode** is set to **Manual Mapping**.
- While you can use the **Expression editor** we used in the [Filtering Orders](../chapter-5.3/) lesson, this time, let's drag the fields from the **Input** into the **Fields to Set**: - Drag **If** > **orderID** as the first field. - Drag **If** > **employeeName** as the second field. - Ensure that **Include Other Input Fields** is set to false. Select **Execute step**. You should see the following results: *Edit Fields (Set) node* ## Add data to Airtable Next, let's insert these values into Airtable: 1. Go to your Airtable base. 1. Add a new table called `processingOrders`. 1. Replace the existing columns with two new columns: - `orderID` (primary field): Number - `employeeName`: Single line text Reminder If you get stuck, refer to the [Inserting data into Airtable](../chapter-5.2/) lesson. 1. Delete the three empty rows in the new table. 1. In n8n, connect the Edit Fields node's connector to the **Airtable node**. 1. Update the Airtable node configuration to point to the new `processingOrders` table instead of the `orders` table. 1. Test your Airtable node to be sure it inserts records into the new `processingOrders` table. At this stage, your workflow should now look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.4.json) ## What's next? **Nathan 🙋**: You've already automated half of my work! Now I still need to calculate the booked orders for my colleagues. Can we automate that as well? **You 👩‍🔧**: Yes! In the next step, I'll use some JavaScript code in a node to calculate the booked orders. # 5. Calculating Booked Orders In this step of the workflow, you will learn how n8n structures data and how to add custom JavaScript code to perform calculations using the Code node.
After this step, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.5.json) The next step in Nathan's workflow is to calculate two values from the booked orders: - The total number of booked orders - The total value of all booked orders To calculate data and add more functionality to your workflows, you can use the Code node, which lets you write custom JavaScript code. ## About the Code node Code node modes The Code node has two operational **modes**, depending on how you want to process items: - **Run Once for All Items** allows you to write code to process all input items at once, as a group. - **Run Once for Each Item** executes your code once for each input item. Learn more about how to use the [Code node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.code/). In n8n, the data that's passed between nodes is an array of objects with the following JSON structure:

```
[
  {
    "json": { // (1)!
      "apple": "beets",
      "carrot": {
        "dill": 1
      }
    },
    "binary": { // (2)!
      "apple-picture": { // (3)!
        "data": "....", // (4)!
        "mimeType": "image/png", // (5)!
        "fileExtension": "png", // (6)!
        "fileName": "example.png" // (7)!
      }
    }
  },
  ...
]
```

1. (required) n8n stores the actual data within a nested `json` key. This property is required, but can be set to anything from an empty object (like `{}`) to arrays and deeply nested data. The Code node automatically wraps the data in a `json` object and parent array (`[]`) if it's missing.
1. (optional) The item's binary data. Most items in n8n don't contain binary data.
1. (required) An arbitrary key name for the binary data.
1. (required) The Base64-encoded binary data.
1. (optional) The file's MIME type; set this if possible.
1. (optional) The file extension; set this if possible.
1. (optional) The file name; set this if possible.

You can learn more about the expected format on the [n8n data structure](../../../../data/data-structure/) page. ## Configure the Code node Now let's see how to accomplish Nathan's task using the Code node.
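Before writing the calculation, it helps to see that item structure as runnable JavaScript. A minimal sketch of a valid item array, using field names from Nathan's data (values invented):

```javascript
// A minimal, valid n8n item array: an array of objects, each with its
// data nested under the required "json" key. Values are invented.
const items = [
  { json: { orderID: 1, orderPrice: 100 } },
  { json: { orderID: 2, orderPrice: 250 } },
];

// Reading a value always goes through the "json" key:
const firstPrice = items[0].json.orderPrice; // 100
```

Keep this shape in mind: the code we paste into the Code node below both reads items in this format (via `$input.all()`) and must return data in this same format.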
In your workflow, add a **Code node** connected to the `false` branch of the **If node**. With the Code node window open, configure these parameters: - **Mode**: Select **Run Once for All Items**. - **Language**: Select **JavaScript**. Using Python in code nodes While we use JavaScript below, you can also use Python in the Code node. To learn more, refer to the [Code node](../../../../code/code-node/) documentation. - Copy the Code below and paste it into the **Code** box to replace the existing code: ``` let items = $input.all(); let totalBooked = items.length; let bookedSum = 0; for (let i=0; i < items.length; i++) { bookedSum = bookedSum + items[i].json.orderPrice; } return [{ json: {totalBooked, bookedSum} }]; ``` Notice the format in which we return the results of the calculation: ``` return [{ json: {totalBooked, bookedSum} }] ``` Data structure error If you don't use the correct data structure, you will get an error message: `Error: Always an Array of items has to be returned!` Now select **Execute step** and you should see the following results: *Code node output* ## What's next? **Nathan 🙋**: Wow, the Code node is powerful! This means that if I have some basic JavaScript skills I can power up my workflows. **You 👩‍🔧**: Yes! You can progress from no-code to low-code! **Nathan 🙋**: Now, how do I send the calculations for the booked orders to my team's Discord channel? **You 👩‍🔧**: There's an n8n node for that. I'll set it up in the next step. # 6. Notifying the Team In this step of the workflow, you will learn how to send messages to a Discord channel using the [Discord node](../../../../integrations/builtin/app-nodes/n8n-nodes-base.discord/). After this step, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/chapter-5/chapter-5.6.json) Now that you have a calculated summary of the booked orders, you need to notify Nathan's team in their Discord channel. 
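The notification we want to send interpolates the Code node's two output values, `totalBooked` and `bookedSum`. As a plain-JavaScript sketch of that interpolation (sample numbers invented):

```javascript
// Sketch of the announcement text, built from invented sample values
// standing in for the Code node's output.
const result = { totalBooked: 16, bookedSum: 12345 };

const message =
  `This week we've ${result.totalBooked} booked orders ` +
  `with a total value of ${result.bookedSum}.`;
```

In the Discord node itself, the same interpolation happens with n8n's `{{ ... }}` expression syntax rather than JavaScript template literals.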
For this workflow, you will send messages to the [n8n server](https://discord.gg/G98WXzsjky) on Discord. Before you begin the steps below, use the link above to connect to the n8n server on Discord. Be sure you can access the `#course-level-1` channel. Communication app nodes You can replace the Discord node with another communication app. For example, n8n also has nodes for [Slack](../../../../integrations/builtin/app-nodes/n8n-nodes-base.slack/) and [Mattermost](../../../../integrations/builtin/app-nodes/n8n-nodes-base.mattermost/). In your workflow, add a Discord node connected to the Code node. When you search for the Discord node, look for **Message Actions** and select **Send a message** to add the node. In the Discord node window, configure these parameters: - **Connection Type**: Select **Webhook**. - **Credential for Discord Webhook**: Select **- Create New Credential -**. - Copy the **Webhook URL** from the email you received when you signed up for this course and paste it into the **Webhook URL** field of the credentials. - Select **Save** and then close the credentials dialog. - **Operation**: Select **Send a Message**. - **Message**: - Select the **Expression** tab on the right side of the Message field. - Copy the text below and paste it into the **Expression** window, or construct it manually using the **Expression Editor**. ``` This week we've {{$json["totalBooked"]}} booked orders with a total value of {{$json["bookedSum"]}}. My Unique ID: {{ $('HTTP Request').params["headerParameters"]["parameters"][0]["value"] }} ``` Now select **Execute step** in the Discord node. If all works well, you should see this output in n8n: *Discord node output* And your message should appear in the Discord channel #course-level-1: *Discord message* ## What's next? **Nathan 🙋**: Incredible, you've saved me hours of tedious work already! Now I can execute this workflow when I need it. I just need to remember to run it every Monday morning at 9 AM. 
**You 👩‍🔧**: Don't worry about that, you can actually schedule the workflow to run on a specific day, time, or interval. I'll set this up in the next step. # 7. Scheduling the Workflow In this step of the workflow, you will learn how to schedule your workflow so that it runs automatically at a set time/interval using the Schedule Trigger node. After this step, your workflow should look like this: [View workflow file](/_workflows//courses/level-one/finished.json) The workflow you've built so far executes only when you click on **Execute Workflow**. But Nathan needs it to run automatically every Monday morning. You can do this with the [Schedule Trigger](../../../../integrations/builtin/core-nodes/n8n-nodes-base.scheduletrigger/), which allows you to schedule workflows to run periodically at fixed dates, times, or intervals. To achieve this, we'll remove the Manual Trigger node we started with and replace it with a Schedule Trigger node instead. ## Remove the Manual Trigger node First, let's remove the Manual Trigger node: 1. Select the Manual Trigger node connected to your HTTP Request node. 1. Select the trash can icon to delete. This removes the Manual Trigger node and you'll see an "Add first step" option. ## Add the Schedule Trigger node 1. Open the nodes panel and search for **Schedule Trigger**. 1. Select it when it appears in the search results. In the Schedule Trigger node window, configure these parameters: - **Trigger Interval**: Select **Weeks**. - **Weeks Between Triggers**: Enter `1`. - **Trigger on weekdays**: Select **Monday** (and remove **Sunday** if added by default). - **Trigger at Hour**: Select **9am**. - **Trigger at Minute**: Enter `0`. 
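As a plain JavaScript sketch of what this schedule means (purely illustrative; the Schedule Trigger node computes this for you, respecting the configured timezone), here is one way to find the next Monday at 09:00 after a given date:

```javascript
// Compute the next Monday at 09:00 after `from`.
// Illustrative only; the Schedule Trigger node handles this internally.
function nextMondayAt9(from) {
  const next = new Date(from);
  next.setHours(9, 0, 0, 0);
  // getDay(): 0 = Sunday, 1 = Monday, ...
  let daysAhead = (1 - next.getDay() + 7) % 7;
  // If today is Monday but 09:00 has already passed, wait a week.
  if (daysAhead === 0 && next <= from) daysAhead = 7;
  next.setDate(next.getDate() + daysAhead);
  return next;
}

// Example: from a Wednesday, the next run lands on the following Monday.
const run = nextMondayAt9(new Date('2024-01-03T12:00:00'));
console.log(run.getDay()); // 1 (Monday)
```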
Your Schedule Trigger node should look like this: *Schedule Trigger Node* Keep in mind To ensure accurate scheduling with the Schedule Trigger node, be sure to set the correct timezone for your [n8n instance](../../../../manage-cloud/set-cloud-timezone/) or the [workflow's settings](../../../../workflows/settings/). The Schedule Trigger node will use the workflow's timezone if it's set; it will fall back to the n8n instance's timezone if it's not. ## Connect the Schedule Trigger node Return to the canvas and connect your Schedule Trigger node to the HTTP Request node by dragging the arrow from it to the HTTP Request node. Your full workflow should look like this: [View workflow file](/_workflows//courses/level-one/finished.json) ## What's next? **You 👩‍🔧**: That was it for the workflow! I've added and configured all necessary nodes. Now every time you click on **Execute workflow**, n8n will execute all the nodes: getting, filtering, calculating, and transferring the sales data. **Nathan 🙋**: This is just what I needed! My workflow will run automatically every Monday morning, correct? **You 👩‍🔧**: Not so fast. To do that, you need to publish your workflow. I'll do this in the next step and show you how to interpret the execution log. # 8. Publishing and Examining the Workflow In this step of the workflow, you will learn how to publish your workflow and change the default workflow settings. Publishing a workflow means that it will run automatically every time a trigger node receives input or meets a condition. By default, newly created workflows are not published. To publish your workflow, click **Publish** in the top navigation of the Editor UI. Nathan's workflow will now be executed automatically every Monday at 9 AM: *Published workflow* ## Workflow Executions An execution represents a completed run of a workflow, from the first to the last node. n8n logs workflow executions, allowing you to see if the workflow succeeded or not. 
The execution log is useful for debugging your workflow and seeing at what stage it runs into issues. To view the executions for a specific workflow, you can switch to the **Executions** tab when the workflow is open on the canvas. Use the **Editor** tab to swap back to the node editor. To see the execution log for the entire n8n instance, in your Editor UI, select **Overview** and then select the **Executions** tab in the main panel. *Execution List* The **Executions** window displays a table with the following information: - **Name**: The name of the workflow - **Started At**: The date and time when the workflow started - **Status**: The status of the workflow (Waiting, Running, Succeeded, Cancelled, or Failed) and the amount of time it took the workflow to execute - **Execution ID**: The ID of this workflow execution Workflow execution status You can filter the displayed **Executions** by workflow and by status (**Any Status**, **Failed**, **Cancelled**, **Running**, **Success**, or **Waiting**). The information displayed here depends on which executions you configure to save in the [**Workflow Settings**](../../../../workflows/settings/). ## Workflow Settings You can customize your workflows and executions, or overwrite some global default settings in [**Workflow Settings**](../../../../workflows/settings/). Access these settings by selecting the three dots in the upper right corner of the Editor UI when the workflow is open on the canvas, then select **Settings**. *Workflow Settings* In the **Workflow Settings** window you can configure the following settings: - **Execution Order**: Choose the execution logic for multi-branch workflows. You should leave this set to `v1` if you don't have workflows that rely on the legacy execution ordering. - [**Error Workflow**](../../../../flow-logic/error-handling/): A workflow to run if the execution of the current workflow fails. 
- **This workflow can be called by**: Workflows allowed to call this workflow using the [Execute Sub-workflow node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/). - **Timezone**: The timezone to use in the current workflow. If not set, the global timezone applies. In particular, this setting is important for the [Schedule Trigger node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.scheduletrigger/), as you want to make sure that the workflow gets executed at the right time. - **Save failed production executions**: Whether n8n should save the execution data of the workflow when it fails. Default is to save. - **Save successful production executions**: Whether n8n should save the execution data of the workflow when it succeeds. Default is to save. - **Save manual executions**: Whether n8n should save executions started from the Editor UI. Default is to save. - **Save execution progress**: Whether n8n should save the execution data of each node. If set to Save, you can resume the workflow from where it stopped in case of an error, though keep in mind that this might make the execution slower. Default is to not save. - **Timeout Workflow**: Whether to cancel a workflow execution after a specific period of time. Default is no timeout. ## What's next? **You 👩‍🔧**: That was it! Now you have a 7-node workflow that will run automatically every Monday morning. You don't have to worry about remembering to wrangle the data. Instead, you can start your week with more meaningful or exciting work. **Nathan 🙋**: This workflow is incredibly helpful, thank you! Now, what's next for you? **You 👩‍🔧**: I'd like to build more workflows, share them with others, and use some workflows built by other people. # Level two: Introduction Welcome to the **n8n Course Level 2**! ## Is this course right for me? This course is for you if you: - Want to automate somewhat complex business processes. - Want to dive deeper into n8n after taking the [Level 1 course](../level-one/). 
## What will I learn in this course? The focus in this course is on working with data. You will learn how to: - Use the data structure of n8n correctly. - Process different data types (for example, XML, HTML, date, time, and binary data). - Merge data from different sources (for example, a database, spreadsheet, or CRM). - Use functions and JavaScript code in the [Code node](../../integrations/builtin/core-nodes/n8n-nodes-base.code/). - Deal with error workflows and workflow errors. You will learn all this by completing short practical exercises after the theoretical explanations and by building a business workflow following instructions. ## What do I need to get started? To follow along with this course (at a comfortable pace), you will need the following: - **n8n set up**: You can use the [self-hosted version](../../hosting/installation/npm/) or [n8n Cloud](../../manage-cloud/overview/). - **A user ID**: [Sign up here](https://n8n-community.typeform.com/to/HQoQ7nXg) to get your unique ID and other credentials you will need in this course (Level 2). If you're a Level 1 finisher, please sign up again, as you'll get different credentials for the Level 2 workflows. - **Basic n8n skills**: We strongly recommend taking the [Level 1 course](../level-one/) before this one. - **Basic JavaScript understanding** ## How long does the course take? Completing the course should take around **two hours**. You don't have to complete it in one go; feel free to take breaks and resume whenever you are ready. ## How do I complete the course? There are two milestones in this course that test your knowledge of what you have learned in the lessons: - Building the [main workflow](chapter-5/chapter-5.0/) - Passing the [quiz](https://n8n-community.typeform.com/to/r9hDbytg) at the end of the course You can always **check your progress** throughout the course by entering your unique ID [here](https://internal.users.n8n.cloud/webhook/course-level-2/verify). 
If you successfully complete the milestones above, you will get [**a badge and an avatar**](https://community.n8n.io/badges/105/completed-n8n-course-level-2) in your forum profile. You can then share your profile and course verification ID to showcase your n8n skills to others. [Let's get started!](chapter-1/) # Understanding the data structure In this chapter, you will learn about the data structure of n8n and how to use the [Code node](../../../integrations/builtin/core-nodes/n8n-nodes-base.code/) to transform data and simulate node outputs. ## Data structure of n8n In a basic sense, n8n nodes function as an Extract, Transform, Load (ETL) tool. The nodes allow you to access (extract) data from multiple disparate sources, modify (transform) that data in a particular way, and pass (load) it along to where it needs to be. The data that moves along from node to node in your workflow must be in a format (structure) that can be recognized and interpreted by each node. In n8n, this required structure is an array of objects. About array of objects An array is a list of values. The array can be empty or contain several elements. Each element is stored at a position (index) in the list, starting at 0, and can be referenced by the index number. For example, in the array `["Leonardo", "Michelangelo", "Donatello", "Raphael"];` the element `Donatello` is stored at index 2. An object stores key-value pairs, instead of values at numbered indexes as in arrays. The order of the pairs isn't important, as the values can be accessed by referencing the key name. For example, the object below contains two properties (`name` and `color`): ``` { name: 'Michelangelo', color: 'blue', } ``` An array of objects is an array that contains one or more objects. 
For example, the array `turtles` below contains four objects: ``` var turtles = [ { name: 'Michelangelo', color: 'orange', }, { name: 'Donatello', color: 'purple', }, { name: 'Raphael', color: 'red', }, { name: 'Leonardo', color: 'blue', } ]; ``` You can access the properties of an object using dot notation with the syntax `object.property`. For example, `turtles[1].color` gets the color of the second turtle. Data sent from one node to another is sent as an array of JSON objects. The elements in this collection are called items. *Items* An n8n node performs its action on each item of incoming data. *Items in the Customer Datastore node* ## Creating data sets with the Code node Now that you are familiar with the n8n data structure, you can use it to create your own data sets or simulate node outputs. To do this, use the [Code node](../../../integrations/builtin/core-nodes/n8n-nodes-base.code/) to write JavaScript code defining your array of objects with the following structure: ``` return [ { json: { apple: 'beets', } } ]; ``` For example, the array of objects representing the Ninja turtles would look like this in the Code node: *Array of objects in the Code node* JSON objects Notice that this array of objects contains an extra key: `json`. n8n expects you to wrap each object in an array in another object, with the key `json`. *Illustration of data structure in n8n* It's good practice to pass the data in the right structure used by n8n. But don't worry if you forget to add the `json` key to an item, n8n (version 0.166.0 and above) adds it automatically. You can also have nested pairs, for example if you want to define a primary and a secondary color. In this case, you need to further wrap the key-value pairs in curly braces `{}`. n8n data structure video [This talk](https://www.youtube.com/watch?v=mQHT3Unn4tY) offers a more detailed explanation of data structure in n8n. 
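The wrapping described above can be sketched in plain JavaScript: ordinary objects are reshaped into the item structure n8n expects by nesting each one under a `json` key:

```javascript
const turtles = [
  { name: 'Michelangelo', color: 'orange' },
  { name: 'Donatello', color: 'purple' },
];

// Wrap each plain object under a `json` key to form n8n items.
const items = turtles.map(turtle => ({ json: turtle }));

// Dot notation still works, one level deeper.
console.log(items[1].json.color); // 'purple'
```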
### Exercise In a Code node, create an array of objects named `myContacts` that contains the properties `name` and `email`, and the `email` property is further split into `personal` and `work`. Show me the solution In the **Code node**, in the JavaScript Code field you have to write the following code: ``` var myContacts = [ { json: { name: 'Alice', email: { personal: 'alice@home.com', work: 'alice@wonderland.org' }, } }, { json: { name: 'Bob', email: { personal: 'bob@mail.com', work: 'contact@thebuilder.com' }, } }, ]; return myContacts; ``` When you execute the **Code node**, the result should look like this: *Result of Code node* ## Referencing node data with the Code node Just like you can use [expressions](../../../data/expressions/) to reference data from other nodes, you can also use some [methods and variables](../../../code/builtin/overview/) in the **Code node**. Please make sure you read these pages before continuing to the next exercise. ### Exercise Let's build on the previous exercise, in which you used the Code node to create a data set of two contacts with their names and emails. Now, connect a second Code node to the first one. In the new node, write code to create a new column named `workEmail` that references the work email of the first contact. Show me the solution In the **Code node**, in the JavaScript Code field you have to write the following code: ``` let items = $input.all(); items[0].json.workEmail = items[0].json.email['work']; return items; ``` When you execute the **Code node**, the result should look like this: *Code node reference* ## Transforming data The incoming data from some nodes may have a different data structure than the one used in n8n. In this case, you need to [transform the data](../../../data/transforming-data/), so that each item can be processed individually. 
The two most common operations for data transformation are: - Creating multiple items from one item - Creating a single item from multiple items There are several ways to transform data for the purposes mentioned above: - Use n8n's [data transformation nodes](../../../data/expressions/#other-data-transformation-nodes). Use these nodes to modify the structure of incoming data that contains lists (arrays) without needing to use JavaScript code in the **Code node**: - Use the [**Split Out node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.splitout/) to separate a single data item containing a list into multiple items. - Use the [**Aggregate node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.aggregate/) to take separate items, or portions of them, and group them together into individual items. - Use the **Code node** to write JavaScript functions that modify the data structure of incoming data using the **Run Once for All Items** mode: - To create multiple items from a single item, you can use JavaScript code like this. This example assumes that the item has a key named `data` set to an array of items in the form of: `[{ "data": [{}, {}, ...] }]`: ``` return $input.first().json.data.map(item => { return { json: item } }); ``` - To create a single item from multiple items, you can use this JavaScript code: ``` return [ { json: { data_object: $input.all().map(item => item.json) } } ]; ``` These JavaScript examples assume your entire input is what you want to transform. As in the exercise above, you can also apply either operation to a specific field by referencing it on the item. For example, if the `workEmail` field from the earlier exercise held an array of email addresses instead of a single string, you could run code like this (each email is wrapped in an object, since the `json` key must hold an object, not a bare string): ``` let items = $input.all(); return items[0].json.workEmail.map(email => { return { json: { email: email } } }); ``` ### Exercise 1. Use the **HTTP Request node** to make a GET request to the PokéAPI `https://pokeapi.co/api/v2/pokemon`. 
(This API requires no authentication). 1. Transform the data in the `results` field with the **Split Out node**. 1. Transform the data in the `results` field with the **Code node**. Show me the solution 1. To get the pokemon from the PokéAPI, execute the **HTTP Request node** with the following parameters: - **Authentication**: None - **Request Method**: GET - **URL**: https://pokeapi.co/api/v2/pokemon 1. To transform the data with the **Split Out node**, connect this node to the **HTTP Request node** and set the following parameters: - **Field To Split Out**: results - **Include**: No Other Fields 1. To transform the data with the **Code node**, connect this node to the **HTTP Request node** and write the following code in the JavaScript Code field: ``` let items = $input.all(); return items[0].json.results.map(item => { return { json: item } }); ``` # Processing different data types In this chapter, you will learn how to process different types of data using [n8n core nodes](../../../workflows/components/nodes/). ## HTML and XML data You're most likely familiar with HTML and XML. HTML vs. XML HTML is a markup language used to describe the structure and semantics of a web page. XML looks similar to HTML, but the tag names are different, as they describe the kind of data they hold. If you need to process HTML or XML data in your n8n workflows, use the [**HTML node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.html/) or the [**XML node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.xml/). Use the **HTML node** to extract HTML content of a webpage by referencing CSS selectors. This is useful if you want to collect structured information from a website (web-scraping). ### HTML Exercise Let's get the title of the latest n8n blog post: 1. Use the **HTTP Request node** to make a GET request to the URL `https://blog.n8n.io/` (this endpoint requires no authentication). 1. 
Connect an **HTML node** and configure it to extract the title of the first blog post on the page. - Hint: If you're not familiar with CSS selectors or reading HTML, the CSS selector `.post .item-title a` should help! Show me the solution 1. Configure the HTTP Request node with the following parameters: - **Authentication**: None - **Request Method**: GET - **URL**: https://blog.n8n.io/ The result should look like this: *Result of HTTP Request node* 2. Connect an **HTML node** to the **HTTP Request node** and configure the former's parameters: - **Operation**: Extract HTML Content - **Source Data**: JSON - **JSON Property**: data - **Extraction Values**: - **Key**: title - **CSS Selector**: `.post .item-title a` - **Return Value**: HTML You can add more values to extract more data. The result should look like this: *Result of HTML Extract node* Use the **XML node** to convert XML to JSON and JSON to XML. This operation is useful if you work with different web services that use either XML or JSON and need to get and submit data between them in the two formats. ### XML Exercise In the [final exercise of Chapter 1](../chapter-1/#exercise_2), you used an **HTTP Request node** to make a request to the PokéAPI. In this exercise, we'll return to that same API but we'll convert the output to XML: 1. Add an **HTTP Request node** that makes the same request to the PokéAPI at `https://pokeapi.co/api/v2/pokemon`. 1. Use the XML node to convert the JSON output to XML. Show me the solution 1. To get the pokemon from the PokéAPI, execute the **HTTP Request node** with the following parameters: - **Authentication**: None - **Request Method**: GET - **URL**: https://pokeapi.co/api/v2/pokemon 1. Connect an **XML node** to it with the following parameters: - **Mode**: JSON to XML - **Property name**: data The result should look like this: *XML node (JSON to XML) – Table View* To transform data the other way around, select the mode **XML to JSON**. 
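To make the JSON-to-XML direction concrete, here is a toy serializer in plain JavaScript for flat objects (an illustration only; the XML node handles nesting, attributes, and escaping for you):

```javascript
// Toy JSON-to-XML serializer for a flat object, wrapped in a root tag.
// Real-world conversion is what the XML node does for you.
function toXml(obj, root = 'data') {
  const body = Object.entries(obj)
    .map(([key, value]) => `<${key}>${value}</${key}>`)
    .join('');
  return `<${root}>${body}</${root}>`;
}

console.log(toXml({ name: 'bulbasaur', id: 1 }));
// <data><name>bulbasaur</name><id>1</id></data>
```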
## Date, time, and interval data Date and time data types include `DATE`, `TIME`, `DATETIME`, `TIMESTAMP`, and `YEAR`. The dates and times can be passed in different formats, for example: - `DATE`: March 29 2022, 29-03-2022, 2022/03/29 - `TIME`: 08:30:00, 8:30, 20:30 - `DATETIME`: 2022/03/29 08:30:00 - `TIMESTAMP`: 1616108400 (Unix timestamp), 1616108400000 (Unix ms timestamp) - `YEAR`: 2022, 22 There are a few ways you can work with dates and times: - Use the [**Date & Time node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.datetime/) to convert date and time data to different formats and calculate dates. - Use the [**Schedule Trigger node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.scheduletrigger/) to schedule workflows to run at a specific time, interval, or duration. Sometimes, you might need to pause the workflow execution. This might be necessary if you know that a service doesn't process the data instantly or is slow to return all the results. In these cases, you don't want n8n to pass incomplete data to the next node. If you run into situations like this, use the [**Wait node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.wait/) after the node that you want to delay. The **Wait node** pauses the workflow execution and resumes it: - At a specific time. - After a specified time interval. - On a webhook call. ### Date Exercise Build a workflow that rounds an input date from the Customer Datastore node (which you used before) up to the end of the month. Then, if the calculated date occurred after 1959, the workflow waits 1 minute before [setting](../../../integrations/builtin/core-nodes/n8n-nodes-base.set/) the calculated date as a value. The workflow should be triggered every 30 minutes. To begin: 1. Add the **Customer Datastore (n8n training) node** with the **Get All People** action selected. Select **Return All**. 1. Add the **Date & Time node** to **Round Up** the `created` date from the datastore to **End of Month**. 
Output this to the field `new-date`. Include all input fields. 1. Add the **If node** to check if that new rounded date is after `1960-01-01 00:00:00`. 1. Add the **Wait node** to the True output of that node and set it to wait for one minute. 1. Add the **Edit Fields (Set) node** to set a new field called `outputValue` to a String containing `new-date`. Include all input fields. 1. Add the **Schedule Trigger node** at the beginning of the workflow to trigger it every 30 minutes. (You can keep the [Manual Trigger node](../../../integrations/builtin/core-nodes/n8n-nodes-base.manualworkflowtrigger/) for testing!) Show me the solution 1. Add the **Customer Datastore (n8n training) node** with the **Get All People** action selected. - Select the option to **Return All**. 1. Add a **Date & Time node** connected to the Customer Datastore node. Select the option to **Round a Date**. - Add the `created` date as the **Date** to round. - Select `Round Up` as the **Mode** and `End of Month` as the **To**. - Set the **Output Field Name** as `new-date`. - In **Options**, select **Add Option** and use the control to **Include Input Fields**. 1. Add an **If node** connected to the **Date & Time node**. - Add the `new-date` field as the first part of the condition. - Set the comparison to **Date & Time > is after**. - Add `1960-01-01 00:00:00` as the second part of the expression. (This should produce 3 items in the True Branch and 2 items in the False Branch.) 1. Add a **Wait node** to the True output of the **If node**. - Set **Resume** to `After Time interval`. - Set **Wait Amount** to `1.00`. - Set **Wait Unit** to `Minutes`. 1. Add an **Edit Fields (Set) node** to the **Wait node**. - Use either JSON or Manual Mapping **Mode**. - Set a new field called `outputValue` to be the value of the `new-date` field. - Select the option to **Include Other Input Fields** and include **All** fields. 1. Add a **Schedule Trigger node** at the beginning of the workflow. 
- Set the **Trigger Interval** to use `Minutes`. - Set the **Minutes Between Triggers** to 30. - To test your schedule, be sure to publish the workflow. - Be sure to connect this node to the **Customer Datastore (n8n training) node** you began with! The workflow should look like this: *Workflow for transforming dates* To check the configuration of each node, you can copy the JSON code of this workflow and either paste it into the Editor UI or save it as a file and import from file into a new workflow. See [Export and import workflows](../../../workflows/export-import/) for more information. ``` { "name": "Course 2, Ch 2, Date exercise", "nodes": [ { "parameters": {}, "id": "6bf64d5c-4b00-43cf-8439-3cbf5e5f203b", "name": "When clicking \"Execute workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 620, 280 ] }, { "parameters": { "operation": "getAllPeople", "returnAll": true }, "id": "a08a8157-99ee-4d50-8fe4-b6d7e16e858e", "name": "Customer Datastore (n8n training)", "type": "n8n-nodes-base.n8nTrainingCustomerDatastore", "typeVersion": 1, "position": [ 840, 360 ] }, { "parameters": { "operation": "roundDate", "date": "={{ $json.created }}", "mode": "roundUp", "outputFieldName": "new-date", "options": { "includeInputFields": true } }, "id": "f66a4356-2584-44b6-a4e9-1e3b5de53e71", "name": "Date & Time", "type": "n8n-nodes-base.dateTime", "typeVersion": 2, "position": [ 1080, 360 ] }, { "parameters": { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict" }, "conditions": [ { "id": "7c82823a-e603-4166-8866-493f643ba354", "leftValue": "={{ $json['new-date'] }}", "rightValue": "1960-01-01T00:00:00", "operator": { "type": "dateTime", "operation": "after" } } ], "combinator": "and" }, "options": {} }, "id": "cea39877-6183-4ea0-9400-e80523636912", "name": "If", "type": "n8n-nodes-base.if", "typeVersion": 2, "position": [ 1280, 360 ] }, { "parameters": { "amount": 1, "unit": "minutes" }, "id": 
"5aa860b7-c73c-4df0-ad63-215850166f13", "name": "Wait", "type": "n8n-nodes-base.wait", "typeVersion": 1.1, "position": [ 1480, 260 ], "webhookId": "be78732e-787d-463e-9210-2c7e8239761e" }, { "parameters": { "assignments": { "assignments": [ { "id": "e058832a-2461-4c6d-b584-043ecc036427", "name": "outputValue", "value": "={{ $json['new-date'] }}", "type": "string" } ] }, "includeOtherFields": true, "options": {} }, "id": "be034e9e-3cf1-4264-9d15-b6760ce28f91", "name": "Edit Fields", "type": "n8n-nodes-base.set", "typeVersion": 3.3, "position": [ 1700, 260 ] }, { "parameters": { "rule": { "interval": [ { "field": "minutes", "minutesInterval": 30 } ] } }, "id": "6e8e4308-d0e0-4d0d-bc29-5131b57cf061", "name": "Schedule Trigger", "type": "n8n-nodes-base.scheduleTrigger", "typeVersion": 1.1, "position": [ 620, 480 ] } ], "pinData": {}, "connections": { "When clicking \"Execute workflow\"": { "main": [ [ { "node": "Customer Datastore (n8n training)", "type": "main", "index": 0 } ] ] }, "Customer Datastore (n8n training)": { "main": [ [ { "node": "Date & Time", "type": "main", "index": 0 } ] ] }, "Date & Time": { "main": [ [ { "node": "If", "type": "main", "index": 0 } ] ] }, "If": { "main": [ [ { "node": "Wait", "type": "main", "index": 0 } ] ] }, "Wait": { "main": [ [ { "node": "Edit Fields", "type": "main", "index": 0 } ] ] }, "Schedule Trigger": { "main": [ [ { "node": "Customer Datastore (n8n training)", "type": "main", "index": 0 } ] ] } } } ``` ## Binary data Up to now, you have mainly worked with text data. But what if you want to process data that's not text, like images or PDF files? These types of files are represented in the binary numeral system, so they're considered binary data. In this form, binary data doesn't offer you useful information, so you'll need to convert it into a readable form. 
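As a rough illustration of how binary payloads travel between nodes, here is a plain Node.js sketch: file bytes are base64-encoded and carried on an item alongside `json` (the `binary` shape below is a simplified stand-in for n8n's internal format):

```javascript
// Encode some bytes to base64, as binary payloads are commonly stored.
const original = Buffer.from('Hello, n8n!', 'utf8');
const encoded = original.toString('base64');

// Simplified item shape: binary data rides next to the `json` payload.
const item = {
  json: {},
  binary: { data: { data: encoded, mimeType: 'text/plain' } },
};

// Decoding recovers the original, readable content.
const decoded = Buffer.from(item.binary.data.data, 'base64').toString('utf8');
console.log(decoded); // Hello, n8n!
```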
In n8n, you can process binary data with the following nodes: - [HTTP Request](../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) to request and send files from/to web resources and APIs. - [Read/Write Files from Disk](../../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/) to read and write files from/to the machine where n8n is running. - [Convert to File](../../../integrations/builtin/core-nodes/n8n-nodes-base.converttofile/) to take input data and output it as a file. - [Extract From File](../../../integrations/builtin/core-nodes/n8n-nodes-base.extractfromfile/) to get data from a binary format and convert it to JSON. Reading and writing files is only available on self-hosted n8n Reading and writing files to disk isn't available on n8n Cloud. You'll read and write to the machine where you installed n8n. If you run n8n in Docker, your command runs in the n8n container and not the Docker host. The Read/Write Files From Disk node looks for files relative to the n8n install path. n8n recommends using absolute file paths to prevent any errors. To read or write a binary file, you need to write the path (location) of the file in the node's `File(s) Selector` parameter (for the Read operation) or in the node's `File Path and Name` parameter (for the Write operation). Naming the right path The file path looks slightly different depending on how you are running n8n: - npm: `~/my_file.json` - n8n Cloud / Docker: `/tmp/my_file.json` ### Binary Exercise 1 For our first binary exercise, let's convert a PDF file to JSON: 1. Make an HTTP request to get this PDF file: `https://media.kaspersky.com/pdf/Kaspersky_Lab_Whitepaper_Anti_blocker.pdf`. 1. Use the **Extract From File node** to convert the file from binary to JSON. 
Show me the solution In the **HTTP Request node**, you should see the PDF file, like this: *HTTP Request node to get PDF* When you convert the PDF from binary to JSON using the **Extract From File node**, the result should look like this: *Extract From File node* To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI: ``` { "name": "Binary to JSON", "nodes": [ { "parameters": {}, "id": "78639a25-b69a-4b9c-84e0-69e045bed1a3", "name": "When clicking \"Execute Workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 480, 520 ] }, { "parameters": { "url": "https://media.kaspersky.com/pdf/Kaspersky_Lab_Whitepaper_Anti_blocker.pdf", "options": {} }, "id": "a11310df-1287-4e9a-b993-baa6bd4265a6", "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.1, "position": [ 700, 520 ] }, { "parameters": { "operation": "pdf", "options": {} }, "id": "88697b6b-fb02-4c3d-a715-750d60413e9f", "name": "Extract From File", "type": "n8n-nodes-base.extractFromFile", "typeVersion": 1, "position": [ 920, 520 ] } ], "pinData": {}, "connections": { "When clicking \"Execute Workflow\"": { "main": [ [ { "node": "HTTP Request", "type": "main", "index": 0 } ] ] }, "HTTP Request": { "main": [ [ { "node": "Extract From File", "type": "main", "index": 0 } ] ] } } } ``` ### Binary Exercise 2 For our second binary exercise, let's convert some JSON data to binary: 1. Make an HTTP request to the Poetry DB API `https://poetrydb.org/random/1`. 1. Convert the returned data from JSON to binary using the **Convert to File node**. 1. Write the new binary file data to the machine where n8n is running using the **Read/Write Files From Disk node**. 1. To check that it worked out, use the **Read/Write Files From Disk node** to read the generated binary file. 
Show me the solution The workflow for this exercise looks like this: *Workflow for moving JSON to binary data* To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI: ``` { "name": "JSON to file and Read-Write", "nodes": [ { "parameters": {}, "id": "78639a25-b69a-4b9c-84e0-69e045bed1a3", "name": "When clicking \"Execute Workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 480, 520 ] }, { "parameters": { "url": "https://poetrydb.org/random/1", "options": {} }, "id": "a11310df-1287-4e9a-b993-baa6bd4265a6", "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.1, "position": [ 680, 520 ] }, { "parameters": { "operation": "toJson", "options": {} }, "id": "06be18f6-f193-48e2-a8d9-35f4779d8324", "name": "Convert to File", "type": "n8n-nodes-base.convertToFile", "typeVersion": 1, "position": [ 880, 520 ] }, { "parameters": { "operation": "write", "fileName": "/tmp/poetrydb.json", "options": {} }, "id": "f2048e5d-fa8f-4708-b15a-d07de359f2e5", "name": "Read/Write Files from Disk", "type": "n8n-nodes-base.readWriteFile", "typeVersion": 1, "position": [ 1080, 520 ] }, { "parameters": { "fileSelector": "={{ $json.fileName }}", "options": {} }, "id": "d630906c-09d4-49f4-ba14-416c0f4de1c8", "name": "Read/Write Files from Disk1", "type": "n8n-nodes-base.readWriteFile", "typeVersion": 1, "position": [ 1280, 520 ] } ], "pinData": {}, "connections": { "When clicking \"Execute Workflow\"": { "main": [ [ { "node": "HTTP Request", "type": "main", "index": 0 } ] ] }, "HTTP Request": { "main": [ [ { "node": "Convert to File", "type": "main", "index": 0 } ] ] }, "Convert to File": { "main": [ [ { "node": "Read/Write Files from Disk", "type": "main", "index": 0 } ] ] }, "Read/Write Files from Disk": { "main": [ [ { "node": "Read/Write Files from Disk1", "type": "main", "index": 0 } ] ] } } } ``` # Merging and splitting data In this chapter, you will learn how to merge and 
split data, and in what cases it might be useful to perform these operations.

## Merging data

In some cases, you might need to merge (combine) and process data from different sources. Merging data can involve:

- Creating one data set from multiple sources.
- Synchronizing data between multiple systems. This could include removing duplicate data or updating data in one system when it changes in another.

One-way vs. two-way sync: in a one-way sync, data is synchronized in one direction. One system serves as the single source of truth. When information changes in that main system, it automatically changes in the secondary system, but if information changes in the secondary system, the changes aren't reflected in the main system. In a two-way sync, data is synchronized in both directions (between both systems). When information changes in either of the two systems, it automatically changes in the other one as well. [This blog tutorial](https://blog.n8n.io/how-to-sync-data-between-two-systems/) explains how to sync data one-way and two-way between two CRMs.

In n8n, you can merge data from two different nodes using the [Merge node](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/), which provides several merging options:

- [Append](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/#append)
- [Combine](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/#combine)
- [Merge by Fields](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/#combine-by-matching-fields): requires input fields to match on
- [Merge by Position](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/#combine-by-position)
- [Combine all possible combinations](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/#combine-by-all-possible-combinations)
- [Choose Branch](../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/#choose-branch)

Notice that Combine > Merge by Fields requires you to enter input fields to match on.
These fields should contain identical values between the data sources so n8n can properly match data together. In the **Merge node**, they're called `Input 1 Field` and `Input 2 Field`.

*Property Input fields in the Merge node*

Property Input in dot notation: if you want to reference nested values in the **Merge node** parameters `Input 1 Field` and `Input 2 Field`, you need to enter the property key in dot-notation format (as text, not as an expression).

Note: you can also find the **Merge node** under the alias Join. This might be more intuitive if you're familiar with SQL joins.

### Merge Exercise

Build a workflow that merges data from the Customer Datastore node and Code node.

1. Add a **Merge node** that takes `Input 1` from a **Customer Datastore node** and `Input 2` from a **Code node**.
1. In the **Customer Datastore node**, run the operation **Get All People**.
1. In the **Code node**, create an array of two objects with three properties: `name`, `language`, and `country`, where the property `country` has two sub-properties `code` and `name`.
    - Fill out the values of these properties with the information of two characters from the Customer Database.
    - For example, Jay Gatsby's language is English and his country name is United States.
1. In the **Merge node**, try out different merge options.
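Conceptually, Combine > Merge by Fields behaves like a join on a shared key, and dot notation lets the match field reach nested properties. A plain JavaScript sketch of both ideas (`resolvePath` and `mergeByField` are illustrative helpers, not n8n APIs, and the sample records are made up):

```javascript
// Resolve a dot-notation path like "country.code" against a nested object —
// this mirrors how a dotted Input Field key addresses nested values.
function resolvePath(obj, path) {
  return path.split('.').reduce((cur, key) => (cur == null ? cur : cur[key]), obj);
}

// Keep only items whose match fields are equal, combining matched pairs —
// roughly what Combine > Merge by Fields does with "Keep Matches".
function mergeByField(input1, input2, field1, field2) {
  return input1.flatMap((a) =>
    input2
      .filter((b) => resolvePath(a, field1) === resolvePath(b, field2))
      .map((b) => ({ ...a, ...b }))
  );
}

const datastore = [{ name: 'Jay Gatsby', email: 'gatsby@example.com' }];
const coded = [
  { name: 'Jay Gatsby', language: 'English', country: { code: 'US', name: 'United States' } },
  { name: 'José Arcadio Buendía', language: 'Spanish', country: { code: 'CO', name: 'Colombia' } },
];

console.log(mergeByField(datastore, coded, 'name', 'name'));
// one combined Jay Gatsby item carrying email, language, and country
```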
Show me the solution The workflow for this exercise looks like this: *Workflow exercise for merging data* If you merge data with the option **Keep Matches** using the name as the input fields to match, the result should look like this (note this example only contains Jay Gatsby; yours might look different depending on which characters you selected): *Output of Merge node with option to keep matches* To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI: ``` { "meta": { "templateCredsSetupCompleted": true, "instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7" }, "nodes": [ { "parameters": { "mode": "combine", "mergeByFields": { "values": [ { "field1": "name", "field2": "name" } ] }, "options": {} }, "id": "578365f3-26dd-4fa6-9858-f0a5fdfc413b", "name": "Merge", "type": "n8n-nodes-base.merge", "typeVersion": 2.1, "position": [ 720, 580 ] }, { "parameters": {}, "id": "71aa5aad-afdf-4f8a-bca0-34450eee8acc", "name": "When clicking \"Execute workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 260, 560 ] }, { "parameters": { "operation": "getAllPeople" }, "id": "497174fe-3cab-4160-8103-78b44efd038d", "name": "Customer Datastore (n8n training)", "type": "n8n-nodes-base.n8nTrainingCustomerDatastore", "typeVersion": 1, "position": [ 500, 460 ] }, { "parameters": { "jsCode": "return [\n {\n 'name': 'Jay Gatsby',\n 'language': 'English',\n 'country': {\n 'code': 'US',\n 'name': 'United States'\n }\n \n }\n \n];" }, "id": "387e8a1e-e796-4f05-8e75-7ce25c786c5f", "name": "Code", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ 500, 720 ] } ], "connections": { "When clicking \"Execute workflow\"": { "main": [ [ { "node": "Customer Datastore (n8n training)", "type": "main", "index": 0 }, { "node": "Code", "type": "main", "index": 0 } ] ] }, "Customer Datastore (n8n training)": { "main": [ [ { "node": "Merge", "type": "main", "index": 0 } ] ] }, 
"Code": { "main": [ [ { "node": "Merge", "type": "main", "index": 1 } ] ] } }, "pinData": {} } ```

## Looping

In some cases, you might need to perform the same operation on each element of an array or each data item (for example, sending a message to every contact in your address book). In technical terms, you need to iterate through the data (with loops). n8n generally handles this repetitive processing automatically, as the nodes run once for each item, so you don't need to build loops into your workflows. However, there are some [exceptions](../../../flow-logic/looping/#node-exceptions): nodes and operations that require you to build a loop into your workflow. To [create a loop in an n8n workflow](../../../flow-logic/looping/#using-loops-in-n8n), you need to connect the output of one node to the input of a previous node, and add an **If node** to check when to stop the loop.

## Splitting data in batches

If you need to process large volumes of incoming data, execute the **Code node** multiple times, or avoid API rate limits, it's best to split the data into batches (groups) and process these batches. For these processes, use the [**Loop Over Items node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/). This node splits the input data into batches of a specified size and, with each iteration, returns one batch to the nodes that follow.

Note that the **Loop Over Items node** stops executing after all the incoming items get divided into batches and passed on to the next node in the workflow, so it's not necessary to add an **If node** to stop the loop.

### Loop/Batch Exercise

Build a workflow that reads the RSS feeds from Medium and dev.to. The workflow should consist of three nodes:

1. A **Code node** that returns the URLs of the RSS feeds of Medium (`https://medium.com/feed/n8n-io`) and dev.to (`https://dev.to/feed/n8n`).
1. 
A **Loop Over Items node** with `Batch Size: 1`, which takes in the inputs from the **Code node** and **RSS Read node** and iterates over the items.
1. An **RSS Read node** that gets the URL of each RSS feed, passed as an expression: `{{ $json.url }}`.
    - The **RSS Read node** is one of the [exception nodes](../../../flow-logic/looping/#node-exceptions) that process only the first item they receive, so the **Loop Over Items node** is necessary for iterating over multiple items.

Show me the solution

1. Add a **Code node**. You can format the code in several ways; one way is:
    - Set **Mode** to `Run Once for All Items`.
    - Set **Language** to `JavaScript`.
    - Copy the code below and paste it into the JavaScript Code editor:
```
let urls = [
  {
    json: {
      url: 'https://medium.com/feed/n8n-io'
    }
  },
  {
    json: {
      url: 'https://dev.to/feed/n8n'
    }
  }
]

return urls;
```
1. Add a **Loop Over Items node** connected to the **Code node**.
    - Set **Batch Size** to `1`.
1. The **Loop Over Items node** automatically adds a node called "Replace Me". Replace that node with an **RSS Read node**.
    - Set the **URL** to use the URL from the **Code node**: `{{ $json.url }}`.
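The batch splitting that the **Loop Over Items node** performs can be sketched in plain JavaScript (`splitInBatches` here is an illustrative helper, not the node's actual implementation):

```javascript
// Split an array of items into batches of a fixed size — conceptually what
// the Loop Over Items node does, handing one batch downstream per iteration.
function splitInBatches(items, batchSize) {
  const batches = [];
  for (let i = 0; i < items.length; i += batchSize) {
    batches.push(items.slice(i, i + batchSize));
  }
  return batches;
}

const urls = ['https://medium.com/feed/n8n-io', 'https://dev.to/feed/n8n'];
console.log(splitInBatches(urls, 1));
// [['https://medium.com/feed/n8n-io'], ['https://dev.to/feed/n8n']]
```

With `Batch Size: 1`, each iteration hands exactly one feed URL to the **RSS Read node**, which is why the loop works around that node's first-item-only behavior.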
The workflow for this exercise looks like this: *Workflow for getting RSS feeds from two blogs* To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI: ``` { "meta": { "templateCredsSetupCompleted": true, "instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7" }, "nodes": [ { "parameters": {}, "id": "ed8dc090-ae8c-4db6-a93b-0fa873015c25", "name": "When clicking \"Execute workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 460, 460 ] }, { "parameters": { "jsCode": "let urls = [\n {\n json: {\n url: 'https://medium.com/feed/n8n-io'\n }\n },\n {\n json: {\n url: 'https://dev.to/feed/n8n'\n } \n }\n]\n\nreturn urls;" }, "id": "1df2a9bf-f970-4e04-b906-92dbbc9e8d3a", "name": "Code", "type": "n8n-nodes-base.code", "typeVersion": 2, "position": [ 680, 460 ] }, { "parameters": { "options": {} }, "id": "3cce249a-0eab-42e2-90e3-dbdf3684e012", "name": "Loop Over Items", "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [ 900, 460 ] }, { "parameters": { "url": "={{ $json.url }}", "options": {} }, "id": "50e1c1dc-9a5d-42d3-b7c0-accc31636aa6", "name": "RSS Read", "type": "n8n-nodes-base.rssFeedRead", "typeVersion": 1, "position": [ 1120, 460 ] } ], "connections": { "When clicking \"Execute workflow\"": { "main": [ [ { "node": "Code", "type": "main", "index": 0 } ] ] }, "Code": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] }, "Loop Over Items": { "main": [ null, [ { "node": "RSS Read", "type": "main", "index": 0 } ] ] }, "RSS Read": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] } }, "pinData": {} } ``` # Dealing with errors in workflows Sometimes you build a nice workflow, but it fails when you try to execute it. 
Workflow executions may fail for a variety of reasons, ranging from straightforward problems, such as an incorrectly configured node or a failure in a third-party service, to more mysterious errors. But don't panic. In this lesson, you'll learn how to troubleshoot errors so you can get your workflow up and running as soon as possible.

## Checking failed workflows

n8n tracks executions of your workflows. When one of your workflows fails, you can check the Executions log to see what went wrong. The Executions log shows you a list of the latest execution time, status, mode, and running time of your saved workflows. Open the Executions log by selecting [**Executions**](../../../workflows/executions/#execution-modes) in the left-side panel. To investigate a specific failed execution from the list, select the name or the **View** button that appears when you hover over the row of the respective execution.

*Executions log*

This will open the workflow in read-only mode, where you can see the execution of each node. This representation can help you identify at what point the workflow ran into issues. To toggle between viewing the execution and the editor, select the **Editor | Executions** button at the top of the page.

*Workflow execution view*

## Catching erroring workflows

To catch failed workflows, create a separate [**Error Workflow**](../../../flow-logic/error-handling/) with the [**Error Trigger node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.errortrigger/). This workflow only executes if the main workflow execution fails. Add any other nodes to your **Error Workflow** that make sense, like sending notifications about the failed workflow and its errors using email or Slack. To receive error messages for a failed workflow, set the **Error Workflow** in the [Workflow Settings](../../../workflows/settings/) to an Error Workflow that uses an **Error Trigger node**.
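The Error Trigger node outputs details about the failed execution, such as `workflow.name`, `execution.error.message`, `execution.lastNodeExecuted`, and `execution.url`, which your notification nodes can reference with expressions. A rough JavaScript sketch of assembling a message from that payload (the payload values are made up, and `buildErrorMessage` is an illustrative helper, not an n8n API):

```javascript
// Assemble a human-readable notification from an Error Trigger payload.
// The property names mirror the Error Trigger node's output shape.
function buildErrorMessage(trigger, uniqueId) {
  return (
    `The workflow ${trigger.workflow.name} failed, ` +
    `with the error message: ${trigger.execution.error.message}. ` +
    `Last node executed: ${trigger.execution.lastNodeExecuted}. ` +
    `Check this workflow execution here: ${trigger.execution.url} ` +
    `My Unique ID: ${uniqueId}`
  );
}

// Example payload with placeholder values.
const payload = {
  workflow: { name: 'Workflow 2' },
  execution: {
    error: { message: 'Request failed' },
    lastNodeExecuted: 'HTTP Request',
    url: 'https://example.n8n.cloud/execution/123',
  },
};
console.log(buildErrorMessage(payload, 'abc123'));
```

In a real Error Workflow you would express the same references as n8n expressions inside the Slack, Discord, or email node, rather than in a Code node.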
The only difference between a regular workflow and an Error Workflow is that the latter contains an **Error Trigger node**. Make sure to create this node before you set the workflow as another workflow's designated Error Workflow.

Error workflows:

- If a workflow uses the Error Trigger node, you don't have to activate the workflow.
- If a workflow contains the Error Trigger node, by default, the workflow uses itself as the error workflow.
- You can't test error workflows when running workflows manually. The Error Trigger only runs when an automatic workflow errors.
- You can set the same Error Workflow for multiple workflows.

### Exercise

In the previous chapters, you've built several small workflows. Now, pick one that you want to monitor and create an Error Workflow for it:

1. Create a new Error Workflow.
1. Add the **Error Trigger node**.
1. Connect a node for the communication platform of your choice to the Error Trigger node, like [Slack](../../../integrations/builtin/app-nodes/n8n-nodes-base.slack/), [Discord](../../../integrations/builtin/app-nodes/n8n-nodes-base.discord/), [Telegram](../../../integrations/builtin/app-nodes/n8n-nodes-base.telegram/), or even [Gmail](../../../integrations/builtin/app-nodes/n8n-nodes-base.gmail/) or the more generic [Send Email](../../../integrations/builtin/core-nodes/n8n-nodes-base.sendemail/).
1. In the workflow you want to monitor, open the [Workflow Settings](../../../workflows/settings/) and select the new Error Workflow you just created. Note that this workflow needs to run automatically to trigger the error workflow.
Show me the solution

The workflow for this exercise looks like this:

*Error workflow*

To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI:

``` { "nodes": [ { "parameters": {}, "name": "Error Trigger", "type": "n8n-nodes-base.errorTrigger", "typeVersion": 1, "position": [ 720, -380 ] }, { "parameters": { "channel": "channelname", "text": "=This workflow {{$(\"Error Trigger\").item.json[\"workflow\"][\"name\"]}} failed.\nHave a look at it here: {{$(\"Error Trigger\").item.json[\"execution\"][\"url\"]}}", "attachments": [], "otherOptions": {} }, "name": "Slack", "type": "n8n-nodes-base.slack", "position": [ 900, -380 ], "typeVersion": 1, "credentials": { "slackApi": { "id": "17", "name": "slack_credentials" } } } ], "connections": { "Error Trigger": { "main": [ [ { "node": "Slack", "type": "main", "index": 0 } ] ] } } } ```

## Throwing exceptions in workflows

Another way of troubleshooting workflows is to include a [**Stop and Error node**](../../../integrations/builtin/core-nodes/n8n-nodes-base.stopanderror/) in your workflow. This node throws an error. You can specify the error type:

- **Error Message**: returns a custom message about the error
- **Error Object**: returns the type of error

You can only use the **Stop and Error node** as the last node in a workflow.

When to throw errors: throwing exceptions with the **Stop and Error node** is useful for verifying the data (or assumptions about the data) from a node and returning custom error messages. If you are working with data from a third-party service, you may come across problems such as:

- Wrongly formatted JSON output
- Data with the wrong type (for example, numeric data that has a non-numeric value)
- Missing values
- Errors from remote servers

Though this kind of invalid data might not cause the workflow to fail right away, it could cause problems later on, and then it can become difficult to track down the source of the error.
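A Code node can make such checks explicit and fail fast with a custom message, similar in spirit to what the **Stop and Error node** does. A sketch (the order fields are illustrative):

```javascript
// Validate an assumption about third-party data and throw a descriptive
// error as soon as it's violated, instead of letting bad data flow on.
function assertValidOrder(order) {
  if (typeof order.orderPrice !== 'number' || Number.isNaN(order.orderPrice)) {
    throw new Error(`Invalid orderPrice for order ${order.orderID}: ${order.orderPrice}`);
  }
  return order;
}

assertValidOrder({ orderID: 1, orderPrice: 42 }); // passes silently

try {
  assertValidOrder({ orderID: 2, orderPrice: 'n/a' }); // wrong type
} catch (err) {
  console.log(err.message); // Invalid orderPrice for order 2: n/a
}
```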
This is why it's better to throw an error at the time you know there might be a problem.

*Stop and Error node with error message*

# Test your knowledge

Congratulations, you finished the n8n Course Level 2! You've learned a lot about workflow automation and built quite a complex business workflow. Why not showcase your skills? You can test your knowledge by taking a **quiz**, which consists of questions about the theoretical concepts and workflows covered in this course.

- You need at least 80% correct answers to pass the quiz.
- You can take the quiz as many times as you want.
- There's no time limit on answering the quiz questions.

[Take the quiz!](https://n8n-community.typeform.com/to/r9hDbytg)

## What's next?

- Create new workflows for your work or personal use and share them with us. Don't have any ideas? Find inspiration on the [workflows page](https://n8n.io/workflows) and on our [blog](https://n8n.io/blog/).
- Dive deeper into n8n's features by reading the [docs](../../../).

# Automating a business workflow

Remember [our friend Nathan](../../../level-one/chapter-3/)?

**Nathan 🙋:** Hello, it's me again. My manager was so impressed with my first workflow automation solution that she entrusted me with more responsibility.\
**You 👩‍🔧:** More work and responsibility. Congratulations, I guess. What do you need to do now?\
**Nathan 🙋:** I got access to all our sales data and I'm now responsible for creating two reports: one for regional sales and one for order prices. They're based on data from different sources and come in different formats.\
**You 👩‍🔧:** Sounds like a lot of manual work, but the kind that can be automated. Let's do it!

## Workflow design

Now that we know what Nathan wants to automate, let's list the steps he needs to take to achieve this:

1. Get and combine data from all necessary sources.
1. Sort the data and format the dates.
1. Write binary files.
1. Send notifications using email and Discord.
n8n provides [core nodes](../../../../integrations/builtin/node-types/#core-nodes) for all these steps. This use case is somewhat complex. We should build it from three separate workflows: 1. A workflow that merges the company data with external information. 1. A workflow that generates the reports. 1. A workflow that monitors errors in the second workflow. ## Workflow prerequisites To build the workflows, you will need the following: - An [Airtable](https://airtable.com/) account and [credentials](../../../../integrations/builtin/credentials/airtable/). - A [Google](https://www.google.com/account/about/) account and [credentials](../../../../integrations/builtin/credentials/google/) to access Gmail. - A [Discord](https://discord.com/) account and webhook URL (you receive this using email when you sign up for this course). Next, you will build these three workflows with step-by-step instructions. # Workflow 1: Merging data Nathan's company stores its customer data in Airtable. This data contains information about the customers' ID, country, email, and join date, but lacks data about their respective region and subregion. You need to fill in these last two fields in order to create the reports for regional sales. To accomplish this task, you first need to make a copy of this table in your Airtable account. [Click here to access.](https://airtable.com/embed/shrNX9tjPkVLABbNz?backgroundColor=orange&viewControls=on) When setting up your Airtable, ensure that the `customerSince` column is configured as a Date type field with the **Include time** option enabled. Without this setting, you may encounter errors in step 4 when updating the table. Next, build a small workflow that merges data from Airtable and a REST Countries API: 1. Use the [**Airtable node**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.airtable/) to list the data in the Airtable table named `customers`. 1. 
Use the [**HTTP Request node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) to get data from the REST Countries API: `https://restcountries.com/v3.1/all`, and send the query parameter `fields` with the value `name,region,subregion`. This will return data about world countries, split out into separate items.
1. Use the [**Merge node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/) to merge data from Airtable and the Countries API by country name, represented as `customerCountry` in Airtable and `name.common` in the Countries API, respectively.
1. Use another Airtable node to update the fields `region` and `subregion` in Airtable with the data from the Countries API.

The workflow should look like this:

*Workflow 1 for merging data from Airtable and the Countries API*

Quiz questions:

- How many items does the **HTTP Request node** return?
- How many items does the **Merge node** return?
- How many unique regions are assigned in the customers table?
- What's the subregion assigned to the customerID 10?

# Workflow 2: Generating reports

In this workflow, you will merge data from different sources, transform binary data, generate files, and send notifications about them. The final workflow should look like this:

*Workflow 2 for aggregating data and generating files*

To make things easier, let's split the workflow into three parts.

## Part 1: Getting data from different sources

The first part of the workflow consists of five nodes:

*Workflow 1: Getting data from different sources*

1. Use the [**HTTP Request node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/) to get data from the API endpoint that stores company data. Configure the following node parameters:
    - **Method**: GET
    - **URL**: The **Dataset URL** you received in the email when you signed up for this course.
- **Authentication**: Generic Credential Type
    - **Generic Auth Type**: Header Auth
    - **Credentials for Header Auth**: The Header Auth name and Header Auth value you received in the email when you signed up for this course.
    - **Send Headers**: Toggle to true
    - **Specify Headers**: Select `Using Fields Below`
    - **Name**: `unique_id`
    - **Value**: The unique ID you received in the email when you signed up for this course.
1. Use the [**Airtable node**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.airtable/) to list data from the `customers` table (where you updated the fields `region` and `subregion`).
1. Use the [**Merge node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.merge/) to merge data from the Airtable and HTTP Request nodes, based on matching the input fields for `customerID`.
1. Use the [**Sort node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.sort/) to sort data by `orderPrice` in descending order.

Quiz questions:

- What's the name of the employee assigned to customer 1?
- What's the order status of customer 2?
- What's the highest order price?

## Part 2: Generating file for regional sales

The second part of the workflow consists of four nodes:

*Workflow 2: Generating file for regional sales*

1. Use the [**If node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.if/) to keep only the orders from the region `Americas`.
1. Use the [**Convert to File node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.converttofile/) to transform the incoming data from JSON to binary format. Convert each item to a separate file. (Bonus points if you can figure out how to name each report based on the orderID!)
1. Use the [**Gmail node**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.gmail/) (or another email node) to send the files using email to an address you have access to. Note that you need to add an attachment with the data property.
1. 
Use the [**Discord node**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.discord/) to send a message in the n8n Discord channel `#course-level-two`. In the node, configure the following parameters:
    - **Webhook URL**: The Discord URL you received in the email when you signed up for this course.
    - **Text**: "I sent the file using email with the label ID `{label ID}`. My ID: " followed by the unique ID emailed to you when you registered for this course.\
Note that you need to replace the text in curly braces `{}` with [expressions](../../../../glossary/#expression-n8n) that reference the data from the nodes.

Quiz questions:

- How many orders are assigned to the `Americas` region?
- What's the total price of the orders in the `Americas` region?
- How many items does the **Convert to File node** return?

## Part 3: Generating files for total sales

The third part of the workflow consists of five nodes:

*Workflow 3: Generating files for total sales*

1. Use the [**Loop Over Items node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/) to split the data from the **Sort node** into batches of 5.
1. Use the [**Set node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.set/) to set four values, referenced with expressions from the previous node: `customerEmail`, `customerRegion`, `customerSince`, and `orderPrice`.
1. Use the [**Date & Time node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.datetime/) to change the date format of the field `customerSince` to the format MM/DD/YYYY.
    - Set the **Include Input Fields** option to keep all the data together.
1. Use the [**Convert to File node**](../../../../integrations/builtin/core-nodes/n8n-nodes-base.converttofile/) to create a CSV spreadsheet with the file name set as the expression: `{{$runIndex > 0 ? 'file_low_orders':'file_high_orders'}}`.
1. 
Use the [**Discord node**](../../../../integrations/builtin/app-nodes/n8n-nodes-base.discord/) to send a message in the n8n Discord channel `#course-level-two`. In the node, configure the following parameters: - **Webhook URL**: The Discord URL you received in the email when you signed up for this course. - **Text**: "I created the spreadsheet `{file name}`. My ID:" followed by the unique ID emailed to you when you registered for this course.\ Note that you need to replace `{file name}` with an expression that references data from the previous **Convert to File node**. Quiz questions - What's the lowest order price in the first batch of items? - What's the formatted date of customer 7? - How many items does the **Convert to File node** return? Show me the solution To check the configuration of the nodes, you can copy the JSON workflow code below and paste it into your Editor UI: ``` { "meta": { "templateCredsSetupCompleted": true, "instanceId": "cb484ba7b742928a2048bf8829668bed5b5ad9787579adea888f05980292a4a7" }, "nodes": [ { "parameters": { "sendTo": "bart@n8n.io", "subject": "Your TPS Reports", "emailType": "text", "message": "Please find your TPS report attached.", "options": { "attachmentsUi": { "attachmentsBinary": [ {} ] } } }, "id": "d889eb42-8b34-4718-b961-38c8e7839ea6", "name": "Gmail", "type": "n8n-nodes-base.gmail", "typeVersion": 2.1, "position": [ 2100, 500 ], "credentials": { "gmailOAuth2": { "id": "HFesCcFcn1NW81yu", "name": "Gmail account 7" } } }, { "parameters": {}, "id": "c0236456-40be-4f8f-a730-e56cb62b7b5c", "name": "When clicking \"Execute workflow\"", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 780, 600 ] }, { "parameters": { "url": "https://internal.users.n8n.cloud/webhook/level2-erp", "authentication": "genericCredentialType", "genericAuthType": "httpHeaderAuth", "sendHeaders": true, "headerParameters": { "parameters": [ { "name": "unique_id", "value": "recFIcD6UlSyxaVMQ" } ] }, "options": {} }, "id": 
"cc106fa0-6630-4c84-aea4-a4c7a3c149e9", "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "typeVersion": 4.1, "position": [ 1000, 500 ], "credentials": { "httpHeaderAuth": { "id": "qeHdJdqqqaTC69cm", "name": "Course L2 Credentials" } } }, { "parameters": { "operation": "search", "base": { "__rl": true, "value": "apprtKkVasbQDbFa1", "mode": "list", "cachedResultName": "All your base", "cachedResultUrl": "https://airtable.com/apprtKkVasbQDbFa1" }, "table": { "__rl": true, "value": "tblInZ7jeNdlUOvxZ", "mode": "list", "cachedResultName": "Course L2, Workflow 1", "cachedResultUrl": "https://airtable.com/apprtKkVasbQDbFa1/tblInZ7jeNdlUOvxZ" }, "options": {} }, "id": "e5ae1927-b531-401c-9cb2-ecf1f2836ba6", "name": "Airtable", "type": "n8n-nodes-base.airtable", "typeVersion": 2, "position": [ 1000, 700 ], "credentials": { "airtableTokenApi": { "id": "MIplo6lY3AEsdf7L", "name": "Airtable Personal Access Token account 4" } } }, { "parameters": { "mode": "combine", "mergeByFields": { "values": [ { "field1": "customerID", "field2": "customerID" } ] }, "options": {} }, "id": "1cddc984-7fca-45e0-83b8-0c502cb4c78c", "name": "Merge", "type": "n8n-nodes-base.merge", "typeVersion": 2.1, "position": [ 1220, 600 ] }, { "parameters": { "sortFieldsUi": { "sortField": [ { "fieldName": "orderPrice", "order": "descending" } ] }, "options": {} }, "id": "2f55af2e-f69b-4f61-a9e5-c7eefaad93ba", "name": "Sort", "type": "n8n-nodes-base.sort", "typeVersion": 1, "position": [ 1440, 600 ] }, { "parameters": { "conditions": { "options": { "caseSensitive": true, "leftValue": "", "typeValidation": "strict" }, "conditions": [ { "id": "d3afe65c-7c80-4caa-9d1c-33c62fbc2197", "leftValue": "={{ $json.region }}", "rightValue": "Americas", "operator": { "type": "string", "operation": "equals", "name": "filter.operator.equals" } } ], "combinator": "and" }, "options": {} }, "id": "2ed874a9-5bcf-4cc9-9b52-ea503a562892", "name": "If", "type": "n8n-nodes-base.if", "typeVersion": 2, "position": [ 
1660, 500 ] }, { "parameters": { "operation": "toJson", "mode": "each", "options": { "fileName": "=report_orderID_{{ $('If').item.json.orderID }}.json" } }, "id": "d93b4429-2200-4a84-8505-16266fedfccd", "name": "Convert to File", "type": "n8n-nodes-base.convertToFile", "typeVersion": 1.1, "position": [ 1880, 500 ] }, { "parameters": { "authentication": "webhook", "content": "I sent the file using email with the label ID and wrote the binary file {file name}. My ID: 123", "options": {} }, "id": "26f43f2c-1422-40de-9f40-dd2d80926b1c", "name": "Discord", "type": "n8n-nodes-base.discord", "typeVersion": 2, "position": [ 2320, 500 ], "credentials": { "discordWebhookApi": { "id": "WEBrtPdoLrhlDYKr", "name": "L2 Course Discord Webhook account" } } }, { "parameters": { "batchSize": 5, "options": {} }, "id": "0fa1fbf6-fe77-4044-a445-c49a1db37dec", "name": "Loop Over Items", "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [ 1660, 700 ] }, { "parameters": { "assignments": { "assignments": [ { "id": "ce839b80-c50d-48f5-9a24-bb2df6fdd2ff", "name": "customerEmail", "value": "={{ $json.customerEmail }}", "type": "string" }, { "id": "0c613366-3808-45a2-89cc-b34c7b9f3fb7", "name": "region", "value": "={{ $json.region }}", "type": "string" }, { "id": "0f19a88c-deb0-4119-8965-06ed62a840b2", "name": "customerSince", "value": "={{ $json.customerSince }}", "type": "string" }, { "id": "a7e890d6-86af-4839-b5df-d2a4efe923f7", "name": "orderPrice", "value": "={{ $json.orderPrice }}", "type": "number" } ] }, "options": {} }, "id": "09b8584c-4ead-4007-a6cd-edaa4669a757", "name": "Edit Fields", "type": "n8n-nodes-base.set", "typeVersion": 3.3, "position": [ 1880, 700 ] }, { "parameters": { "operation": "formatDate", "date": "={{ $json.customerSince }}", "options": { "includeInputFields": true } }, "id": "c96fae90-e080-48dd-9bff-3e4506aafb86", "name": "Date & Time", "type": "n8n-nodes-base.dateTime", "typeVersion": 2, "position": [ 2100, 700 ] }, { "parameters": { 
"options": { "fileName": "={{$runIndex > 0 ? 'file_low_orders':'file_high_orders'}}" } }, "id": "43dc8634-2f16-442b-a754-89f47c51c591", "name": "Convert to File1", "type": "n8n-nodes-base.convertToFile", "typeVersion": 1.1, "position": [ 2320, 700 ] }, { "parameters": { "authentication": "webhook", "content": "I created the spreadsheet {file name}. My ID: 123", "options": {} }, "id": "05da1c22-d1f6-4ea6-9102-f74f9ae2e9d3", "name": "Discord1", "type": "n8n-nodes-base.discord", "typeVersion": 2, "position": [ 2540, 700 ], "credentials": { "discordWebhookApi": { "id": "WEBrtPdoLrhlDYKr", "name": "L2 Course Discord Webhook account" } } } ], "connections": { "Gmail": { "main": [ [ { "node": "Discord", "type": "main", "index": 0 } ] ] }, "When clicking \"Execute workflow\"": { "main": [ [ { "node": "HTTP Request", "type": "main", "index": 0 }, { "node": "Airtable", "type": "main", "index": 0 } ] ] }, "HTTP Request": { "main": [ [ { "node": "Merge", "type": "main", "index": 0 } ] ] }, "Airtable": { "main": [ [ { "node": "Merge", "type": "main", "index": 1 } ] ] }, "Merge": { "main": [ [ { "node": "Sort", "type": "main", "index": 0 } ] ] }, "Sort": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 }, { "node": "If", "type": "main", "index": 0 } ] ] }, "If": { "main": [ [ { "node": "Convert to File", "type": "main", "index": 0 } ] ] }, "Convert to File": { "main": [ [ { "node": "Gmail", "type": "main", "index": 0 } ] ] }, "Loop Over Items": { "main": [ null, [ { "node": "Edit Fields", "type": "main", "index": 0 } ] ] }, "Edit Fields": { "main": [ [ { "node": "Date & Time", "type": "main", "index": 0 } ] ] }, "Date & Time": { "main": [ [ { "node": "Convert to File1", "type": "main", "index": 0 } ] ] }, "Convert to File1": { "main": [ [ { "node": "Discord1", "type": "main", "index": 0 } ] ] }, "Discord1": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] } }, "pinData": {} } ``` # Workflow 3: Monitoring workflow errors Last but 
not least, let's help Nathan know if there are any errors running the workflow. To accomplish this task, create an Error workflow that monitors the main workflow: 1. Create a new workflow. 1. Add an **Error Trigger node** (and execute it as a test). 1. Connect a **Discord node** to the **Error Trigger node** and configure these fields: - **Webhook URL**: The Discord URL that you received in the email from n8n when you signed up for this course. - **Text**: "The workflow `{workflow name}` failed, with the error message: `{execution error message}`. Last node executed: `{name of the last executed node}`. Check this workflow execution here: `{execution URL}` My Unique ID: " followed by the unique ID emailed to you when you registered for this course. Note that you need to replace the text in curly brackets `{}` with expressions that take the respective information from the Error Trigger node. 1. Execute the Discord node. 1. Set the newly created workflow as the **Error Workflow** for the main workflow you created in the previous lesson. The workflow should look like this: *Workflow 3 for monitoring workflow errors* Quiz questions - What fields does the **Error Trigger node** return? - What information about the execution does the **Error Trigger node** return? - What information about the workflow does the **Error Trigger node** return? - What's the expression to reference the workflow name? # Credentials [Credentials](../glossary/#credential-n8n) are private pieces of information issued by apps and services to authenticate you as a user and allow you to connect and share information between the app or service and the n8n node. Access the credentials UI by opening the left menu and selecting **Credentials**. n8n lists credentials you created on the **My credentials** tab. The **All credentials** tab shows all credentials you can use, including credentials shared with you by other users. - [Create and edit credentials](add-edit-credentials/).
- Learn about [credential sharing](credential-sharing/). - Find information on setting up credentials for your services in the [credentials library](../integrations/builtin/credentials/). # Create and edit credentials Credentials are securely stored authentication information used to connect n8n workflows to external services such as APIs or databases. ## Create a credential 1. Select the **Create** button in the upper-left corner of the side menu, then select **Credential**. 1. If your n8n instance supports [projects](../../glossary/#project-n8n), you'll also need to choose whether to create the credential inside your personal space or a specific project you have access to. If you're using the community version, you'll create the credential inside your personal space. 1. Select the app or service you wish to connect to. Or: 1. Select the **Create** button in the upper-right corner of either the **Overview** page or a specific project, then select **Credential**. 1. If you're doing this from the **Overview** page, you'll create the credential inside your personal space. If you're doing this from inside a project, you'll create the credential inside that specific project. 1. Select the app or service you wish to connect to. You can also create a new credential from the credential dropdown when editing a node in the workflow editor. Once in the credential modal, enter the details required by your service. Refer to your service's page in the [credentials library](../../integrations/builtin/credentials/) for guidance. When you save a credential, n8n tests it to confirm it works. Credentials naming n8n names new credentials "*node name* account" by default. You can rename a credential by clicking on its name, similarly to renaming nodes. It's good practice to give credentials names that identify the app or service, type, and purpose of the credential. A naming convention makes it easier to keep track of and identify your credentials.
## Allowed HTTP request domains The **Allowed HTTP Request Domains** field appears on many n8n credentials for web-based APIs and services. It controls which domains the credential is permitted to be used against when the credential is selected in an **HTTP Request** node. It has no effect when the credential is used in its own dedicated node. The field has three options: - **All**: The credential can be used against any URL. - **Specific Domains**: Restrict to specific domains (provide a comma-separated list like `httpbin.org, api.github.com`) - **None**: The credential is blocked entirely from use in the **HTTP Request** node. This field prevents credential misuse, for example sending the credential to URLs outside the intended domain. ## Expressions in credentials You can use [expressions](../../glossary/#expression-n8n) to set credentials dynamically as your workflow runs: 1. In your workflow, find the data path containing the credential. This varies depending on the exact parameter names in your data. Make sure that the data containing the credential is available in the workflow when you get to the node that needs it. 1. When creating your credential, hover over the field where you want to use an expression. 1. Toggle **Expression** on. 1. Enter your expression. ### Example workflow [View workflow file](/_workflows/credentials/dynamic_credentials_using_expressions.json) #### Using the example To load the template into your n8n instance: 1. Download the workflow JSON file. 1. Open a new workflow in your n8n instance. 1. Copy in the JSON, or select **Workflow menu** > **Import from file...**. The example workflows use Sticky Notes to guide you: - Yellow: notes and information. - Green: instructions to run the workflow. - Orange: you need to change something to make the workflow work. - Blue: draws attention to a key feature of the example. # Credential sharing Feature availability Available on all Cloud plans, and Business and Enterprise self-hosted plans. 
You can share a credential directly with other users to use in their own workflows, or share a credential in a [project](../../glossary/#project-n8n) for all members of that project to use. Users of a shared credential can't view or edit the credential details. Users can share credentials they created and own. Only project admins can share credentials created in and owned by a project. Instance owners and instance admins can view and share all credentials on an instance. Refer to [Account types](../../user-management/account-types/) for more information about owners and admins. In [projects](../../user-management/rbac/), a user's role controls how they can interact with the workflows and credentials associated with the projects they're a member of. ## Share a credential To share a credential: 1. From the left menu, select either **Overview** or a project. 1. Select **Credentials** to see a list of your credentials. 1. Select the credential you want to share. 1. Select **Sharing**. 1. In the **Share with projects or users** dropdown, browse or search for the user or project with which you want to share your credentials. 1. Select a user or project. 1. Select **Save** to apply the changes. ## Remove access to a credential To unshare a credential: 1. From the left menu, select either **Overview** or a project. 1. Select **Credentials** to see a list of your credentials. 1. Select the credential you want to unshare. 1. Select **Sharing**. 1. Select the **trash icon** next to the user or project you want to remove from the list of shared users and projects. 1. Select **Save** to apply the changes. # Overview In n8n, data flows through your workflow from node to node. Each node receives data, processes it, and passes the results to the next node. Understanding how data moves and transforms in your workflows is essential for building effective workflows.
## How data works in n8n **Data flows through nodes**: When you connect nodes, data automatically passes from one to the next. Each node processes the incoming data and outputs results based on its configuration. **View data at every stage**: You can inspect data at any point in your workflow: - **Node details view**: Double-click any node to see its input and output data. Choose between **Schema**, **Table**, and **JSON** views. Schema view shows a simplified structure from the first item only, while Table and JSON display the full dataset. - **Execution logs**: Review past workflow runs to see the data that passed through each node. **Reference previous data**: Use [data mapping](data-mapping/) to reference data from earlier nodes in your workflow. You can: - Select values from previous nodes using the UI - Write [expressions](expressions/) to dynamically access and combine data - Reference specific nodes by name to get their output **Transform data**: n8n provides multiple ways to modify data: - Use dedicated transformation nodes (Aggregate, Split Out, Sort, and more) - Write [expressions](expressions-for-transformation/) directly in node parameters - Use the [Code node](expressions/#code-node) for custom JavaScript or Python logic - Apply the [AI Transform node](expressions/#ai-transform-node) for AI-assisted transformations **Understand the data structure**: n8n uses a [consistent data structure](data-structure/) across all nodes, making it predictable how data flows and transforms throughout your workflows. ## In this section - [How n8n structures data](data-structure/) - [Transforming data](transforming-data/) - [Processing data using code](expressions/#code-node) - [Pinning, mocking, and editing data](data-pinning/) during workflow development - [Referencing data](data-mapping/) and [item linking](data-mapping/data-item-linking/): how data items link to each other # Filtering data Filtering in n8n can mean different things depending on what you want to accomplish.
This guide covers both visual filtering in the UI and data filtering during workflow execution. ## Filter data visually in the UI Feature availability Available on Community, Cloud Pro, and Enterprise plans. Search and filter data in the node **INPUT** and **OUTPUT** panels. Use this to check your node's data and find specific items. To search: 1. In a node, select **Search** in the **INPUT** or **OUTPUT** panel. 1. Enter your search term. n8n filters as you type, displaying the objects or rows containing the term. Filtering is purely visual: n8n doesn't change or delete data. The filter resets when you close and reopen the node. ## Filter data during workflow execution To actually remove or filter data in your workflow, use these approaches: ### Filter out items To remove entire items from your workflow based on conditions, use the [Filter node](../../integrations/builtin/core-nodes/n8n-nodes-base.filter/). This node evaluates conditions and only passes through items that meet your criteria. ### Filter out fields To remove specific fields from an item or object while keeping the item itself, use the [Edit Fields (Set) node](../../integrations/builtin/core-nodes/n8n-nodes-base.set/). Configure it to remove the fields you don't need. ### Filter array elements To filter elements within an array inside an item, use the `.filter()` method in an expression or Code node. For example: ``` {{ $json.myArray.filter(item => item.value > 10) }} ``` This removes array elements that don't match your condition while preserving the item structure. ### Filter out duplicate items from previous executions To remove items that have been seen in previous executions of a workflow, use the [Remove Duplicates](../../integrations/builtin/core-nodes/n8n-nodes-base.removeduplicates/) node. Use this when an event fires multiple times but you only want to process the first occurrence. 
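To make the array-filtering case above concrete, the expression's behavior can be sketched outside n8n as plain JavaScript: only the array inside the item shrinks, while the item itself continues through the workflow. The item shape below is invented for illustration.

```javascript
// One n8n item whose `json` payload contains an array field (illustrative data).
const item = {
  json: {
    customer: "Acme",
    myArray: [{ value: 5 }, { value: 12 }, { value: 42 }],
  },
};

// What the expression {{ $json.myArray.filter(item => item.value > 10) }}
// evaluates to for this item: elements below the threshold are dropped,
// but the item itself still flows to the next node.
const filtered = item.json.myArray.filter((el) => el.value > 10);

console.log(filtered); // [ { value: 12 }, { value: 42 } ]
```

Contrast this with the Filter node, which would drop the whole item when its condition fails.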
# Pinning and mocking data When developing workflows, you might want to test your logic without repeatedly calling external systems or working with live data. n8n provides two related features to help with this: - **Data mocking**: Create or simulate test data without connecting to real data sources - **Data pinning**: Save test data (mocked or real) and reuse it in future workflow executions instead of fetching fresh data Both approaches save time and resources during development, help you work with consistent datasets, and protect live systems from repeated test calls. For development only Data pinning and mocking are features to help test workflows during development. Data pinning isn't available for production workflow executions. ## Data mocking approaches Create test data to work with during development. You can create mock data in several ways: ### Generate custom data using the Code or Edit Fields nodes You can create a custom dataset in your workflow using either the [Code node](../../integrations/builtin/core-nodes/n8n-nodes-base.code/) or the [Edit Fields (Set) node](../../integrations/builtin/core-nodes/n8n-nodes-base.set/). In the Code node, you can create any data set you want, and return it as the node output. In the Edit Fields node, select **Add fields** to add your custom data. The Edit Fields node is a good choice for small tests. To create more complex datasets, use the Code node. **Use this approach when**: You need complete control over your test data structure and values, or when you want to test edge cases with specific data patterns. ### Output a sample data set from the Customer Datastore node The Customer Datastore node provides a fake dataset to work with. Add and execute the node to explore the data. **Use this approach when**: You need some test data when exploring n8n, and you don't have a real use-case to work with. 
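As a minimal sketch of the Code node approach above, a Code node (in "Run Once for All Items" mode) can return a small hand-built dataset in n8n's item format. The field names here are invented for illustration, not part of any real dataset.

```javascript
// Sketch of a Code node body that builds mock data in n8n's item format.
// All field names (customerID, region, orderPrice, ...) are hypothetical.
function codeNodeBody() {
  const regions = ["Americas", "Europe"];
  const items = [];
  for (let i = 1; i <= 4; i++) {
    items.push({
      json: {
        customerID: i,
        customerEmail: `customer${i}@example.com`,
        region: regions[i % regions.length],
        orderPrice: i * 25,
      },
    });
  }
  // n8n expects the Code node to return an array of { json: ... } items.
  return items;
}

const mock = codeNodeBody();
console.log(mock.length); // 4
```

In an actual Code node you would end the snippet with `return items;` directly; the wrapper function here just makes the sketch self-contained.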
Once you've created or obtained test data you want to reuse across multiple workflow executions, use [Data pinning](#data-pinning) to save it for consistent testing. ## Data pinning You can 'pin' data during workflow development. Data pinning means saving the output data of a node and using the saved data instead of fetching fresh data in future workflow executions. You can use this when working with data from external sources to avoid having to repeat requests to the external system. This can save time and resources: - If your workflow relies on an external system to trigger it, such as a webhook call, being able to pin data means you don't need to use the external system every time you test the workflow. - If the external resource has data or usage limits, pinning data during tests avoids consuming your resource limits. - You can fetch and pin the data you want to test, then have confidence that the data is consistent in all your workflow tests. - You can mock test data (using the approaches above), then pin it for reuse across executions. You can only pin data for nodes that have a single main output ("error" outputs don't count for this purpose). ### Pin data To pin data in a node: 1. Run the node to load data. 1. In the **OUTPUT** view, select **Pin data**. When data pinning is active, the button is disabled and a "This data is pinned" banner is displayed in the **OUTPUT** view. Nodes that output binary data You can't pin data if the output data includes binary data. ### Unpin data When data pinning is active, a banner appears at the top of the node's output panel indicating that n8n has pinned the data. To unpin data and fetch fresh data on the next execution, select the **Unpin** link in the banner. ### Edit pinned data n8n allows you to edit pinned data. This means you can check different scenarios without setting up each scenario and sending the relevant data from your external system. It makes it easier to test edge cases.
For development only Data editing isn't available for production workflow executions. It's a feature to help test workflows during development. #### Edit output data To edit output data: 1. Run the node to load data. 1. In the **OUTPUT** view, select **JSON** to switch to JSON view. 1. Select **Edit**. 1. Edit your data. 1. Select **Save**. n8n saves your data changes and pins your data. #### Use data from previous executions You can copy data from nodes in previous workflow executions: 1. Open the left menu. 1. Select **Executions**. 1. Browse the workflow executions list to find the one with the data you want to copy. 1. Select **Open Past Execution**. 1. Double-click the node whose data you want to copy. 1. If it's in table layout, select **JSON** to switch to JSON view. 1. There are two ways to copy the JSON: 1. Select the JSON you want by highlighting it, like selecting text. Then use `ctrl` + `c` to copy it. 1. Select the JSON you want to copy by clicking on a parameter. Then: 1. Hover over the JSON. n8n displays the **Copy** button. 1. Select **Copy**. 1. You can choose what to copy: - **Copy Item Path** and **Copy Parameter Path** give you expressions that access parts of the JSON. - **Copy Value**: copies the entire selected JSON. 1. Return to the workflow you're working on: 1. Open the left menu. 1. Select **Workflows**. 1. Select **Open**. 1. Select the workflow you want to open. 1. Open the node where you want to use the copied data. 1. If there is no data, run the node to load data. 1. In the **OUTPUT** view, select **JSON** to switch to JSON view. 1. Select **Edit**. 1. Paste in the data from the previous execution. 1. Select **Save**. n8n saves your data changes and pins your data. ### Combine mocking with pinning For the most realistic testing experience, you can combine mocking and pinning approaches: 1. Create test data using one of the mocking approaches (Code node, Edit Fields node, or Customer Datastore) 1.
Edit the test data to create specific test scenarios or edge cases 1. Pin the edited data for reuse across multiple workflow executions 1. Continue developing with this edited, pinned dataset This approach gives you complete control over your test data while ensuring consistent testing across multiple runs. # How n8n structures data Understanding how n8n structures and passes data between nodes is fundamental to building workflows. This guide covers both the data structure format and how data flows through your workflow. ## Data structure In n8n, all data passed between nodes is an array of objects. It has the following structure: ``` [ { // For most data: // Wrap each item in another object, with the key 'json' "json": { // Example data "apple": "beets", "carrot": { "dill": 1 } }, // For binary data: // Wrap each item in another object, with the key 'binary' "binary": { // Example data "apple-picture": { "data": "....", // Base64 encoded binary data (required) "mimeType": "image/png", // Best practice to set if possible (optional) "fileExtension": "png", // Best practice to set if possible (optional) "fileName": "example.png", // Best practice to set if possible (optional) } } }, ] ``` Skipping the `json` key and array syntax From 0.166.0 on, when using the Function node or Code node, n8n automatically adds the `json` key if it's missing. It also automatically wraps your items in an array (`[]`) if needed. This is only the case when using the Function or Code nodes. When building your own nodes, you must still make sure the node returns data with the `json` key. ## How data flows within nodes When you connect nodes in a workflow, data automatically passes from one node to the next. Nodes process multiple items automatically. When a node receives an array of data items, it processes each item individually and performs the configured operation for each one. 
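This per-item behavior can be sketched as a map over the incoming items. The `createCard` helper below is hypothetical and stands in for whatever operation the node is configured to perform:

```javascript
// Sketch: a node receives an array of { json: ... } items and performs
// its configured operation once per item. `createCard` is a hypothetical
// stand-in for the node's operation.
function runNode(items, createCard) {
  return items.map((item) => ({ json: createCard(item.json) }));
}

const input = [
  { json: { "name-input-value": "test1" } },
  { json: { "name-input-value": "test2" } },
];

// An expression on a parameter reads the field from the *current* item.
const output = runNode(input, (json) => ({ cardName: json["name-input-value"] }));

console.log(output.map((i) => i.json.cardName)); // [ 'test1', 'test2' ]
```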
For example, if you set the Trello node to `Create-Card`, and create an expression that sets `Name` using a property called `name-input-value` from the incoming data, the node creates a card for each item, always choosing the `name-input-value` of the current item. For example, this input will create two cards: one named `test1`, the other named `test2`: ``` [ { "name-input-value": "test1" }, { "name-input-value": "test2" } ] ``` ## Understand what you're mapping with drag and drop Data mapping maps the field path, and loads the field's value. For example, given the following data: ``` [ { "fruit": "apples", "color": "green" } ] ``` You can map `fruit` by dragging and dropping **fruit** from the **INPUT** panel into the field where you want to use its value. This creates an expression, `{{ $json.fruit }}`. As the node iterates over the input items, the expression resolves to each item's `fruit` value. ## Understand nested data Given the following data: ``` [ { "name": "First item", "nested": { "example-number-field": 1, "example-string-field": "apples" } }, { "name": "Second item", "nested": { "example-number-field": 2, "example-string-field": "oranges" } } ] ``` n8n displays it in table form with one row per item, grouping the nested fields under their parent `nested` column. # Data tables ## Overview Data tables integrate data storage within your n8n environment.
Using data tables, you can save, manage, and interact with data directly in your workflows without relying on external database systems for scenarios such as: - Persisting data across workflows in the same project - Storing markers to prevent duplicate runs or control workflow triggers - Reusing prompts or messages across workflows - Storing evaluation data for AI workflows - Storing data generated from workflow executions - Combining data from different sources to enrich your datasets - Creating lookup tables as quick reference points within workflows ## Working with data tables You can create, filter, and manage data tables and their data in three ways: using the **Data Table node**, the **DataTable API endpoint** , or the **Data tables tab**. ### Data Table node Use data tables inside workflows to store and manage data, enabling automated creation, retrieval, updates, and deletions as your workflow runs. See the [Data Table node](../../integrations/builtin/core-nodes/n8n-nodes-base.datatable/) for full documentation. ### DataTable API endpoint Work with data tables programmatically using the `/datatables` endpoint in the n8n API. See the [API reference](https://docs.n8n.io/api/api-reference/#tag/datatable) for full documentation. ### Data table tab View and work with data tables directly in the UI through a visual interface. This lets you browse and edit data, and manage tables without building a workflow. 1. In your n8n project, select the **Data tables** tab. 1. Click the split button located in the top right corner and select **Create Data table**. 1. Enter a descriptive name for your table. 1. Select how to create the table: - **From scratch**: Create a new table by manually defining columns and adding rows using the visual interface. - **Import CSV**: Upload a CSV file to automatically create the table structure and populate it with data from the file. 
In the table view that appears, you can: - Rename or delete the data table or its columns - Add and reorder columns to organize your data - Add, delete, and update rows - Edit existing data ## Exporting and importing data From the **Data tables** tab, you can: - Import CSV data directly into a data table, as described in the [previous section](#data-table-tab) - Download a CSV of your data table. Click the three dot menu in the top left and select **Download CSV**. ## Considerations and limitations of data tables - Data tables are suitable for light to moderate data storage. By default, the total storage used by all data tables in an instance is limited to 50MB. In self-hosted environments, you can increase this default size limit using the environment variable `N8N_DATA_TABLES_MAX_SIZE_BYTES`. - When your data tables approach 80% of your storage limit, n8n displays a warning. A final warning appears when you reach the storage limit. Exceeding this limit will disable manual additions to tables and cause workflow execution errors during attempts to insert or update data. - By default, data tables created within a project are accessible to all team members in that project. - Tables created in a **Personal** space are only accessible by their creator. - Direct programmatic access to data tables from a Code node isn't supported. You can't access data table values via built-in methods or variables. ## Data tables versus variables | Feature | Data tables | Variables | | -------------------------- | ----------- | --------- | | Unified tabular view | ✓ | ✗ | | Row-column relationships | ✓ | ✗ | | Cross-project access | ✗ | ✓ | | Individual value display | ✗ | ✓ | | Optimized for short values | ✗ | ✓ | | Structured data | ✓ | ✗ | | Scoped to projects | ✓ | ✗ | | Use values as expressions | ✗ | ✓ | # Expressions for data transformation You can use expression transformation functions anywhere expressions are supported in n8n. 
However, if your main goal is to transform data using expressions without performing any other operations, use the **Edit Fields (Set)** node. This node is designed specifically for data transformation, providing a clean interface to: - Add new fields with expression-based values - Modify existing field values using transformation functions - Remove or rename fields This keeps your workflow organized by separating data transformation from business logic, making it easier to understand and maintain. **Best practice**: Instead of adding complex expressions to multiple parameters across different nodes, use Edit Fields to prepare your data first, then pass the transformed data to subsequent nodes. See [Expression reference](../expression-reference/) for more information and examples. ### Example: Get data from webhook body Consider the following scenario: you have a webhook trigger that receives data through the webhook body. You want to extract some of that data for use in the workflow. Your webhook data looks similar to this: ``` [ { "headers": { "host": "n8n.instance.address", ... }, "params": {}, "query": {}, "body": { "name": "Jim", "age": 30, "city": "New York" } } ] ``` In the next node in the workflow, you want to get just the value of `city`. You can use the following expression: ``` {{$json.body.city}} ``` This expression: 1. Accesses the incoming JSON-formatted data using n8n's custom `$json` variable. 1. Finds the value of `city` (in this example, "New York"). Note that this example uses JavaScript dot notation to access the nested value. You can also write this expression using bracket notation: `{{$json['body']['city']}}`. ### Using expressions in credentials You can also use expressions in credential fields. When you reference data using expressions (for example, `{{$json.body.city}}` or `{{ $('Webhook').item.json.headers.authorization }}`), n8n evaluates the expression within the context of the current workflow execution.
This means that: - Expressions in credentials can access data available in the current execution context, including data from previous nodes. - Each workflow execution has its own data context. - Expressions are evaluated per execution, so different executions don't share data. For example, if a webhook node receives an access token and you reference it in a credential field using an expression, the value is resolved using the execution data of that specific workflow run. ## Example: Writing longer JavaScript as expressions You can do things like variable assignments or multiple statements in an expression, but you need to wrap your code using the syntax for an Immediately Invoked Function Expression (IIFE). The following code uses the Luxon date and time library to find the time between two dates in months. The code is wrapped in both the curly brackets of an expression and the IIFE syntax. ``` {{(()=>{ let end = DateTime.fromISO('2017-03-13'); let start = DateTime.fromISO('2017-02-13'); let diffInMonths = end.diff(start, 'months'); return diffInMonths.toObject(); })()}} ``` ## Common issues Here are some common errors and issues related to [expressions](../expressions/) and steps to resolve or troubleshoot them. ### The 'JSON Output' in item 0 contains invalid JSON This error occurs when you use JSON mode but don't provide a valid JSON object. Depending on the problem with the JSON object, the error sometimes displays as `The 'JSON Output' in item 0 does not contain a valid JSON object`. To resolve this, make sure that the code you provide is valid JSON: - Check the JSON with a [JSON validator](https://jsonlint.com/). - Check that your JSON object doesn't reference undefined input data. This may occur if the incoming data doesn't always include the same fields. ### Can't get data for expression This error occurs when n8n can't retrieve the data referenced by an expression. Often, this happens when the preceding node hasn't been run yet.
Another variation of this may appear as `Referenced node is unexecuted`. In that case, the full text of this error will tell you the exact node that isn't executing in this format: > An expression references the node '<node name>', but it hasn't been executed yet. Either change the expression, or re-wire your workflow to make sure that node executes first. To begin troubleshooting, test the workflow up to the named node. For nodes that use JavaScript or other custom code, you can check if a previous node has executed before trying to use its value by checking the following: ``` $("<node name>").isExecuted ``` As an example, this JSON references the parameters of the input data. This error will display if you test this step without connecting it to another node: ``` { "my_field_1": {{ $input.params }} } ``` ### Invalid syntax This error occurs when you use an expression that has a syntax error. For example, the expression in this JSON includes a trailing period, which results in an invalid syntax error: ``` { "my_field_1": "value", "my_field_2": {{ $('If').item.json. }} } ``` To resolve this error, check your [expression syntax](../expressions/) to make sure it follows the expected format. # Expressions versus data nodes n8n provides multiple ways to work with and transform data. Understanding when to use each approach helps you build efficient workflows. | Approach | Use when you need to...
| Examples | Available on | | ------------------------------- | -------------------------------------------------------- | ------------------------------------------------------------ | --------------------- | | Expressions | Set a single parameter value using existing data | Pull `{{$json.city}}`, format dates, simple math | Cloud and Self-hosted | | Code node | Write full JavaScript/Python for complex transformations | Restructure data, loop through items, use external libraries | Cloud and Self-hosted | | AI Transform node | Generate transformation code from natural language | `Group by user and sum totals`, `categorize by sentiment` | Cloud only | | Other data transformation nodes | Perform common operations with a visual interface | Aggregate items, split arrays, sort data, remove duplicates | Cloud and Self-hosted | ### Expressions Expressions are small pieces of JavaScript-like code you put directly into node parameters using n8n's `{{ ... }}` syntax. They can dynamically set parameter values by using data from previous nodes, workflow metadata, or environment variables. Use expressions when you can Expressions have the advantage of providing an immediate preview of the computed values, so use expressions where you can. **When to use expressions:** - To pull a value from previous node data. For example, `{{$json.body.city}}`. - To perform light transformations or calculations directly in a field. - To avoid adding extra nodes and to keep logic close to the parameter that you are setting. ### Code node The [Code node](../../integrations/builtin/core-nodes/n8n-nodes-base.code/) is a dedicated node where you write JavaScript or Python that runs as a workflow step. It gives you access to incoming data from previous nodes, which you can manipulate by adding, removing, or updating items. You can create any custom function you need and use n8n's built‑in methods and variables through `$` syntax. 
**When to use the Code node:** - You need more complex logic or data transformation than an expression can provide, such as restructuring arrays and objects, aggregating or splitting items, and custom algorithms. - You want to transform many items at once. - You want to use promises, `console.log`, or, in the case of self‑hosted setups, external npm modules. ### AI Transform node This node generates code snippets based on a short natural‑language prompt. It's context‑aware and understands your workflow's nodes and data types. The generated code is read‑only in the node; you can copy it into a Code node to edit. **When to use the AI Transform node:** - You know what transformation you want but don't want to hand‑write the code. - You want AI to draft the transformation logic and then run it directly in the node, or copy it into a Code node for further customization. ### Other data transformation nodes n8n provides a collection of nodes to transform data: - [Aggregate](../../integrations/builtin/core-nodes/n8n-nodes-base.aggregate/): take separate items, or portions of them, and group them together into individual items. - [Limit](../../integrations/builtin/core-nodes/n8n-nodes-base.limit/): remove items beyond a defined maximum number. - [Remove Duplicates](../../integrations/builtin/core-nodes/n8n-nodes-base.removeduplicates/): identify and delete items that are identical across all fields or a subset of fields. - [Sort](../../integrations/builtin/core-nodes/n8n-nodes-base.sort/): organize lists in a desired ordering, or generate a random selection. - [Split Out](../../integrations/builtin/core-nodes/n8n-nodes-base.splitout/): separate a single data item containing a list into multiple items. - [Summarize](../../integrations/builtin/core-nodes/n8n-nodes-base.summarize/): aggregate items together, in a manner similar to Excel pivot tables. **When to use data transformation nodes:** - The operation you need matches a specific transformation node's purpose. 
- You want a no-code solution with a guided UI. - You prefer visual workflow building over writing expressions or code. # Approaches for transforming data Data transformation in n8n involves modifying, restructuring, or enriching data as it moves through your workflow. This includes changing data formats, filtering or aggregating values, adding computed fields, and converting data structures to work with different nodes. n8n uses a predefined [data structure](../data-structure/) that allows all nodes to process incoming data correctly. When your data doesn't match this structure, or when you need to modify it for your use case, you'll need to transform it. n8n provides several approaches for data transformation: - [Expressions](../expressions/#expressions) allow you to transform data directly in node parameters using n8n's expression syntax (`{{ }}`) - The [Code node](../expressions/#code-node) lets you write custom JavaScript or Python for complex transformations. - The [AI Transform node](../expressions/#ai-transform-node) generates transformation code from natural language prompts. - Advanced transformation techniques: For sophisticated data manipulation, n8n supports: - **Ternary operators**: Conditional logic directly in expressions (`condition ? 
valueIfTrue : valueIfFalse`) - **Chained functions**: Combine multiple transformation functions - **Complex expressions**: Use JavaScript methods and operators within expression syntax - Specialized transformation nodes for common structural transformations: - [Aggregate](../../integrations/builtin/core-nodes/n8n-nodes-base.aggregate/): group separate items together - [Limit](../../integrations/builtin/core-nodes/n8n-nodes-base.limit/): restrict the number of items - [Remove Duplicates](../../integrations/builtin/core-nodes/n8n-nodes-base.removeduplicates/): eliminate identical items - [Sort](../../integrations/builtin/core-nodes/n8n-nodes-base.sort/): order items or randomize - [Split Out](../../integrations/builtin/core-nodes/n8n-nodes-base.splitout/): separate lists into individual items - [Summarize](../../integrations/builtin/core-nodes/n8n-nodes-base.summarize/): aggregate data like Excel pivot tables For a comparison of these approaches, see [Expressions versus data nodes](../expressions/). # Referencing data Referencing data, or data mapping, means accessing information from previous nodes in your workflow. This allows you to use output from earlier steps as input for later nodes, creating dynamic workflows that pass data through multiple operations. When you reference data, you're not changing it. You're pointing to values that already exist so you can use them in node parameters, expressions, or custom code. If you want to change the data you're referencing, see [Transforming data](../transforming-data/). ## How to reference data The main way to reference data is using [expressions](../expressions/#expressions). You can create expressions by typing them in a parameter's field or dragging and dropping fields from the Input panel in the UI. Expressions automatically determine the correct item to use through [item linking](data-item-linking/). # Referencing data in the UI Data mapping means referencing data from previous nodes. 
It doesn't include changing (transforming) data, just referencing it. When you need data from a particular node in your workflow, you can [reference nodes by name](../referencing-other-nodes/). This is useful when your workflow has multiple branches or when you need to access data from several steps back. You can map data in the following ways: - Using the expressions editor. - By dragging and dropping data from the **INPUT** pane into node parameters. This generates the expression for you. For information on errors with mapping and linking items, refer to [Item linking errors](../data-item-linking/item-linking-errors/). See [Common ways of referencing](../referencing-other-nodes/#common-ways-of-referencing). # Accessing linked items in the Code node Every item in a node's input data links back to the items used in previous nodes to generate it. This is useful if you need to retrieve linked items from further back than the immediate previous node. To access the linked items from earlier in the workflow, use `$("").itemMatching(currentNodeInputIndex)`. For example, consider a workflow that does the following: 1. The Customer Datastore node generates example data: ``` [ { "id": "23423532", "name": "Jay Gatsby", "email": "gatsby@west-egg.com", "notes": "Keeps asking about a green light??", "country": "US", "created": "1925-04-10" }, { "id": "23423533", "name": "José Arcadio Buendía", "email": "jab@macondo.co", "notes": "Lots of people named after him. Very confusing", "country": "CO", "created": "1967-05-05" }, ... ] ``` 1. The Edit Fields node simplifies this data: ``` [ { "name": "Jay Gatsby" }, { "name": "José Arcadio Buendía" }, ... ] ``` 1. The Code node restores the email address to the correct person: ``` [ { "name": "Jay Gatsby", "restoreEmail": "gatsby@west-egg.com" }, { "name": "José Arcadio Buendía", "restoreEmail": "jab@macondo.co" }, ... 
] ``` The Code node does this using the following code (JavaScript, then Python):

```
for (let i = 0; i < $input.all().length; i++) {
  $input.all()[i].json.restoreEmail = $('Customer Datastore (n8n training)').itemMatching(i).json.email;
}
return $input.all();
```

```
for i, item in enumerate(_input.all()):
    _input.all()[i].json.restoreEmail = _('Customer Datastore (n8n training)').itemMatching(i).json.email
return _input.all()
```

You can view and download the example workflow from [n8n website | itemMatching usage example](https://n8n.io/workflows/1966-itemmatching-usage-example/). # Referencing previous nodes When working with data in n8n, you'll often need to reference information from the current node or from previous nodes in your workflow. ## Common ways of referencing The most frequently used methods for accessing data are: - **`$json`**: Access JSON data from the current input item - **`$('').item.json`**: Access JSON data from a [linked item](../data-item-linking/) in a previous node ## Other referencing methods These methods work in both expressions and the Code node: | Method | Description | | -------------------------- | ---------------------------------------------- | | `$binary` | Access binary data from the current input item | | `$input.item` | The input item currently being processed | | `$('').first()` | Get the first item from a specified node | | `$('').last()` | Get the last item from a specified node | | `$('').all()` | Get all items from a specified node | ## Current node input Methods for working with the input of the current node. Some methods and variables aren't available in the Code node. **Python support:** You can use Python in the Code node. It isn't available in expressions. | Method | Description | Available in Code node? 
| | ---------------------------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------- | | `$binary` | Shorthand for `$input.item.binary`. Incoming binary data from a node | | | `$input.item` | The input item of the current node that's being processed. Refer to [Item linking](../data-item-linking/) for more information on paired items and item linking. | | | `$input.all()` | All input items in current node. | | | `$input.first()` | First input item in current node. | | | `$input.last()` | Last input item in current node. | | | `$input.params` | Object containing the query settings of the previous node. This includes data such as the operation it ran, result limits, and so on. | | | `$json` | Shorthand for `$input.item.json`. Incoming JSON data from a node. Refer to [Data structure](../../data-structure/) for information on item structure. | (when running once for each item) | | `$input.context.noItemsLeft` | Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items. | | | Method | Description | | ---------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `_input.item` | The input item of the current node that's being processed. Refer to [Item linking](../data-item-linking/) for more information on paired items and item linking. | | `_input.all()` | All input items in current node. | | `_input.first()` | First input item in current node. | | `_input.last()` | Last input item in current node. | | `_input.params` | Object containing the query settings of the previous node. 
This includes data such as the operation it ran, result limits, and so on. | | `_json` | Shorthand for `_input.item.json`. Incoming JSON data from a node. Refer to [Data structure](../../data-structure/) for information on item structure. Available when you set **Mode** to **Run Once for Each Item**. | | `_input.context.noItemsLeft` | Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items. | ## Output of other nodes Methods for working with the output of other nodes. Some methods and variables aren't available in the Code node. | Method | Description | Available in Code node? | | ------------------------------------------------------ | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ----------------------- | | `$("").all(branchIndex?, runIndex?)` | Returns all items from a given node. If `branchIndex` isn't given it will default to the output that connects `node-name` with the node where you use the expression or code. | | | `$("").first(branchIndex?, runIndex?)` | The first item output by the given node. If `branchIndex` isn't given it will default to the output that connects `node-name` with the node where you use the expression or code. | | | `$("").last(branchIndex?, runIndex?)` | The last item output by the given node. If `branchIndex` isn't given it will default to the output that connects `node-name` with the node where you use the expression or code. | | | `$("").item` | The linked item. This is the item in the specified node used to produce the current item. Refer to [Item linking](../data-item-linking/) for more information on item linking. | | | `$("").params` | Object containing the query settings of the given node. 
This includes data such as the operation it ran, result limits, and so on. | | | `$("").context` | Boolean. Only available when working with the Loop Over Items node. Provides information about what's happening in the node. Use this to determine whether the node is still processing items. | | | `$("").itemMatching(currentNodeInputIndex)` | Use instead of `$("").item` in the Code node if you need to trace back from an input item. | | # Linking data items An item is a single piece of data. Nodes receive one or more items, operate on them, and output new items. Each item links back to the items in the previous nodes that generated it. Usually this just works. You need to understand this behavior in detail if you're: - Using the Code node for complex behaviors with input and output data. - Building a programmatic-style node. This section provides: - A conceptual overview of [Item linking concepts](item-linking-concepts/). - Information on [Item linking for node creators](item-linking-node-building/). - Support for end users who need to [work with the data path](item-linking-code-node/) to retrieve item data from previous nodes and link items when using the Code node. - Guidance on troubleshooting [errors](item-linking-errors/). # Preserving linking in the Code node When referencing a previous node you need to know which item to use. This is solved by item linking. Most nodes automatically link every output item to an input item, creating a chain of items that you can work back along to access previous items. For a deeper conceptual overview of this topic, refer to [Item linking concepts](../item-linking-concepts/). This document focuses on practical usage examples. When using the Code node, there are some scenarios where you need to manually supply item linking information if you want to be able to use `$("").item` later in the workflow. All these scenarios only apply if you have more than one incoming item. n8n automatically handles item linking for single items. 
These scenarios are when you: - Add new items: the new items aren't linked to any input. - Return new items. - Want to manually control the item linking. [n8n's automatic item linking](../item-linking-concepts/) handles the other scenarios. To control item linking, set `pairedItem` when returning data. For example, to link to the item at index 0:

```
[
  {
    "json": { . . . },
    // The index of the input item that generated this output item
    "pairedItem": 0
  }
]
```

### `pairedItem` usage example Take this input data:

```
[
  { "id": "23423532", "name": "Jay Gatsby" },
  { "id": "23423533", "name": "José Arcadio Buendía" },
  { "id": "23423534", "name": "Max Sendak" },
  { "id": "23423535", "name": "Zaphod Beeblebrox" },
  { "id": "23423536", "name": "Edmund Pevensie" }
]
```

And use it to generate new items, containing just the name, along with a new piece of data:

```
newItems = [];
for (let i = 0; i < $input.all().length; i++) {
  newItems.push({
    "json": {
      "name": $input.all()[i].json.name,
      // An example of a new piece of data
      "timestamp": Date.now()
    },
    // Link this output item to the input item that generated it
    "pairedItem": i
  });
}
return newItems;
```

# Item linking errors This section describes how to fix common item linking errors. ### Fix for 'Info for expression missing from previous node' This is the error message: > ERROR: Info for expression missing from previous node There's a node in the chain that doesn't return pairing information. The solution here depends on the type of the previous node: - Code nodes: make sure you return which input items the node used to produce each output item. Refer to [Preserving linking in the Code node](../item-linking-code-node/) for more information. - Custom or community nodes: the node creator needs to update the node to return which input items it uses to produce each output item. Refer to [Item linking for node creators](../item-linking-node-building/) for more information. ### Fix for 'Multiple matching items for expression' This is the error message: > ERROR: Multiple matching items for expression Sometimes n8n uses multiple items to create a single item. Examples include the Summarize, Aggregate, and Merge nodes. These nodes can combine information from multiple items. When you use `.item` and there are multiple possible matches, n8n doesn't know which one to use. To solve this you can either: - Use `.first()`, `.last()` or `.all()[index]` instead. 
Refer to [Referencing previous nodes](../../referencing-other-nodes/) for more detail on these methods. - Reference a different node that contains the same information, but doesn't have multiple matching items. # Item linking for node creators **Programmatic-style nodes only:** This guidance applies to programmatic-style nodes. If you're using declarative style, n8n handles paired items for you automatically. Use n8n's item linking to access data from items that precede the current item. n8n needs to know which input item a given output item comes from. If this information is missing, expressions in other nodes may break. As a node developer, you must ensure any items returned by your node support this. This applies to programmatic nodes (including trigger nodes). You don't need to consider item linking when building a declarative-style node. Refer to [Choose your node building approach](../../../../integrations/creating-nodes/plan/choose-node-method/) for more information on node styles. Start by reading [Item linking concepts](../item-linking-concepts/), which provides a conceptual overview of item linking, and details of the scenarios where n8n can handle the linking automatically. If you need to handle item linking manually, do this by setting `pairedItem` on each item your node returns:

```
// Use the pairedItem information of the incoming item
newItem = {
  "json": { . . . },
  "pairedItem": {
    "item": item.pairedItem,
    // Optional: choose the input to use.
    // Set this if your node combines multiple inputs.
    "input": 0
  }
};

// Or set the index manually
newItem = {
  "json": { . . . },
  "pairedItem": {
    "item": i,
    // Optional: choose the input to use.
    // Set this if your node combines multiple inputs.
    "input": 0
  }
};
```

# Expression Reference These are some commonly used expressions. A more exhaustive list appears below. 
| Category | Expression | Description | | ------------------------------ | --------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | Access current input item data | `$json` | JSON data of the current item | | | `$json.fieldName` | Field of the current item | | | `$binary` | Binary data of current item | | Access previous node data | `$("NodeName").first()` | First item in a node | | | `$("NodeName").item` | Linked item of a node. See [Item linking](../data-mapping/data-item-linking/) for more information. | | | `$("NodeName").all()` | All items of a node | | | `$("NodeName").last()` | Last item of a node | | Date/Time | `$now` | Current date and time | | | `$today` | Today's date | | | `$now.toFormat("yyyy-MM-dd")` | Format current date as a string | | Conditionals | `$if(condition, "true", "false")` | Helper function that returns a value when a condition is true or false | | | `condition ? true : false` | Ternary operator: returns one value if a condition is true, another if false | | | `$ifEmpty(value, defaultValue)` | Helper function that takes two parameters and tests the first to check if it's empty, then returns either the first parameter (if not empty) or the second parameter (if the first is empty). 
The first parameter is empty if it's `undefined`, `null`, an empty string `''`, an array where `value.length` returns `false`, or an object where `Object.keys(value).length` returns `false` | | String Methods | `text.toUpperCase()` | Convert to uppercase | | | `text.toLowerCase()` | Convert to lowercase | | | `text.includes("foo")` | Check if text contains search term | | | `text.extractEmail()` | Extract email from text | | Array Methods | `array.length` | Get array length | | | `array.join(", ")` | Join array elements using a comma as a separator | | | `array.filter(x => x <= 20)` | Filter items of array based on the filtering condition | | | `array.map(x => x.id)` | Transform items of an array | Browse the tables below to find methods by the data type on which they act. Click a method name to read detailed documentation for it. ## Array - [*`Array`*.**`append(elem1, elem2?, ..., elemN?)`**](array/#arrayappend) Adds new elements to the end of the array. Similar to `push()`, but returns the modified array. Consider using spread syntax instead (see examples). - [*`Array`*.**`average()`**](array/#arrayaverage) Returns the average of the numbers in the array. Throws an error if there are any non-numbers. - [*`Array`*.**`chunk(length)`**](array/#arraychunk) Splits the array into an array of sub-arrays, each with the given length - [*`Array`*.**`compact()`**](array/#arraycompact) Removes any empty values from the array. `null`, `""` and `undefined` count as empty. - [*`Array`*.**`concat(array2, array3?, ... arrayN?)`**](array/#arrayconcat) Joins one or more arrays onto the end of the base array - [*`Array`*.**`difference(otherArray)`**](array/#arraydifference) Compares two arrays. Returns all elements in the base array that aren't present in `otherArray`. - [*`Array`*.**`filter(function(element, index?, array?), thisValue?)`**](array/#arrayfilter) Returns an array with only the elements satisfying a condition. The condition is a function that returns `true` or `false`. 
- [*`Array`*.**`find(function(element, index?, array?), thisValue?)`**](array/#arrayfind) Returns the first element from the array that satisfies the provided condition. The condition is a function that returns `true` or `false`. Returns `undefined` if no matches are found. If you need all matching elements, use `filter()`. - [*`Array`*.**`first()`**](array/#arrayfirst) Returns the first element of the array - [*`Array`*.**`includes(element, start?)`**](array/#arrayincludes) Returns `true` if the array contains the specified element - [*`Array`*.**`indexOf(element, start?)`**](array/#arrayindexof) Returns the position of the first matching element in the array, or -1 if the element isn’t found. Positions start at 0. - [*`Array`*.**`intersection(otherArray)`**](array/#arrayintersection) Compares two arrays. Returns all elements in the base array that are also present in the other array. - [*`Array`*.**`isEmpty()`**](array/#arrayisempty) Returns `true` if the array has no elements or is `null` - [*`Array`*.**`isNotEmpty()`**](array/#arrayisnotempty) Returns `true` if the array has at least one element - [*`Array`*.**`join(separator?)`**](array/#arrayjoin) Merges all elements of the array into a single string, with an optional separator between each element. The opposite of `split()`. - [*`Array`*.**`last()`**](array/#arraylast) Returns the last element of the array - [*`Array`*.**`length`**](array/#arraylength) The number of elements in the array - [*`Array`*.**`map(function(element, index?, array?), thisValue?)`**](array/#arraymap) Creates a new array by applying a function to each element of the original array - [*`Array`*.**`max()`**](array/#arraymax) Returns the largest number in the array. Throws an error if there are any non-numbers. - [*`Array`*.**`min()`**](array/#arraymin) Returns the smallest number in the array. Throws an error if there are any non-numbers. 
- [*`Array`*.**`pluck(fieldName1?, fieldName2?, …)`**](array/#arraypluck) Returns an array containing the values of the given field(s) in each Object of the array. Ignores any array elements that aren’t Objects or don’t have a key matching the field name(s) provided. - [*`Array`*.**`randomItem()`**](array/#arrayrandomitem) Returns a randomly-chosen element from the array - [*`Array`*.**`reduce(function(prevResult, currentElem, currentIndex?, array?), initResult)`**](array/#arrayreduce) Reduces an array to a single value by applying a function to each element. The function combines the current element with the result of reducing the previous elements, producing a new result. - [*`Array`*.**`removeDuplicates(keys?)`**](array/#arrayremoveduplicates) Removes any re-occurring elements from the array - [*`Array`*.**`renameKeys(from, to)`**](array/#arrayrenamekeys) Changes all matching keys (field names) of any Objects in the array. Rename more than one key by adding extra arguments, i.e. `from1, to1, from2, to2, ...`. - [*`Array`*.**`reverse()`**](array/#arrayreverse) Reverses the order of the elements in the array - [*`Array`*.**`slice(start, end)`**](array/#arrayslice) Returns a portion of the array, from the `start` index up to (but not including) the `end` index. Indexes start at 0. - [*`Array`*.**`smartJoin(keyField, nameField)`**](array/#arraysmartjoin) Creates a single Object from an array of Objects. Each Object in the array provides one field for the returned Object. Each Object in the array must contain a field with the key name and a field with the value. - [*`Array`*.**`sort(compareFunction(a, b)?)`**](array/#arraysort) Reorders the elements of the array. For sorting strings alphabetically, no parameter is required. For sorting numbers or Objects, see examples. - [*`Array`*.**`sum()`**](array/#arraysum) Returns the total of all the numbers in the array. Throws an error if there are any non-numbers. 
- [*`Array`*.**`toJsonString()`**](array/#arraytojsonstring) Converts the array to a JSON string. The same as JavaScript’s `JSON.stringify()`. - [*`Array`*.**`toSpliced(start, deleteCount, elem1, ....., elemN)`**](array/#arraytospliced) Adds and/or removes array elements at a given position. See also `slice()` and `append()`. - [*`Array`*.**`toString()`**](array/#arraytostring) Converts the array to a string, with values separated by commas. To use a different separator, use `join()` instead. - [*`Array`*.**`union(otherArray)`**](array/#arrayunion) Concatenates two arrays and then removes any duplicates - [*`Array`*.**`unique()`**](array/#arrayunique) Removes any duplicate elements from the array ## BinaryFile - [`binaryFile`.**`directory`**](binaryfile/#binaryfiledirectory) The path to the directory that the file is stored in. Useful for distinguishing between files with the same name in different directories. Not set if n8n is configured to store files in its database. - [`binaryFile`.**`fileExtension`**](binaryfile/#binaryfilefileextension) The suffix attached to the filename (e.g. `txt`) - [`binaryFile`.**`fileName`**](binaryfile/#binaryfilefilename) The name of the file, including extension - [`binaryFile`.**`fileSize`**](binaryfile/#binaryfilefilesize) A string representing the size of the file - [`binaryFile`.**`fileType`**](binaryfile/#binaryfilefiletype) A string representing the type of the file, e.g. `image`. Corresponds to the first part of the MIME type. - [`binaryFile`.**`id`**](binaryfile/#binaryfileid) The unique ID of the file. Used to identify the file when it is stored on disk or in a storage service such as S3. - [`binaryFile`.**`mimeType`**](binaryfile/#binaryfilemimetype) A string representing the format of the file’s contents, e.g. `image/jpeg` ## Boolean - [*`Boolean`*.**`isEmpty()`**](boolean/#booleanisempty) Returns `false` for all booleans. Returns `true` for `null`. 
- [*`Boolean`*.**`toNumber()`**](boolean/#booleantonumber) Converts `true` to 1 and `false` to 0 - [*`Boolean`*.**`toString()`**](boolean/#booleantostring) Converts `true` to the string ‘true’ and `false` to the string ‘false’ ## CustomData - [`$execution.customData`.**`get(key)`**](customdata/#executioncustomdataget) Returns the custom execution data stored under the given key. [More info](/workflows/executions/custom-executions-data/) - [`$execution.customData`.**`getAll()`**](customdata/#executioncustomdatagetall) Returns all the key-value pairs of custom data that have been set in the current execution. [More info](/workflows/executions/custom-executions-data/) - [`$execution.customData`.**`set(key, value)`**](customdata/#executioncustomdataset) Stores custom execution data under the key specified. Use this to easily filter executions by this data. [More info](/workflows/executions/custom-executions-data/) - [`$execution.customData`.**`setAll(obj)`**](customdata/#executioncustomdatasetall) Sets multiple key-value pairs of custom data for the execution. Use this to easily filter executions by this data. [More info](/workflows/executions/custom-executions-data/) ## Date - [*`Date`*.**`toDateTime()`**](date/#datetodatetime) Converts a JavaScript Date to a Luxon DateTime. The DateTime contains the same information, but is easier to manipulate. ## DateTime - [*`DateTime`*.**`day`**](datetime/#datetimeday) The day of the month (1-31) - [*`DateTime`*.**`diffTo(otherDateTime, unit)`**](datetime/#datetimediffto) Returns the difference between two DateTimes, in the given unit(s) - [*`DateTime`*.**`diffToNow(unit)`**](datetime/#datetimedifftonow) Returns the difference between the current moment and the DateTime, in the given unit(s). For a textual representation, use `toRelative()` instead. - [*`DateTime`*.**`endOf(unit, opts)`**](datetime/#datetimeendof) Rounds the DateTime up to the end of one of its units, e.g. 
the end of the month - [*`DateTime`*.**`equals(other)`**](datetime/#datetimeequals) Returns `true` if the two DateTimes represent exactly the same moment and are in the same time zone. For a less strict comparison, use `hasSame()`. - [*`DateTime`*.**`extract(unit?)`**](datetime/#datetimeextract) Extracts a part of the date or time, e.g. the month, as a number. To extract textual names instead, see `format()`. - [*`DateTime`*.**`format(fmt)`**](datetime/#datetimeformat) Converts the DateTime to a string, using the format specified. [Formatting guide](https://moment.github.io/luxon/#/formatting?id=table-of-tokens). For common formats, `toLocaleString()` may be easier. - [*`DateTime`*.**`hasSame(otherDateTime, unit)`**](datetime/#datetimehassame) Returns `true` if the two DateTimes are the same, down to the unit specified. Time zones are ignored (only local times are compared), so use `toUTC()` first if needed. - [*`DateTime`*.**`hour`**](datetime/#datetimehour) The hour of the day (0-23) - [*`DateTime`*.**`isBetween(date1, date2)`**](datetime/#datetimeisbetween) Returns `true` if the DateTime lies between the two moments specified - [*`DateTime`*.**`isInDST`**](datetime/#datetimeisindst) Whether the DateTime is in daylight saving time - [*`DateTime`*.**`locale`**](datetime/#datetimelocale) The locale of a DateTime, such as 'en-GB'. The locale is used when formatting the DateTime. - [*`DateTime`*.**`millisecond`**](datetime/#datetimemillisecond) The millisecond of the second (0-999) - [*`DateTime`*.**`minus(n, unit?)`**](datetime/#datetimeminus) Subtracts a given period of time from the DateTime - [*`DateTime`*.**`minute`**](datetime/#datetimeminute) The minute of the hour (0-59) - [*`DateTime`*.**`month`**](datetime/#datetimemonth) The month (1-12) - [*`DateTime`*.**`monthLong`**](datetime/#datetimemonthlong) The textual long month name, e.g. 'October'. Defaults to the system's locale if no locale has been specified. 
- [*`DateTime`*.**`monthShort`**](datetime/#datetimemonthshort) The textual abbreviated month name, e.g. 'Oct'. Defaults to the system's locale if no locale has been specified. - [*`DateTime`*.**`plus(n, unit?)`**](datetime/#datetimeplus) Adds a given period of time to the DateTime - [*`DateTime`*.**`quarter`**](datetime/#datetimequarter) The quarter of the year (1-4) - [*`DateTime`*.**`second`**](datetime/#datetimesecond) The second of the minute (0-59) - [*`DateTime`*.**`set(values)`**](datetime/#datetimeset) Assigns new values to specified units of the DateTime. To round a DateTime, see also `startOf()` and `endOf()`. - [*`DateTime`*.**`setLocale(locale)`**](datetime/#datetimesetlocale) Sets the locale, which determines the language and formatting for the DateTime. Useful when generating a textual representation of the DateTime, e.g. with `format()` or `toLocaleString()`. - [*`DateTime`*.**`setZone(zone, opts)`**](datetime/#datetimesetzone) Converts the DateTime to the given time zone. The DateTime still represents the same moment unless specified in the options. See also `toLocal()` and `toUTC()`. - [*`DateTime`*.**`startOf(unit, opts)`**](datetime/#datetimestartof) Rounds the DateTime down to the beginning of one of its units, e.g. the start of the month - [*`DateTime`*.**`toISO(opts)`**](datetime/#datetimetoiso) Returns an ISO 8601-compliant string representation of the DateTime - [*`DateTime`*.**`toLocal()`**](datetime/#datetimetolocal) Converts a DateTime to the workflow’s local time zone. The DateTime still represents the same moment unless specified in the parameters. The workflow’s time zone can be set in the workflow settings. - [*`DateTime`*.**`toLocaleString(formatOpts)`**](datetime/#datetimetolocalestring) Returns a localised string representing the DateTime, i.e. in the language and format corresponding to its locale. Defaults to the system's locale if none specified. 
- [*`DateTime`*.**`toMillis()`**](datetime/#datetimetomillis) Returns a Unix timestamp in milliseconds (the number of milliseconds elapsed since 1 January 1970) - [*`DateTime`*.**`toRelative(options)`**](datetime/#datetimetorelative) Returns a textual representation of the time relative to now, e.g. ‘in two days’. Rounds down by default. - [*`DateTime`*.**`toSeconds()`**](datetime/#datetimetoseconds) Returns a Unix timestamp in seconds (the number of seconds elapsed since 1 January 1970) - [*`DateTime`*.**`toString()`**](datetime/#datetimetostring) Returns a string representation of the DateTime. Similar to `toISO()`. For more formatting options, see `format()` or `toLocaleString()`. - [*`DateTime`*.**`toUTC(offset, opts)`**](datetime/#datetimetoutc) Converts a DateTime to the UTC time zone. The DateTime still represents the same moment unless specified in the parameters. Use `setZone()` to convert to other zones. - [*`DateTime`*.**`weekday`**](datetime/#datetimeweekday) The day of the week. 1 is Monday and 7 is Sunday. - [*`DateTime`*.**`weekdayLong`**](datetime/#datetimeweekdaylong) The textual long weekday name, e.g. 'Wednesday'. Defaults to the system's locale if no locale has been specified. - [*`DateTime`*.**`weekdayShort`**](datetime/#datetimeweekdayshort) The textual abbreviated weekday name, e.g. 'Wed'. Defaults to the system's locale if no locale has been specified. - [*`DateTime`*.**`weekNumber`**](datetime/#datetimeweeknumber) The week number of the year (1-52 or 53, depending on the year) - [*`DateTime`*.**`year`**](datetime/#datetimeyear) The year - [*`DateTime`*.**`zone`**](datetime/#datetimezone) The time zone associated with the DateTime ## ExecData - [`$exec`.**`customData`**](execdata/#execcustomdata) Set and get custom execution data (e.g. to filter executions by). You can also do this with the ‘Execution Data’ node. 
[More info](/workflows/executions/custom-executions-data/) - [`$exec`.**`id`**](execdata/#execid) The ID of the current workflow execution - [`$exec`.**`mode`**](execdata/#execmode) Can be one of three values: `test` (the execution was triggered by clicking a button in n8n), `production` (the execution was triggered automatically), or `evaluation` (the execution was part of a workflow test). - [`$exec`.**`resumeFormUrl`**](execdata/#execresumeformurl) The URL to access a form generated by the [’Wait’ node](/integrations/builtin/core-nodes/n8n-nodes-base.wait/). - [`$exec`.**`resumeUrl`**](execdata/#execresumeurl) The webhook URL to call to resume a workflow waiting at a [’Wait’ node](/integrations/builtin/core-nodes/n8n-nodes-base.wait/). ## HTTPResponse - [`$response`.**`body`**](httpresponse/#responsebody) The body of the response object from the last HTTP call. Only available in the ‘HTTP Request’ node. - [`$response`.**`headers`**](httpresponse/#responseheaders) The headers returned by the last HTTP call. Only available in the ‘HTTP Request’ node. - [`$response`.**`statusCode`**](httpresponse/#responsestatuscode) The HTTP status code returned by the last HTTP call. Only available in the ‘HTTP Request’ node. - [`$response`.**`statusMessage`**](httpresponse/#responsestatusmessage) An optional message regarding the request status. Only available in the ‘HTTP Request’ node. ## Item - [`$item`.**`binary`**](item/#itembinary) Returns any binary data the item contains - [`$item`.**`json`**](item/#itemjson) Returns the JSON data the item contains. 
[More info](/data/data-structure/) ## NodeInputData - [`$input`.**`all(branchIndex?, runIndex?)`**](nodeinputdata/#inputall) Returns an array of the current node’s input items - [`$input`.**`first(branchIndex?, runIndex?)`**](nodeinputdata/#inputfirst) Returns the current node’s first input item - [`$input`.**`item`**](nodeinputdata/#inputitem) Returns the input item currently being processed - [`$input`.**`last(branchIndex?, runIndex?)`**](nodeinputdata/#inputlast) Returns the current node’s last input item - [`$input`.**`params`**](nodeinputdata/#inputparams) The configuration settings of the current node. These are the parameters you fill out within the node when configuring it (e.g. its operation). ## NodeOutputData - [`$()`.**`all(branchIndex?, runIndex?)`**](nodeoutputdata/#all) Returns an array of the node’s output items - [`$()`.**`first(branchIndex?, runIndex?)`**](nodeoutputdata/#first) Returns the first item output by the node - [`$()`.**`isExecuted`**](nodeoutputdata/#isexecuted) Is `true` if the node has executed, `false` otherwise - [`$()`.**`item`**](nodeoutputdata/#item) Returns the matching item, i.e. the one used to produce the current item in the current node. [More info](/data/data-mapping/data-item-linking/) - [`$()`.**`itemMatching(currentItemIndex?)`**](nodeoutputdata/#itemmatching) Returns the matching item, i.e. the one used to produce the item in the current node at the specified index. [More info](/data/data-mapping/data-item-linking/) - [`$()`.**`last(branchIndex?, runIndex?)`**](nodeoutputdata/#last) Returns the last item output by the node - [`$()`.**`params`**](nodeoutputdata/#params) The configuration settings of the given node. These are the parameters you fill out within the node’s UI (e.g. its operation). ## Number - [*`Number`*.**`abs()`**](number/#numberabs) Returns the number’s absolute value, i.e. 
removes any minus sign - [*`Number`*.**`ceil()`**](number/#numberceil) Rounds the number up to the next whole number - [*`Number`*.**`floor()`**](number/#numberfloor) Rounds the number down to the nearest whole number - [*`Number`*.**`format(locale?, options?)`**](number/#numberformat) Returns a formatted string representing the number. Useful for formatting for a specific language or currency. The same as [`Intl.NumberFormat()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat). - [*`Number`*.**`isEmpty()`**](number/#numberisempty) Returns `false` for all numbers. Returns `true` for `null`. - [*`Number`*.**`isEven()`**](number/#numberiseven) Returns `true` if the number is even. Throws an error if the number isn’t a whole number. - [*`Number`*.**`isInteger()`**](number/#numberisinteger) Returns `true` if the number is a whole number - [*`Number`*.**`isOdd()`**](number/#numberisodd) Returns `true` if the number is odd. Throws an error if the number isn’t a whole number. - [*`Number`*.**`round(decimalPlaces?)`**](number/#numberround) Returns the number rounded to the nearest whole number (or specified number of decimal places) - [*`Number`*.**`toBoolean()`**](number/#numbertoboolean) Converts the number to a boolean value. `0` becomes `false`; everything else becomes `true`. - [*`Number`*.**`toDateTime(format?)`**](number/#numbertodatetime) Converts a numerical timestamp into a DateTime. The format of the timestamp must be specified if it’s not in milliseconds. Uses the time zone in n8n (or in the workflow’s settings). - [*`Number`*.**`toLocaleString(locales?, options?)`**](number/#numbertolocalestring) Returns a localised string representing the number, i.e. in the language and format corresponding to its locale. Defaults to the system's locale if none specified. - [*`Number`*.**`toString(radix?)`**](number/#numbertostring) Converts the number to a simple textual representation. 
For more formatting options, see `toLocaleString()`. ## Object - [*`Object`*.**`compact()`**](object/#objectcompact) Removes all fields that have empty values, i.e. are `null` or `""` - [*`Object`*.**`hasField(name)`**](object/#objecthasfield) Returns `true` if there is a field called `name`. Only checks top-level keys. Comparison is case-sensitive. - [*`Object`*.**`isEmpty()`**](object/#objectisempty) Returns `true` if the Object has no keys (fields) set or is `null` - [*`Object`*.**`isNotEmpty()`**](object/#objectisnotempty) Returns `true` if the Object has at least one key (field) set - [*`Object`*.**`keepFieldsContaining(value)`**](object/#objectkeepfieldscontaining) Removes any fields whose values don’t at least partly match the given `value`. Comparison is case-sensitive. Fields that aren’t strings will always be removed. - [*`Object`*.**`keys()`**](object/#objectkeys) Returns an array with all the field names (keys) the object contains. The same as JavaScript’s `Object.keys(obj)`. - [*`Object`*.**`merge(otherObject)`**](object/#objectmerge) Merges the two Objects into a single one. If a key (field name) exists in both Objects, the value from the first (base) Object is used. - [*`Object`*.**`removeField(key)`**](object/#objectremovefield) Removes a field from the Object. The same as JavaScript’s `delete`. - [*`Object`*.**`removeFieldsContaining(value)`**](object/#objectremovefieldscontaining) Removes keys (fields) whose values at least partly match the given `value`. Comparison is case-sensitive. Fields that aren’t strings are always kept. - [*`Object`*.**`toJsonString()`**](object/#objecttojsonstring) Converts the Object to a JSON string. Similar to JavaScript’s `JSON.stringify()`. - [*`Object`*.**`urlEncode()`**](object/#objecturlencode) Generates a URL parameter string from the Object’s keys and values. Only top-level keys are supported. 
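For illustration, the `compact()` and `urlEncode()` behaviour described above can be sketched as standalone plain-JavaScript functions (hypothetical names; in n8n these are called directly on objects inside expressions):

```
// Hypothetical plain-JavaScript equivalents of the n8n Object helpers,
// for illustration only.

// compact(): drop fields whose value is empty, i.e. null or "".
function compact(obj) {
  return Object.fromEntries(
    Object.entries(obj).filter(([, value]) => value !== null && value !== '')
  );
}

// urlEncode(): build a URL parameter string from top-level keys and values.
function urlEncode(obj) {
  return new URLSearchParams(obj).toString();
}

compact({ a: 1, b: null, c: '' }) //=> { a: 1 }
urlEncode({ name: 'Jan', city: 'Berlin' }) //=> 'name=Jan&city=Berlin'
```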
- [*`Object`*.**`values()`**](object/#objectvalues) Returns an array with all the values of the fields the Object contains. The same as JavaScript’s `Object.values(obj)`. ## PrevNodeData - [**`name`**](prevnodedata/#name) The name of the node that the current input came from. Always uses the current node’s first input connector if there is more than one (e.g. in the ‘Merge’ node). - [**`outputIndex`**](prevnodedata/#outputindex) The index of the output connector that the current input came from. Use this when the previous node had multiple outputs (such as an ‘If’ or ‘Switch’ node). Always uses the current node’s first input connector if there is more than one (e.g. in the ‘Merge’ node). - [**`runIndex`**](prevnodedata/#runindex) The run of the previous node that generated the current input. Always uses the current node’s first input connector if there is more than one (e.g. in the ‘Merge’ node). ## Root - [**`$(nodeName)`**](root/) Returns the data of the specified node - [**`$binary`**](root/#binary) Returns any binary input data to the current node, for the current item. Shorthand for `$input.item.binary`. - [**`$execution`**](root/#execution) Retrieve or set metadata for the current execution - [**`$fromAI(key, description?, type?, defaultValue?)`**](root/#fromai) Use when a large language model should provide the value of a node parameter. Consider providing a description for better results. - [**`$if(condition, valueIfTrue, valueIfFalse)`**](root/#if) Returns one of two values depending on the `condition`. Similar to the `?` operator in JavaScript. - [**`$ifEmpty(value, valueIfEmpty)`**](root/#ifempty) Returns the first parameter if it isn’t empty, otherwise returns the second parameter. 
The following count as empty: `""`, `[]`, `{}`, `null`, `undefined` - [**`$input`**](root/#input) The input data of the current node - [**`$itemIndex`**](root/#itemindex) The position of the item currently being processed in the list of input items - [**`$jmespath(obj, expression)`**](root/#jmespath) Extracts data from an object (or array of objects) using a [JMESPath](/code/cookbook/jmespath/) expression. Useful for querying complex, nested objects. Returns `undefined` if the expression is invalid. - [**`$json`**](root/#json) Returns the JSON input data to the current node, for the current item. Shorthand for `$input.item.json`. [More info](/data/data-structure/) - [**`$max(num1, num2, …, numN)`**](root/#max) Returns the highest of the given numbers - [**`$min(num1, num2, …, numN)`**](root/#min) Returns the lowest of the given numbers - [**`$nodeVersion`**](root/#nodeversion) The version of the current node (as displayed at the bottom of the node’s settings pane) - [**`$now`**](root/#now) A DateTime representing the current moment. Uses the workflow’s time zone (which can be changed in the workflow settings). - [**`$pageCount`**](root/#pagecount) The number of results pages the node has fetched. Only available in the ‘HTTP Request’ node. - [**`$parameter`**](root/#parameter) The configuration settings of the current node. These are the parameters you fill out within the node’s UI (e.g. its operation). - [**`$prevNode`**](root/#prevnode) Information about the node that the current input came from. When in a ‘Merge’ node, always uses the first input connector. - [**`$request`**](root/#request) The request object sent during the last run of the node. Only available in the ‘HTTP Request’ node. - [**`$response`**](root/#response) The response returned by the last HTTP call. Only available in the ‘HTTP Request’ node. - [**`$runIndex`**](root/#runindex) The index of the current run of the current node execution. Starts at 0. 
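A plain-JavaScript sketch of the `$if` and `$ifEmpty` behaviour described above (hypothetical standalone functions, for illustration only; inside n8n these are built-in expression helpers):

```
// $if(condition, valueIfTrue, valueIfFalse): like JavaScript's ternary operator.
function $if(condition, valueIfTrue, valueIfFalse) {
  return condition ? valueIfTrue : valueIfFalse;
}

// $ifEmpty(value, valueIfEmpty): "", [], {}, null and undefined count as empty.
function $ifEmpty(value, valueIfEmpty) {
  const isEmpty =
    value === null ||
    value === undefined ||
    value === '' ||
    (Array.isArray(value) && value.length === 0) ||
    (typeof value === 'object' && !Array.isArray(value) && Object.keys(value).length === 0);
  return isEmpty ? valueIfEmpty : value;
}

$if(2 > 1, 'yes', 'no') //=> 'yes'
$ifEmpty('', 'fallback') //=> 'fallback'
$ifEmpty('hello', 'fallback') //=> 'hello'
```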
- [**`$secrets`**](root/#secrets) The secrets from an [external secrets vault](/external-secrets/), if configured. Secret values are never displayed to the user. Only available in credential fields. - [**`$today`**](root/#today) A DateTime representing midnight at the start of the current day. Uses the instance’s time zone (unless overridden in the workflow’s settings). - [**`$vars`**](root/#vars) The [variables](/code/variables/) available to the workflow - [**`$workflow`**](root/#workflow) Information about the current workflow ## String - [*`String`*.**`base64Decode()`**](string/#stringbase64decode) Converts a base64-encoded string to plain text - [*`String`*.**`base64Encode()`**](string/#stringbase64encode) Converts plain text to a base64-encoded string - [*`String`*.**`concat(string1, string2?, ..., stringN?)`**](string/#stringconcat) Joins one or more strings onto the end of the base string. Alternatively, use the `+` operator (see examples). - [*`String`*.**`extractDomain()`**](string/#stringextractdomain) If the string is an email address or URL, returns its domain (or `undefined` if nothing found). If the string also contains other content, try using `extractEmail()` or `extractUrl()` first. - [*`String`*.**`extractEmail()`**](string/#stringextractemail) Extracts the first email found in the string. Returns `undefined` if none is found. - [*`String`*.**`extractUrl()`**](string/#stringextracturl) Extracts the first URL found in the string. Returns `undefined` if none is found. Only recognizes full URLs, e.g. those starting with `http`. - [*`String`*.**`extractUrlPath()`**](string/#stringextracturlpath) Returns the part of a URL after the domain, or `undefined` if no URL found. If the string also contains other content, try using `extractUrl()` first. - [*`String`*.**`hash(algo?)`**](string/#stringhash) Returns the string hashed with the given algorithm. Defaults to md5 if not specified. 
- [*`String`*.**`includes(searchString, start?)`**](string/#stringincludes) Returns `true` if the string contains the `searchString`. Case-sensitive. - [*`String`*.**`indexOf(searchString, start?)`**](string/#stringindexof) Returns the index (position) of the first occurrence of `searchString` within the base string, or -1 if not found. Case-sensitive. - [*`String`*.**`isDomain()`**](string/#stringisdomain) Returns `true` if the string is a domain - [*`String`*.**`isEmail()`**](string/#stringisemail) Returns `true` if the string is an email - [*`String`*.**`isEmpty()`**](string/#stringisempty) Returns `true` if the string has no characters or is `null` - [*`String`*.**`isNotEmpty()`**](string/#stringisnotempty) Returns `true` if the string has at least one character - [*`String`*.**`isNumeric()`**](string/#stringisnumeric) Returns `true` if the string represents a number - [*`String`*.**`isUrl()`**](string/#stringisurl) Returns `true` if the string is a valid URL - [*`String`*.**`length`**](string/#stringlength) The number of characters in the string - [*`String`*.**`match(regexp)`**](string/#stringmatch) Matches the string against a [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions). Returns an array containing the first match, or all matches if the `g` flag is set in the regular expression. Returns `null` if no matches are found. For checking whether text is present, consider `includes()` instead. - [*`String`*.**`parseJson()`**](string/#stringparsejson) Returns the JavaScript Object or value represented by the string, or `undefined` if the string isn’t valid JSON. Single-quoted JSON is not supported. - [*`String`*.**`quote(mark?)`**](string/#stringquote) Wraps a string in quotation marks, and escapes any quotation marks already in the string. Useful when constructing JSON, SQL, etc. 
- [*`String`*.**`removeMarkdown()`**](string/#stringremovemarkdown) Removes any Markdown formatting from the string. Also removes HTML tags. - [*`String`*.**`removeTags()`**](string/#stringremovetags) Removes tags, such as HTML or XML, from the string - [*`String`*.**`replace(pattern, replacement)`**](string/#stringreplace) Returns a string with the first occurrence of `pattern` replaced by `replacement`. To replace all occurrences, use `replaceAll()` instead. - [*`String`*.**`replaceAll(pattern, replacement)`**](string/#stringreplaceall) Returns a string with all occurrences of `pattern` replaced by `replacement` - [*`String`*.**`replaceSpecialChars()`**](string/#stringreplacespecialchars) Replaces special characters in the string with the closest ASCII character - [*`String`*.**`search(regexp)`**](string/#stringsearch) Returns the index (position) of the first occurrence of a pattern within the string, or -1 if not found. The pattern is specified using a [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions). To use text instead, see `indexOf()`. - [*`String`*.**`slice(start, end?)`**](string/#stringslice) Extracts a fragment of the string at the given position. For more advanced extraction, see `match()`. - [*`String`*.**`split(separator?, limit?)`**](string/#stringsplit) Splits the string into an array of substrings. Each split is made at the `separator`, and the separator isn’t included in the output. The opposite of using `join()` on an array. - [*`String`*.**`startsWith(searchString, start?)`**](string/#stringstartswith) Returns `true` if the string starts with `searchString`. Case-sensitive. - [*`String`*.**`substring(start, end?)`**](string/#stringsubstring) Extracts a fragment of the string at the given position. For more advanced extraction, see `match()`. - [*`String`*.**`toBoolean()`**](string/#stringtoboolean) Converts the string to a boolean value. 
`0`, `false` and `no` resolve to `false`, everything else to `true`. Case-insensitive. - [*`String`*.**`toDateTime()`**](string/#stringtodatetime) Converts the string to a DateTime. Useful for further transformation. Supported formats for the string are ISO 8601, HTTP, RFC2822, SQL and Unix timestamp in milliseconds. To parse other formats, use [`DateTime.fromFormat()`](https://moment.github.io/luxon/api-docs/index.html#datetimefromformat). - [*`String`*.**`toJsonString()`**](string/#stringtojsonstring) Prepares the string to be inserted into a JSON object. Escapes any quotes and special characters (e.g. new lines), and wraps the string in quotes. The same as JavaScript’s `JSON.stringify()`. - [*`String`*.**`toLowerCase()`**](string/#stringtolowercase) Converts all letters in the string to lower case - [*`String`*.**`toNumber()`**](string/#stringtonumber) Converts a string representing a number to a number. Throws an error if the string doesn’t start with a valid number. - [*`String`*.**`toSentenceCase()`**](string/#stringtosentencecase) Changes the capitalization of the string to sentence case. The first letter of each sentence is capitalized and all others are lowercased. - [*`String`*.**`toSnakeCase()`**](string/#stringtosnakecase) Changes the format of the string to snake case. Spaces and dashes are replaced by `_`, symbols are removed and all letters are lowercased. - [*`String`*.**`toTitleCase()`**](string/#stringtotitlecase) Changes the capitalization of the string to title case. The first letter of each word is capitalized and the others left unchanged. Short prepositions and conjunctions aren’t capitalized (e.g. ‘a’, ‘the’). - [*`String`*.**`toUpperCase()`**](string/#stringtouppercase) Converts all letters in the string to upper case (capitals) - [*`String`*.**`trim()`**](string/#stringtrim) Removes whitespace from both ends of the string. Whitespace includes new lines, tabs, spaces, etc. 
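Rough plain-JavaScript sketches of the `toBoolean()` and `toSnakeCase()` behaviour described above (hypothetical standalone functions; the real n8n implementations may differ in edge cases):

```
// toBoolean(): '0', 'false' and 'no' resolve to false, everything else to true.
// Case-insensitive.
function toBoolean(str) {
  return !['0', 'false', 'no'].includes(str.toLowerCase());
}

// toSnakeCase(): spaces and dashes become underscores, remaining symbols
// are removed, and all letters are lowercased.
function toSnakeCase(str) {
  return str
    .replace(/[ -]+/g, '_')
    .replace(/[^\w]/g, '')
    .toLowerCase();
}

toBoolean('No') //=> false
toSnakeCase('Hello World-Wide!') //=> 'hello_world_wide'
```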
- [*`String`*.**`urlDecode(allChars?)`**](string/#stringurldecode) Decodes a URL-encoded string. Replaces any character codes in the form of `%XX` with their corresponding characters. - [*`String`*.**`urlEncode(allChars?)`**](string/#stringurlencode) Encodes the string so that it can be used in a URL. Spaces and special characters are replaced with codes of the form `%XX`. ## WorkflowData - [`$workflow`.**`active`**](workflowdata/#workflowactive) Whether the workflow is active - [`$workflow`.**`id`**](workflowdata/#workflowid) The workflow ID. Can also be found in the workflow’s URL. - [`$workflow`.**`name`**](workflowdata/#workflowname) The name of the workflow, as shown at the top of the editor # Array ## *`Array`*.**`append()`** **Description:** Adds new elements to the end of the array. Similar to `push()`, but returns the modified array. Consider using spread syntax instead (see examples). **Syntax:** *`Array`*.append(elem1, elem2?, ..., elemN?) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `elem1` (any) - The first element to append - `elem2` (any) - optional - The second element to append - `elemN` (any) - optional - The Nth element to append **Examples:** ``` // arr = ['forget', 'me'] arr.append('not') //=> arr = ['forget', 'me', 'not'] ``` ``` // arr = [9, 0, 2] arr.append(1, 0) //=> [9, 0, 2, 1, 0] // Consider using spread syntax instead [...arr, 1, 0] //=> [9, 0, 2, 1, 0] ``` ## *`Array`*.**`average()`** **Description:** Returns the average of the numbers in the array. Throws an error if there are any non-numbers. 
**Syntax:** *`Array`*.average() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // arr = [12, 1, 5] arr.average() //=> 6 ``` ## *`Array`*.**`chunk()`** **Description:** Splits the array into an array of sub-arrays, each with the given length **Syntax:** *`Array`*.chunk(length) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `length` (Number) - The number of elements in each chunk **Examples:** ``` // arr = [1, 2, 3, 4, 5, 6] arr.chunk(2) //=> [ [1,2], [3,4], [5,6] ] ``` ## *`Array`*.**`compact()`** **Description:** Removes any empty values from the array. `null`, `""` and `undefined` count as empty. **Syntax:** *`Array`*.compact() **Returns:** Array **Source:** Custom n8n functionality **Examples:** ``` // arr = [2, null, 1, ""] arr.compact() //=> [2, 1] ``` ## *`Array`*.**`concat()`** **Description:** Joins one or more arrays onto the end of the base array **Syntax:** *`Array`*.concat(array2, array3?, ... arrayN?) **Returns:** Array **Source:** JavaScript function **Parameters:** - `array2` (Array) - The first array to be joined on the end of the base array - `array3` (Array) - optional - The second array to be joined on to the end of the base array - `arrayN` (Array) - optional - The Nth array to be joined on to the end of the base array **Examples:** ``` // arr1 = ['Nathan', 'Jan'] arr1.concat(['Steve', 'Bill']) // ['Nathan', 'Jan', 'Steve', 'Bill'] ``` ``` // arr1 = [5, 4] // arr2 = [100, 101] // arr3 = ['a', 'b'] arr1.concat(arr2, arr3) // [5, 4, 100, 101, 'a', 'b'] ``` ## *`Array`*.**`difference()`** **Description:** Compares two arrays. Returns all elements in the base array that aren't present in `otherArray`. 
**Syntax:** *`Array`*.difference(otherArray) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `otherArray` (Array) - The array to compare to the base array **Examples:** ``` // arr = [1, 2, 3] arr.difference([2, 3]) //=> [1] ``` ## *`Array`*.**`filter()`** **Description:** Returns an array with only the elements satisfying a condition. The condition is a function that returns `true` or `false`. **Syntax:** *`Array`*.filter(function(element, index?, array?), thisValue?) **Returns:** Array **Source:** JavaScript function **Parameters:** - `function()` (function) - A function to run for each array element. If it returns `true`, the element will be kept. Consider using [arrow function notation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions) to save space. - `element` (any) - The value of the current element - `index` (Number) - optional - The position of the current element in the array (starting at 0) - `array` (Array) - optional - The array being processed. Rarely needed. - `thisValue` (any) - optional - A value passed to the function as its `this` value. Rarely needed. **Examples:** ``` // Keep ages over 18 (using arrow function notation): // ages = [12, 33, 16, 40] ages.filter(age => (age > 18)) //=> [33, 40] ``` ``` // Keep names under 5 letters long (using arrow function notation): // names = ['Nathan', 'Bob', 'Sebastian'] names.filter(name => (name.length < 5)) //=> ["Bob"] // Or using traditional function notation: names.filter(function(name){return name.length < 5}) //=> ["Bob"] ``` ``` // Keep numbers at odd indexes // nums = [1, 7, 3, 10, 5] nums.filter((num, index) => {return index%2 != 0}) //=> [7, 10] ``` ## *`Array`*.**`find()`** **Description:** Returns the first element from the array that satisfies the provided condition. The condition is a function that returns `true` or `false`. Returns `undefined` if no matches are found. 
If you need all matching elements, use `filter()`. **Syntax:** *`Array`*.find(function(element, index?, array?), thisValue?) **Returns:** any **Source:** JavaScript function **Parameters:** - `function()` (function) - A function to run for each array element. As soon as it returns `true`, that element will be returned. Consider using [arrow function notation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions) to save space. - `element` (any) - The value of the current element - `index` (Number) - optional - The position of the current element in the array (starting at 0) - `array` (Array) - optional - The array being processed. Rarely needed. - `thisValue` (any) - optional - A value passed to the function as its `this` value. Rarely needed. **Examples:** ``` // Find first age over 18 (using arrow function notation): // ages = [12, 33, 16, 40] ages.find(age => (age > 18)) //=> 33 ``` ``` // Find first name under 5 letters long (using arrow function notation): // names = ['Nathan', 'Bob', 'Sebastian'] names.find(name => (name.length < 5)) //=> 'Bob' // Or using traditional function notation: names.find(function(name){return name.length < 5}) //=> 'Bob' ``` ## *`Array`*.**`first()`** **Description:** Returns the first element of the array **Syntax:** *`Array`*.first() **Returns:** any **Source:** Custom n8n functionality **Examples:** ``` // arr = ['quick', 'brown', 'fox'] arr.first() //=> 'quick' ``` ## *`Array`*.**`includes()`** **Description:** Returns `true` if the array contains the specified element **Syntax:** *`Array`*.includes(element, start?) 
**Returns:** Boolean **Source:** JavaScript function **Parameters:** - `element` (any) - The value to search the array for - `start` (Number) - optional - The index to start looking from **Examples:** ``` // names = ["Bob", "Bill", "Nat"]; names.includes("Nat") //=> true names.includes("Nathan") //=> false ``` ## *`Array`*.**`indexOf()`** **Description:** Returns the position of the first matching element in the array, or -1 if the element isn’t found. Positions start at 0. **Syntax:** *`Array`*.indexOf(element, start?) **Returns:** Number **Source:** JavaScript function **Parameters:** - `element` (any) - The value to look for - `start` (Number) - optional - The index to start looking from **Examples:** ``` // names = ["Bob", "Bill", "Nat"]; names.indexOf("Nat") //=> 2 ``` ``` // names = ["Bob", "Bill", "Nat"]; names.indexOf("Nathan") //=> -1 ``` ## *`Array`*.**`intersection()`** **Description:** Compares two arrays. Returns all elements in the base array that are also present in the other array. 
**Syntax:** *`Array`*.intersection(otherArray) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `otherArray` (Array) - The array to compare to the base array **Examples:** ``` // arr = [1, 2] arr.intersection([2, 3]) //=> [2] ``` ## *`Array`*.**`isEmpty()`** **Description:** Returns `true` if the array has no elements or is `null` **Syntax:** *`Array`*.isEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // arr = [] arr.isEmpty() //=> true ``` ``` // arr = ['quick', 'brown', 'fox'] arr.isEmpty() //=> false ``` ## *`Array`*.**`isNotEmpty()`** **Description:** Returns `true` if the array has at least one element **Syntax:** *`Array`*.isNotEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // arr = ['quick', 'brown', 'fox'] arr.isNotEmpty() //=> true ``` ``` // arr = [] arr.isNotEmpty() //=> false ``` ## *`Array`*.**`join()`** **Description:** Merges all elements of the array into a single string, with an optional separator between each element. The opposite of `split()`. **Syntax:** *`Array`*.join(separator?) 
**Returns:** String **Source:** JavaScript function **Parameters:** - `separator` (String) - optional - The character(s) to insert between each element **Examples:** ``` // arr = ['Wind', 'Water', 'Fire'] arr.join(" + ") //=> 'Wind + Water + Fire' ``` ``` // arr = ['Wind', 'Water', 'Fire'] arr.join() //=> 'Wind,Water,Fire' arr.join("") //=> 'WindWaterFire' ``` ## *`Array`*.**`last()`** **Description:** Returns the last element of the array **Syntax:** *`Array`*.last() **Returns:** any **Source:** Custom n8n functionality **Examples:** ``` // arr = ['quick', 'brown', 'fox'] arr.last() //=> 'fox' ``` ## *`Array`*.**`length`** **Description:** The number of elements in the array **Syntax:** *`Array`*.length **Returns:** Number **Source:** JavaScript function **Examples:** ``` // names = ["Bob", "Bill", "Nat"]; names.length //=> 3 ``` ## *`Array`*.**`map()`** **Description:** Creates a new array by applying a function to each element of the original array **Syntax:** *`Array`*.map(function(element, index?, array?), thisValue?) **Returns:** Array **Source:** JavaScript function **Parameters:** - `function()` (function) - A function to run for each array element. In the new array, the output of this function takes the place of the element. Consider using [arrow function notation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions) to save space. - `element` (any) - The value of the current element - `index` (Number) - optional - The position of the current element in the array (starting at 0) - `array` (Array) - optional - The array being processed. Rarely needed. - `thisValue` (any) - optional - A value passed to the function as its `this` value. Rarely needed. 
**Examples:** ``` // Double all numbers (using arrow function notation): // nums = [12, 33, 16] nums.map(num => num*2) //=> [24, 66, 32] ``` ``` // Convert elements to uppercase (using arrow function notation): // words = ['hello', 'old', 'chap'] words.map(word => word.toUpperCase()) //=> ['HELLO', 'OLD', 'CHAP'] // Or using traditional function notation: words.map(function(word){return word.toUpperCase()}) //=> ['HELLO', 'OLD', 'CHAP'] ``` ## *`Array`*.**`max()`** **Description:** Returns the largest number in the array. Throws an error if there are any non-numbers. **Syntax:** *`Array`*.max() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // arr = [1, 12, 5] arr.max() //=> 12 ``` ## *`Array`*.**`min()`** **Description:** Returns the smallest number in the array. Throws an error if there are any non-numbers. **Syntax:** *`Array`*.min() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // arr = [12, 1, 5] arr.min() //=> 1 ``` ## *`Array`*.**`pluck()`** **Description:** Returns an array containing the values of the given field(s) in each Object of the array. Ignores any array elements that aren’t Objects or don’t have a key matching the field name(s) provided. 
**Syntax:** *`Array`*.pluck(fieldName1?, fieldName2?, …) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `fieldName1` (String) - optional - The first key to retrieve the value of - `fieldName2` (String) - optional - The second key to retrieve the value of **Examples:** ``` // arr = [{'name':'Nathan','age':42},{'name':'Jan','city':'Berlin'}] arr.pluck('name') //=> ["Nathan", "Jan"] ``` ``` // arr = [{'name':'Nathan','age':42},{'name':'Jan','city':'Berlin'}] arr.pluck('age') //=> [42] ``` ## *`Array`*.**`randomItem()`** **Description:** Returns a randomly-chosen element from the array **Syntax:** *`Array`*.randomItem() **Returns:** any **Source:** Custom n8n functionality **Examples:** ``` // arr = ['quick', 'brown', 'fox'] arr.randomItem() //=> 'brown' arr.randomItem() //=> 'quick' ``` ## *`Array`*.**`reduce()`** **Description:** Reduces an array to a single value by applying a function to each element. The function combines the current element with the result of reducing the previous elements, producing a new result. **Syntax:** *`Array`*.reduce(function(prevResult, currentElem, currentIndex?, array?), initResult) **Returns:** any **Source:** JavaScript function **Parameters:** - `function()` (function) - A function to run for each array element. Takes the accumulated result and the current element, and returns a new accumulated result. Consider using [arrow function notation](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Functions/Arrow_functions) to save space. - `prevResult` (any) - The accumulated result from applying the function to previous elements. When processing the first element, it’s set to `initResult` (or the first array element if not specified). - `currentElem` (any) - The value in the array currently being processed - `currentIndex` (Number) - optional - The position of the current element in the array (starting at 0) - `array` (Array) - optional - The array being processed. Rarely needed.
- `initResult` (any) - optional - The initial value of the prevResult, used when calling the function on the first array element. When not specified it’s set to the first array element, and the first function call is on the second array element instead of the first. **Examples:** ``` // Sum numbers (using arrow function notation): // nums = [12, 33, 16] nums.reduce((result, num) => (result+num), 0) //=> 61 ``` ``` // Join letters and uppercase (using arrow function notation): // chars = ['a', 'b', 'c'] chars.reduce((result, char) => (result+char.toUpperCase()), '') //=> 'ABC' // Or using traditional function notation: chars.reduce(function(result, char){return result+char.toUpperCase()}, '') //=> 'ABC' ``` ## *`Array`*.**`removeDuplicates()`** **Description:** Removes any re-occurring elements from the array **Syntax:** *`Array`*.removeDuplicates(keys?) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `keys` (String) - optional - For use on arrays of Objects. A key, or comma-separated list of keys to restrict the check to. If omitted, all keys are checked. **Examples:** ``` // arr = ['quick', 'brown', 'quick'] arr.removeDuplicates() //=> ['quick', 'brown'] ``` ## *`Array`*.**`renameKeys()`** **Description:** Changes all matching keys (field names) of any Objects in the array. Rename more than one key by adding extra arguments, i.e. `from1, to1, from2, to2, ...`. 
**Syntax:** *`Array`*.renameKeys(from, to) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `from` (String) - The key to rename - `to` (String) - The new key name **Examples:** ``` // arr = [{'name':'bob'},{'name':'meg'}] arr.renameKeys('name', 'x') //=> [{"x": "bob"},{"x": "meg"}] ``` ## *`Array`*.**`reverse()`** **Description:** Reverses the order of the elements in the array **Syntax:** *`Array`*.reverse() **Returns:** Array **Source:** JavaScript function **Examples:** ``` // arr = ['dog', 'bites', 'man'] arr.reverse() //=> ['man', 'bites', 'dog'] ``` ## *`Array`*.**`slice()`** **Description:** Returns a portion of the array, from the `start` index up to (but not including) the `end` index. Indexes start at 0. **Syntax:** *`Array`*.slice(start, end) **Returns:** Array **Source:** JavaScript function **Parameters:** - `start` (Number) - optional - The position to start from. Positions start at 0. Negative numbers count back from the end of the array. - `end` (Number) - optional - The position to select up to. The element at the end position is not included. Negative numbers select from the end of the array. If omitted, will extract to the end of the array. **Examples:** ``` // arr = [1, 2, 3, 4, 5] arr.slice(2, 4) //=> [3, 4] ``` ``` // arr = [1, 2, 3, 4, 5] arr.slice(2) //=> [3, 4, 5] ``` ``` // arr = [1, 2, 3, 4, 5] arr.slice(-2) //=> [4, 5] ``` ## *`Array`*.**`smartJoin()`** **Description:** Creates a single Object from an array of Objects. Each Object in the array provides one field for the returned Object. Each Object in the array must contain a field with the key name and a field with the value.
**Syntax:** *`Array`*.smartJoin(keyField, nameField) **Returns:** Object **Source:** Custom n8n functionality **Parameters:** - `keyField` (String) - The field in each Object containing the key name - `nameField` (String) - The field in each Object containing the value **Examples:** ``` // arr = [{'field':'age','value':2},{'field':'city','value':'Berlin'}] arr.smartJoin('field','value') //=> {"age": 2, "city": "Berlin"} ``` ## *`Array`*.**`sort()`** **Description:** Reorders the elements of the array. For sorting strings alphabetically, no parameter is required. For sorting numbers or Objects, see examples. **Syntax:** *`Array`*.sort(compareFunction(a, b)?) **Returns:** Array **Source:** JavaScript function **Parameters:** - `compareFunction` (function) - optional - A function to compare two array elements and return a number indicating which one comes first: **Return < 0**: `a` comes before `b` **Return 0**: `a` and `b` are equal (leave order unchanged) **Return > 0**: `b` comes before `a` If no function is specified, converts all values to strings and compares their character codes.
- `a` (any) - The first element to compare in the function - `b` (any) - The second element to compare in the function **Examples:** ``` // No need for a param when sorting strings // arr = ['d', 'a', 'c', 'b'] arr.sort() //=> ['a', 'b', 'c', 'd'] ``` ``` // To sort numbers, you must use a function // arr = [4, 2, 1, 3] arr.sort((a, b) => (a - b)) //=> [1, 2, 3, 4] // Or using traditional function notation: arr.sort(function(a, b){return a - b}) //=> [1, 2, 3, 4] ``` ``` // Sort in reverse alphabetical order // arr = ['d', 'a', 'c', 'b'] arr.sort((a, b) => b.localeCompare(a)) //=> ['d', 'c', 'b', 'a'] ``` ``` // Sort array of objects by a property // arr = [{name:'Zak'}, {name:'Abe'}, {name:'Bob'}] arr.sort((a, b) => a.name.localeCompare(b.name)) //=> [{name:'Abe'}, {name:'Bob'}, {name:'Zak'}] ``` ## *`Array`*.**`sum()`** **Description:** Returns the total of all the numbers in the array. Throws an error if there are any non-numbers. **Syntax:** *`Array`*.sum() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // arr = [12, 1, 5] arr.sum() //=> 18 ``` ## *`Array`*.**`toJsonString()`** **Description:** Converts the array to a JSON string. The same as JavaScript’s `JSON.stringify()`. **Syntax:** *`Array`*.toJsonString() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` // arr = ['quick', 'brown', 'fox'] arr.toJsonString() //=> '["quick","brown","fox"]' ``` ## *`Array`*.**`toSpliced()`** **Description:** Adds and/or removes array elements at a given position. See also `slice()` and `append()`. **Syntax:** *`Array`*.toSpliced(start, deleteCount, elem1, ..., elemN) **Returns:** Array **Source:** JavaScript function **Parameters:** - `start` (Number) - The index (position) to add or remove elements at. New elements are inserted before the element at this index. A negative index counts back from the end of the array. - `deleteCount` (Number) - optional - The number of elements to remove.
If omitted, removes all elements from the `start` index onwards. - `elem1` (any) - optional - The first new element to be added - `elem2` (any) - optional - The second new element to be added - `elemN` (any) - optional - The Nth new element to be added **Examples:** ``` // Insert element at index 1 // months = ['Jan', 'Mar'] months.toSpliced(1, 0, "Feb") //=> ['Jan', 'Feb', 'Mar'] ``` ``` // Delete 2 elements starting at index 1 // arr = ["don't", "make", "me", "do", "this"] arr.toSpliced(1, 2) //=> ["don't", "do", "this"] ``` ``` // Replace 2 elements starting at index 1 // arr = ["don't", "be", "evil"] arr.toSpliced(1, 2, 'eat', 'slugs') //=> ["don't", "eat", "slugs"] ``` ## *`Array`*.**`toString()`** **Description:** Converts the array to a string, with values separated by commas. To use a different separator, use `join()` instead. **Syntax:** *`Array`*.toString() **Returns:** String **Source:** JavaScript function **Examples:** ``` // words = ['make', 'my', 'day'] words.toString() //=> 'make,my,day' ``` ## *`Array`*.**`union()`** **Description:** Concatenates two arrays and then removes any duplicates **Syntax:** *`Array`*.union(otherArray) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `otherArray` (Array) - The array to union with the base array **Examples:** ``` // arr = [1, 2] arr.union([2, 3]) //=> [1, 2, 3] ``` ## *`Array`*.**`unique()`** **Description:** Removes any duplicate elements from the array **Syntax:** *`Array`*.unique() **Returns:** Array **Source:** Custom n8n functionality **Examples:** ``` // arr = ['quick', 'brown', 'quick'] arr.unique() //=> ['quick', 'brown'] ``` # BinaryFile ## `binaryFile`.**`directory`** **Description:** The path to the directory that the file is stored in. Useful for distinguishing between files with the same name in different directories. Not set if n8n is configured to store files in its database.
**Syntax:** `binaryFile`.**`directory`** **Returns:** String **Source:** Custom n8n functionality ## `binaryFile`.**`fileExtension`** **Description:** The suffix attached to the filename (e.g. `txt`) **Syntax:** `binaryFile`.**`fileExtension`** **Returns:** String **Source:** Custom n8n functionality ## `binaryFile`.**`fileName`** **Description:** The name of the file, including extension **Syntax:** `binaryFile`.**`fileName`** **Returns:** String **Source:** Custom n8n functionality ## `binaryFile`.**`fileSize`** **Description:** A string representing the size of the file **Syntax:** `binaryFile`.**`fileSize`** **Returns:** String **Source:** Custom n8n functionality ## `binaryFile`.**`fileType`** **Description:** A string representing the type of the file, e.g. `image`. Corresponds to the first part of the MIME type. **Syntax:** `binaryFile`.**`fileType`** **Returns:** String **Source:** Custom n8n functionality ## `binaryFile`.**`id`** **Description:** The unique ID of the file. Used to identify the file when it is stored on disk or in a storage service such as S3. **Syntax:** `binaryFile`.**`id`** **Returns:** String **Source:** Custom n8n functionality ## `binaryFile`.**`mimeType`** **Description:** A string representing the format of the file’s contents, e.g. `image/jpeg` **Syntax:** `binaryFile`.**`mimeType`** **Returns:** String **Source:** Custom n8n functionality # Boolean ## *`Boolean`*.**`isEmpty()`** **Description:** Returns `false` for all booleans. Returns `true` for `null`. 
**Syntax:** *`Boolean`*.isEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // bool = true bool.isEmpty() // => false ``` ``` // bool = false bool.isEmpty() // => false ``` ``` // bool = null bool.isEmpty() // => true ``` ## *`Boolean`*.**`toNumber()`** **Description:** Converts `true` to 1 and `false` to 0 **Syntax:** *`Boolean`*.toNumber() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` true.toNumber() //=> 1 ``` ``` false.toNumber() //=> 0 ``` ## *`Boolean`*.**`toString()`** **Description:** Converts `true` to the string ‘true’ and `false` to the string ‘false’ **Syntax:** *`Boolean`*.toString() **Returns:** String **Source:** JavaScript function **Examples:** ``` // bool = true bool.toString() //=> 'true' ``` ``` // bool = false bool.toString() //=> 'false' ``` # CustomData ## `$execution.customData`.**`get()`** **Description:** Returns the custom execution data stored under the given key. [More info](/workflows/executions/custom-executions-data/) **Syntax:** `$execution.customData`.get(key) **Returns:** String **Source:** Custom n8n functionality **Parameters:** - `key` (String) - The key (identifier) under which the data is stored **Examples:** ``` // Get the user's email (which was previously stored) $execution.customData.get("user_email") //=> "me@example.com" ``` ## `$execution.customData`.**`getAll()`** **Description:** Returns all the key-value pairs of custom data that have been set in the current execution. [More info](/workflows/executions/custom-executions-data/) **Syntax:** `$execution.customData`.getAll() **Returns:** Object **Source:** Custom n8n functionality **Examples:** ``` $execution.customData.getAll() //=> {"user_email": "me@example.com", "id": 1234} ``` ## `$execution.customData`.**`set()`** **Description:** Stores custom execution data under the key specified. Use this to easily filter executions by this data. 
[More info](/workflows/executions/custom-executions-data/) **Syntax:** `$execution.customData`.set(key, value) **Source:** Custom n8n functionality **Parameters:** - `key` (String) - The key (identifier) under which the data is stored - `value` (String) - The data to store **Examples:** ``` // Store the user's email, to easily retrieve all execs related to that user later $execution.customData.set("user_email", "me@example.com") ``` ## `$execution.customData`.**`setAll()`** **Description:** Sets multiple key-value pairs of custom data for the execution. Use this to easily filter executions by this data. [More info](/workflows/executions/custom-executions-data/) **Syntax:** `$execution.customData`.setAll(obj) **Source:** Custom n8n functionality **Parameters:** - `obj` (Object) - A JavaScript object containing key-value pairs of the data to set **Examples:** ``` $execution.customData.setAll({"user_email": "me@example.com", "id": 1234}) ``` # Date ## *`Date`*.**`toDateTime()`** **Description:** Converts a JavaScript Date to a Luxon DateTime. The DateTime contains the same information, but is easier to manipulate. **Syntax:** *`Date`*.toDateTime() **Returns:** DateTime **Source:** Custom n8n functionality **Examples:** ``` // date = new Date("2024-03-30T18:49") date.toDateTime().plus(5, 'days') //=> 2024-04-04T18:49 ``` # DateTime ## *`DateTime`*.**`day`** **Description:** The day of the month (1-31) **Syntax:** *`DateTime`*.day **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.day //=> 30 ``` ## *`DateTime`*.**`diffTo()`** **Description:** Returns the difference between two DateTimes, in the given unit(s) **Syntax:** *`DateTime`*.diffTo(otherDateTime, unit) **Returns:** Number **Source:** Custom n8n functionality **Parameters:** - `otherDateTime` (String|DateTime) - The moment to subtract the base DateTime from. Can be an ISO date string or a Luxon DateTime.
- `unit` (String|Array) - optional - The unit, or array of units, to return the result in. Possible values: `years`, `months`, `weeks`, `days`, `hours`, `minutes`, `seconds`, `milliseconds`. Singular unit names (e.g. `day`) are also supported. **Examples:** ``` // dt1 = "2024-03-30T18:49:07.234".toDateTime() dt1.diffTo('2025-01-01', 'days') //=> 276.21 ``` ``` // dt1 = "2024-03-30T18:49:07.234".toDateTime() // dt2 = "2025-01-01T00:00:00.000".toDateTime() dt1.diffTo(dt2, ['months', 'days']) //=> {"months": 9, "days": 1.22} ``` ## *`DateTime`*.**`diffToNow()`** **Description:** Returns the difference between the current moment and the DateTime, in the given unit(s). For a textual representation, use `toRelative()` instead. **Syntax:** *`DateTime`*.diffToNow(unit) **Returns:** Number **Source:** Custom n8n functionality **Parameters:** - `unit` (String|Array) - optional - The unit, or array of units, to return the result in. Possible values: `years`, `months`, `weeks`, `days`, `hours`, `minutes`, `seconds`, `milliseconds`. Singular unit names (e.g. `day`) are also supported. **Examples:** ``` // dt = "2023-03-30T18:49:07.234".toDateTime() dt.diffToNow('days') //=> 371.9 ``` ``` // dt = "2023-03-30T18:49:07.234".toDateTime() dt.diffToNow(['months', 'days']) //=> {"months":12, "days":5.9} ``` ## *`DateTime`*.**`endOf()`** **Description:** Rounds the DateTime up to the end of one of its units, e.g. the end of the month **Syntax:** *`DateTime`*.endOf(unit, opts) **Returns:** DateTime **Type:** Luxon **Parameters:** - `unit` (String) - The unit to round to the end of. Can be `year`, `quarter`, `month`, `week`, `day`, `hour`, `minute`, `second`, or `millisecond`. - `opts` (Object) - optional - Object with options that affect the output. Possible properties: `useLocaleWeeks` (boolean): Whether to use the locale when calculating the start of the week. Defaults to false.
**Examples:** ``` // dt = "2024-03-20T18:49".toDateTime() dt.endOf('month') //=> 2024-03-31T23:59 ``` ## *`DateTime`*.**`equals()`** **Description:** Returns `true` if the two DateTimes represent exactly the same moment and are in the same time zone. For a less strict comparison, use `hasSame()`. **Syntax:** *`DateTime`*.equals(other) **Returns:** Boolean **Type:** Luxon **Parameters:** - `other` (DateTime) - The other DateTime to compare **Examples:** ``` // dt1 = "2024-03-20T18:49+01:00".toDateTime() // dt2 = "2024-03-20T19:49+02:00".toDateTime() dt1.equals(dt2) //=> false ``` ## *`DateTime`*.**`extract()`** **Description:** Extracts a part of the date or time, e.g. the month, as a number. To extract textual names instead, see `format()`. **Syntax:** *`DateTime`*.extract(unit?) **Returns:** Number **Source:** Custom n8n functionality **Parameters:** - `unit` (String) - optional - The part of the date or time to return. One of: `year`, `month`, `week`, `day`, `hour`, `minute`, `second` **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.extract('month') //=> 3 ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.extract('hour') //=> 18 ``` ## *`DateTime`*.**`format()`** **Description:** Converts the DateTime to a string, using the format specified. [Formatting guide](https://moment.github.io/luxon/#/formatting?id=table-of-tokens). For common formats, `toLocaleString()` may be easier. **Syntax:** *`DateTime`*.format(fmt) **Returns:** String **Source:** Custom n8n functionality **Parameters:** - `fmt` (String) - The [format](https://moment.github.io/luxon/#/formatting?id=table-of-tokens) of the string to return **Examples:** ``` // dt = "2024-04-30T18:49".toDateTime() dt.format('dd/LL/yyyy') //=> '30/04/2024' ``` ``` // dt = "2024-04-30T18:49".toDateTime() dt.format('dd LLL yy') //=> '30 Apr 24' dt.setLocale('fr').format('dd LLL yyyy') //=> '30 avr. 
2024' dt.format("HH 'hours and' mm 'minutes'") //=> '18 hours and 49 minutes' ``` ## *`DateTime`*.**`hasSame()`** **Description:** Returns `true` if the two DateTimes are the same, down to the unit specified. Time zones are ignored (only local times are compared), so use `toUTC()` first if needed. **Syntax:** *`DateTime`*.hasSame(otherDateTime, unit) **Returns:** Boolean **Type:** Luxon **Parameters:** - `otherDateTime` (DateTime) - The other DateTime to compare - `unit` (String) - The unit of time to check sameness down to. One of `year`, `quarter`, `month`, `week`, `day`, `hour`, `minute`, `second`, or `millisecond`. **Examples:** ``` // dt1 = "2024-03-20".toDateTime() // dt2 = "2024-03-18".toDateTime() dt1.hasSame(dt2, 'month') //=> true ``` ``` // dt1 = "1982-03-20".toDateTime() // dt2 = "2024-03-18".toDateTime() dt1.hasSame(dt2, 'month') //=> false ``` ## *`DateTime`*.**`hour`** **Description:** The hour of the day (0-23) **Syntax:** *`DateTime`*.hour **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.hour //=> 18 ``` ## *`DateTime`*.**`isBetween()`** **Description:** Returns `true` if the DateTime lies between the two moments specified **Syntax:** *`DateTime`*.isBetween(date1, date2) **Returns:** Boolean **Source:** Custom n8n functionality **Parameters:** - `date1` (String|DateTime) - The moment that the base DateTime must be after. Can be an ISO date string or a Luxon DateTime. - `date2` (String|DateTime) - The moment that the base DateTime must be before. Can be an ISO date string or a Luxon DateTime. 
**Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.isBetween('2020-06-01', '2025-06-01') //=> true ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.isBetween('2020', $now) //=> true ``` ## *`DateTime`*.**`isInDST`** **Description:** Whether the DateTime is in daylight saving time **Syntax:** *`DateTime`*.isInDST **Returns:** Boolean **Type:** Luxon ## *`DateTime`*.**`locale`** **Description:** The locale of a DateTime, such as 'en-GB'. The locale is used when formatting the DateTime. **Syntax:** *`DateTime`*.locale **Returns:** String **Type:** Luxon **Examples:** ``` $now.locale //=> 'en-US' ``` ## *`DateTime`*.**`millisecond`** **Description:** The millisecond of the second (0-999) **Syntax:** *`DateTime`*.millisecond **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49:07.234".toDateTime() dt.millisecond //=> 234 ``` ## *`DateTime`*.**`minus()`** **Description:** Subtracts a given period of time from the DateTime **Syntax:** *`DateTime`*.minus(n, unit?) **Returns:** DateTime **Source:** Custom n8n functionality **Parameters:** - `n` (Number|Object) - The number of units to subtract. Or use a Luxon [Duration](https://moment.github.io/luxon/api-docs/index.html#duration) object to subtract multiple units at once. - `unit` (String) - optional - The units of the number.
One of: `years`, `months`, `weeks`, `days`, `hours`, `minutes`, `seconds`, `milliseconds` **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.minus(7, 'days') //=> 2024-03-23T18:49 ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.minus(4, 'years') //=> 2020-03-30T18:49 ``` ## *`DateTime`*.**`minute`** **Description:** The minute of the hour (0-59) **Syntax:** *`DateTime`*.minute **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.minute //=> 49 ``` ## *`DateTime`*.**`month`** **Description:** The month (1-12) **Syntax:** *`DateTime`*.month **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.month //=> 3 ``` ## *`DateTime`*.**`monthLong`** **Description:** The textual long month name, e.g. 'October'. Defaults to the system's locale if no locale has been specified. **Syntax:** *`DateTime`*.monthLong **Returns:** String **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.monthLong //=> 'March' ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.setLocale('de-DE').monthLong //=> 'März' ``` ## *`DateTime`*.**`monthShort`** **Description:** The textual abbreviated month name, e.g. 'Oct'. Defaults to the system's locale if no locale has been specified. **Syntax:** *`DateTime`*.monthShort **Returns:** String **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.monthShort //=> 'Mar' ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.setLocale('de-DE').monthShort //=> 'Mär' ``` ## *`DateTime`*.**`plus()`** **Description:** Adds a given period of time to the DateTime **Syntax:** *`DateTime`*.plus(n, unit?) **Returns:** DateTime **Source:** Custom n8n functionality **Parameters:** - `n` (Number|Object) - The number of units to add. Or use a Luxon [Duration](https://moment.github.io/luxon/api-docs/index.html#duration) object to add multiple units at once. - `unit` (String) - optional - The units of the number.
One of: `years`, `months`, `weeks`, `days`, `hours`, `minutes`, `seconds`, `milliseconds` **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.plus(7, 'days') //=> 2024-04-06T18:49 ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.plus(4, 'years') //=> 2028-03-30T18:49 ``` ## *`DateTime`*.**`quarter`** **Description:** The quarter of the year (1-4) **Syntax:** *`DateTime`*.quarter **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.quarter //=> 1 ``` ## *`DateTime`*.**`second`** **Description:** The second of the minute (0-59) **Syntax:** *`DateTime`*.second **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49:07.234".toDateTime() dt.second //=> 7 ``` ## *`DateTime`*.**`set()`** **Description:** Assigns new values to specified units of the DateTime. To round a DateTime, see also `startOf()` and `endOf()`. **Syntax:** *`DateTime`*.set(values) **Returns:** DateTime **Type:** Luxon **Parameters:** - `values` (Object) - An object containing the units to set and corresponding values to assign. Possible keys are `year`, `month`, `day`, `hour`, `minute`, `second` and `millisecond`. **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.set({year:1982, month:10}) //=> 1982-10-30T18:49 ``` ## *`DateTime`*.**`setLocale()`** **Description:** Sets the locale, which determines the language and formatting for the DateTime. Useful when generating a textual representation of the DateTime, e.g. with `format()` or `toLocaleString()`. **Syntax:** *`DateTime`*.setLocale(locale) **Returns:** DateTime **Type:** Luxon **Parameters:** - `locale` (String) - The locale to assign, e.g. ‘en-GB’ for British English or ‘pt-BR’ for Brazilian Portuguese. [List](https://www.localeplanet.com/icu/) (unofficial) **Examples:** ``` $now.setLocale('de-DE').toLocaleString({'dateStyle':'long'}) //=> 5.
Oktober 2024 ``` ``` $now.setLocale('fr-FR').toLocaleString({'dateStyle':'long'}) //=> 5 octobre 2024 ``` ## *`DateTime`*.**`setZone()`** **Description:** Converts the DateTime to the given time zone. The DateTime still represents the same moment unless specified in the options. See also `toLocal()` and `toUTC()`. **Syntax:** *`DateTime`*.setZone(zone, opts) **Returns:** DateTime **Type:** Luxon **Parameters:** - `zone` (String) - optional - A zone identifier, either in the format ‘America/New_York’, 'UTC+3', or the strings 'local' or 'utc' - `opts` (Object) - optional - Options that affect the output. Possible properties: `keepCalendarTime` (boolean): Whether to keep the time the same and only change the offset. Defaults to false. **Examples:** ``` // dt = "2024-01-01T00:00:00.000+02:00".toDateTime() dt.setZone('America/Buenos_Aires') //=> 2023-12-31T19:00:00.000-03:00 ``` ``` // dt = "2024-01-01T00:00:00.000+02:00".toDateTime() dt.setZone('UTC+7') //=> 2024-01-01T05:00:00.000+07:00 ``` ## *`DateTime`*.**`startOf()`** **Description:** Rounds the DateTime down to the beginning of one of its units, e.g. the start of the month **Syntax:** *`DateTime`*.startOf(unit, opts) **Returns:** DateTime **Type:** Luxon **Parameters:** - `unit` (String) - The unit to round to the beginning of. One of `year`, `quarter`, `month`, `week`, `day`, `hour`, `minute`, `second`, or `millisecond`. - `opts` (Object) - optional - Object with options that affect the output. Possible properties: `useLocaleWeeks` (boolean): Whether to use the locale when calculating the start of the week. Defaults to false. **Examples:** ``` // dt = "2024-03-20T18:49".toDateTime() dt.startOf('month') //=> 2024-03-01T00:00 ``` ## *`DateTime`*.**`toISO()`** **Description:** Returns an ISO 8601-compliant string representation of the DateTime **Syntax:** *`DateTime`*.toISO(opts) **Returns:** String **Type:** Luxon **Parameters:** - `opts` (Object) - optional - Configuration options.
See [Luxon docs](https://moment.github.io/luxon/api-docs/index.html#datetimetoiso) for more info. **Examples:** ``` $now.toISO() //=> 2024-04-05T18:44:55.525+02:00 ``` ## *`DateTime`*.**`toLocal()`** **Description:** Converts a DateTime to the workflow’s local time zone. The DateTime still represents the same moment unless specified in the parameters. The workflow’s time zone can be set in the workflow settings. **Syntax:** *`DateTime`*.toLocal() **Returns:** DateTime **Type:** Luxon **Examples:** ``` // dt = "2024-01-01T00:00:00.000Z".toDateTime() dt.toLocal() //=> 2024-01-01T01:00:00.000+01:00, if time zone is Europe/Berlin ``` ## *`DateTime`*.**`toLocaleString()`** **Description:** Returns a localised string representing the DateTime, i.e. in the language and format corresponding to its locale. Defaults to the system's locale if none specified. **Syntax:** *`DateTime`*.toLocaleString(formatOpts) **Returns:** String **Type:** Luxon **Parameters:** - `formatOpts` (Object) - optional - Configuration options for the rendering. See [Intl.DateTimeFormat](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/DateTimeFormat/DateTimeFormat#parameters) for a full list. Defaults to rendering a short date.
**Examples:** ``` $now.toLocaleString() //=> '4/30/2024' $now.toLocaleString({'dateStyle':'medium', 'timeStyle':'short'}) //=> 'Apr 30, 2024, 10:00 PM' // (if in US English locale) ``` ``` $now.setLocale('de-DE').toLocaleString() //=> '30.4.2024' ``` ``` $now.toLocaleString({'dateStyle':'short'}) //=> '4/30/2024' $now.toLocaleString({'dateStyle':'medium'}) //=> 'Apr 30, 2024' $now.toLocaleString({'dateStyle':'long'}) //=> 'April 30, 2024' $now.toLocaleString({'dateStyle':'full'}) //=> 'Tuesday, April 30, 2024' // (if in US English locale) ``` ``` $now.toLocaleString({'year':'numeric', 'month':'numeric', 'day':'numeric'}) //=> '4/30/2024' $now.toLocaleString({'year':'2-digit', 'month':'2-digit', 'day':'2-digit'}) //=> '04/30/24' $now.toLocaleString({'month':'short', 'weekday':'short', 'day':'numeric'}) //=> 'Tue, Apr 30' $now.toLocaleString({'month':'long', 'weekday':'long', 'day':'numeric'}) //=> 'Tuesday, April 30' // (if in US English locale) ``` ``` $now.toLocaleString({'timeStyle':'short'}) //=> '10:00 PM' $now.toLocaleString({'timeStyle':'medium'}) //=> '10:00:58 PM' $now.toLocaleString({'timeStyle':'long'}) //=> '10:00:58 PM GMT+2' $now.toLocaleString({'timeStyle':'full'}) //=> '10:00:58 PM Central European Summer Time' // (if in US English locale) ``` ``` $now.toLocaleString({'hour':'numeric', 'minute':'numeric', hourCycle:'h24'}) //=> '22:00' $now.toLocaleString({'hour':'2-digit', 'minute':'2-digit', hourCycle:'h12'}) //=> '10:00 PM' // (if in US English locale) ``` ## *`DateTime`*.**`toMillis()`** **Description:** Returns a Unix timestamp in milliseconds (the number of milliseconds elapsed since 1 January 1970) **Syntax:** *`DateTime`*.toMillis() **Returns:** Number **Type:** Luxon **Examples:** ``` $now.toMillis() //=> 1712334324677 ``` ## *`DateTime`*.**`toRelative()`** **Description:** Returns a textual representation of the time relative to now, e.g. ‘in two days’. Rounds down by default.
**Syntax:** *`DateTime`*.toRelative(options) **Returns:** String **Type:** Luxon **Parameters:** - `options` (Object) - optional - Options that affect the output. Possible properties: `unit` = the unit to default to (`years`, `months`, `days`, etc.). `locale` = the language and formatting to use (e.g. `de`, `fr`) **Examples:** ``` $now.plus(1, 'day').toRelative() //=> "in 1 day" ``` ``` $now.plus(1, 'day').toRelative({unit:'hours'}) //=> "in 24 hours" ``` ``` $now.plus(1, 'day').toRelative({locale:'es'}) //=> "dentro de 1 día" ``` ## *`DateTime`*.**`toSeconds()`** **Description:** Returns a Unix timestamp in seconds (the number of seconds elapsed since 1 January 1970) **Syntax:** *`DateTime`*.toSeconds() **Returns:** Number **Type:** Luxon **Examples:** ``` $now.toSeconds() //=> 1712334442.372 ``` ## *`DateTime`*.**`toString()`** **Description:** Returns a string representation of the DateTime. Similar to `toISO()`. For more formatting options, see `format()` or `toLocaleString()`. **Syntax:** *`DateTime`*.toString() **Returns:** string **Type:** Luxon **Examples:** ``` $now.toString() //=> 2024-04-05T18:44:55.525+02:00 ``` ## *`DateTime`*.**`toUTC()`** **Description:** Converts a DateTime to the UTC time zone. The DateTime still represents the same moment unless specified in the parameters. Use `setZone()` to convert to other zones. **Syntax:** *`DateTime`*.toUTC(offset, opts) **Returns:** DateTime **Type:** Luxon **Parameters:** - `offset` (Number) - optional - An offset from UTC in minutes - `opts` (Object) - optional - Object with options that affect the output. Possible properties: `keepCalendarTime` (boolean): Whether to keep the time the same and only change the offset. Defaults to false. **Examples:** ``` // dt = "2024-01-01T00:00:00.000+02:00".toDateTime() dt.toUTC() //=> 2023-12-31T22:00:00.000Z ``` ## *`DateTime`*.**`weekday`** **Description:** The day of the week. 1 is Monday and 7 is Sunday.
**Syntax:** *`DateTime`*.weekday **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.weekday //=> 6 ``` ## *`DateTime`*.**`weekdayLong`** **Description:** The textual long weekday name, e.g. 'Wednesday'. Defaults to the system's locale if no locale has been specified. **Syntax:** *`DateTime`*.weekdayLong **Returns:** String **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.weekdayLong //=> 'Saturday' ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.setLocale('de-DE').weekdayLong //=> 'Samstag' ``` ## *`DateTime`*.**`weekdayShort`** **Description:** The textual abbreviated weekday name, e.g. 'Wed'. Defaults to the system's locale if no locale has been specified. **Syntax:** *`DateTime`*.weekdayShort **Returns:** String **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.weekdayShort //=> 'Sat' ``` ``` // dt = "2024-03-30T18:49".toDateTime() dt.setLocale('fr-FR').weekdayShort //=> 'sam.' ``` ## *`DateTime`*.**`weekNumber`** **Description:** The week number of the year (1-52ish) **Syntax:** *`DateTime`*.weekNumber **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.weekNumber //=> 13 ``` ## *`DateTime`*.**`year`** **Description:** The year **Syntax:** *`DateTime`*.year **Returns:** Number **Type:** Luxon **Examples:** ``` // dt = "2024-03-30T18:49".toDateTime() dt.year //=> 2024 ``` ## *`DateTime`*.**`zone`** **Description:** The time zone associated with the DateTime **Syntax:** *`DateTime`*.zone **Returns:** Object **Type:** Luxon **Examples:** ``` $now.zone //=> {"zoneName": "Europe/Berlin", "valid": true} ``` # ExecData ## `$exec`.**`customData`** **Description:** Set and get custom execution data (e.g. to filter executions by). You can also do this with the ‘Execution Data’ node. 
[More info](/workflows/executions/custom-executions-data/) **Syntax:** `$exec`.**`customData`** **Returns:** CustomData **Source:** Custom n8n functionality ## `$exec`.**`id`** **Description:** The ID of the current workflow execution **Syntax:** `$exec`.**`id`** **Returns:** String **Source:** Custom n8n functionality ## `$exec`.**`mode`** **Description:** One of three values: `test` (the execution was triggered by clicking a button in n8n), `production` (the execution was triggered automatically), or `evaluation` (the execution is part of a workflow test). **Syntax:** `$exec`.**`mode`** **Returns:** String **Source:** Custom n8n functionality ## `$exec`.**`resumeFormUrl`** **Description:** The URL to access a form generated by the [‘Wait’ node](/integrations/builtin/core-nodes/n8n-nodes-base.wait/). **Syntax:** `$exec`.**`resumeFormUrl`** **Returns:** String **Source:** Custom n8n functionality ## `$exec`.**`resumeUrl`** **Description:** The webhook URL to call to resume a workflow waiting at a [‘Wait’ node](/integrations/builtin/core-nodes/n8n-nodes-base.wait/). **Syntax:** `$exec`.**`resumeUrl`** **Returns:** String **Source:** Custom n8n functionality # HTTPResponse ## `$response`.**`body`** **Description:** The body of the response object from the last HTTP call. Only available in the ‘HTTP Request’ node. **Syntax:** `$response`.**`body`** **Returns:** Object **Source:** Custom n8n functionality ## `$response`.**`headers`** **Description:** The headers returned by the last HTTP call. Only available in the ‘HTTP Request’ node. **Syntax:** `$response`.**`headers`** **Returns:** Object **Source:** Custom n8n functionality ## `$response`.**`statusCode`** **Description:** The HTTP status code returned by the last HTTP call. Only available in the ‘HTTP Request’ node.
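A common pattern is branching on the class of the status code. A minimal plain-JavaScript sketch, where `code` stands in for a value read from `$response.statusCode`:

```javascript
// Classify an HTTP status code by its class, e.g. a value that would come
// from $response.statusCode in the HTTP Request node.
const isSuccess = (code) => code >= 200 && code < 300;
const isClientError = (code) => code >= 400 && code < 500;

console.log(isSuccess(200));     // true
console.log(isClientError(404)); // true
console.log(isSuccess(500));     // false
```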
**Syntax:** `$response`.**`statusCode`** **Returns:** Number **Source:** Custom n8n functionality ## `$response`.**`statusMessage`** **Description:** An optional message regarding the request status. Only available in the ‘HTTP Request’ node. **Syntax:** `$response`.**`statusMessage`** **Returns:** String **Source:** Custom n8n functionality # Item ## `$item`.**`binary`** **Description:** Returns any binary data the item contains **Syntax:** `$item`.**`binary`** **Returns:** Array **Source:** Custom n8n functionality ## `$item`.**`json`** **Description:** Returns the JSON data the item contains. [More info](/data/data-structure/) **Syntax:** `$item`.**`json`** **Returns:** Object **Source:** Custom n8n functionality # NodeInputData ## `$input`.**`all()`** **Description:** Returns an array of the current node’s input items **Syntax:** `$input`.all(branchIndex?, runIndex?) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `branchIndex` (Number) - optional - The output branch index of the node to use. Defaults to the first branch (index 0) - `runIndex` (Number) - optional - The run of the node to use. Defaults to the first run (index 0) ## `$input`.**`first()`** **Description:** Returns the current node’s first input item **Syntax:** `$input`.first(branchIndex?, runIndex?) **Returns:** Item **Source:** Custom n8n functionality **Parameters:** - `branchIndex` (Number) - optional - The output branch index of the node to use. Defaults to the first branch (index 0) - `runIndex` (Number) - optional - The run of the node to use. Defaults to the first run (index 0) ## `$input`.**`item`** **Description:** Returns the input item currently being processed **Syntax:** `$input`.**`item`** **Returns:** Item **Source:** Custom n8n functionality ## `$input`.**`last()`** **Description:** Returns the current node’s last input item **Syntax:** `$input`.last(branchIndex?, runIndex?)
**Returns:** Item **Source:** Custom n8n functionality **Parameters:** - `branchIndex` (Number) - optional - The output branch index of the node to use. Defaults to the first branch (index 0) - `runIndex` (Number) - optional - The run of the node to use. Defaults to the first run (index 0) ## `$input`.**`params`** **Description:** The configuration settings of the current node. These are the parameters you fill out within the node when configuring it (e.g. its operation). **Syntax:** `$input`.**`params`** **Returns:** NodeParams **Source:** Custom n8n functionality # NodeOutputData ## `$()`.**`all()`** **Description:** Returns an array of the node’s output items **Syntax:** `$()`.all(branchIndex?, runIndex?) **Returns:** Array **Source:** Custom n8n functionality **Parameters:** - `branchIndex` (Number) - optional - The output branch of the node to use. Defaults to the first branch (index 0) - `runIndex` (Number) - optional - The run of the node to use. Defaults to the first run (index 0) ## `$()`.**`first()`** **Description:** Returns the first item output by the node **Syntax:** `$()`.first(branchIndex?, runIndex?) **Returns:** Item **Source:** Custom n8n functionality **Parameters:** - `branchIndex` (Number) - optional - The output branch of the node to use. Defaults to the first branch (index 0) - `runIndex` (Number) - optional - The run of the node to use. Defaults to the first run (index 0) ## `$()`.**`isExecuted`** **Description:** Is `true` if the node has executed, `false` otherwise **Syntax:** `$()`.**`isExecuted`** **Returns:** Boolean **Source:** Custom n8n functionality ## `$()`.**`item`** **Description:** Returns the matching item, i.e. the one used to produce the current item in the current node. [More info](/data/data-mapping/data-item-linking/) **Syntax:** `$()`.**`item`** **Returns:** Item **Source:** Custom n8n functionality ## `$()`.**`itemMatching()`** **Description:** Returns the matching item, i.e.
the one used to produce the item in the current node at the specified index. [More info](/data/data-mapping/data-item-linking/) **Syntax:** `$()`.itemMatching(currentItemIndex?) **Returns:** Item **Source:** Custom n8n functionality **Parameters:** - `currentItemIndex` (Number) - The index of the item in the current node to be matched with. ## `$()`.**`last()`** **Description:** Returns the last item output by the node **Syntax:** `$()`.last(branchIndex?, runIndex?) **Returns:** Item **Source:** Custom n8n functionality **Parameters:** - `branchIndex` (Number) - optional - The output branch of the node to use. Defaults to the first branch (index 0) - `runIndex` (Number) - optional - The run of the node to use. Defaults to the first run (index 0) ## `$()`.**`params`** **Description:** The configuration settings of the given node. These are the parameters you fill out within the node’s UI (e.g. its operation). **Syntax:** `$()`.**`params`** **Returns:** NodeParams **Source:** Custom n8n functionality # Number ## *`Number`*.**`abs()`** **Description:** Returns the number’s absolute value, i.e. removes any minus sign **Syntax:** *`Number`*.abs() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // x = -1.7 x.abs() //=> 1.7 ``` ## *`Number`*.**`ceil()`** **Description:** Rounds the number up to the next whole number **Syntax:** *`Number`*.ceil() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // x = 1.234 x.ceil() //=> 2 ``` ## *`Number`*.**`floor()`** **Description:** Rounds the number down to the nearest whole number **Syntax:** *`Number`*.floor() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` // x = 1.234 x.floor() //=> 1 ``` ## *`Number`*.**`format()`** **Description:** Returns a formatted string representing the number. Useful for formatting for a specific language or currency.
The same as [`Intl.NumberFormat()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat). **Syntax:** *`Number`*.format(locale?, options?) **Returns:** String **Source:** Custom n8n functionality **Parameters:** - `locale` (String) - optional - A [locale tag](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl#locales_argument) for formatting the number, e.g. `fr-FR`, `en-GB`, `pt-BR` - `options` (Object) - optional - Configuration options for number formatting. [More info](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat) **Examples:** ``` // number = 123456.789; number.format('de-DE') //=> 123.456,789 ``` ``` // number = 123456.789; number.format('de-DE', {'style': 'currency', 'currency': 'EUR'}) //=> 123.456,79 € ``` ## *`Number`*.**`isEmpty()`** **Description:** Returns `false` for all numbers. Returns `true` for `null`. **Syntax:** *`Number`*.isEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // num = 10 num.isEmpty() // => false ``` ``` // num = 0 num.isEmpty() // => false ``` ``` // num = null num.isEmpty() // => true ``` ## *`Number`*.**`isEven()`** **Description:** Returns `true` if the number is even. Throws an error if the number isn’t a whole number. **Syntax:** *`Number`*.isEven() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // number = 33 number.isEven() //=> false ``` ## *`Number`*.**`isInteger()`** **Description:** Returns `true` if the number is a whole number **Syntax:** *`Number`*.isInteger() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // number = 4 number.isInteger() //=> true ``` ``` // number = 4.12 number.isInteger() //=> false ``` ## *`Number`*.**`isOdd()`** **Description:** Returns `true` if the number is odd.
Throws an error if the number isn’t a whole number. **Syntax:** *`Number`*.isOdd() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // number = 33 number.isOdd() //=> true ``` ## *`Number`*.**`round()`** **Description:** Returns the number rounded to the nearest whole number (or specified number of decimal places) **Syntax:** *`Number`*.round(decimalPlaces?) **Returns:** Number **Source:** Custom n8n functionality **Parameters:** - `decimalPlaces` (Number) - optional - The number of decimal places to round to **Examples:** ``` // number = 1.256 number.round() //=> 1 ``` ``` // number = 1.256 number.round(1) //=> 1.3 number.round(2) //=> 1.26 ``` ## *`Number`*.**`toBoolean()`** **Description:** Converts the number to a boolean value. `0` becomes `false`; everything else becomes `true`. **Syntax:** *`Number`*.toBoolean() **Source:** Custom n8n functionality **Examples:** ``` // number = 12 number.toBoolean() //=> true ``` ``` // number = 0 number.toBoolean() //=> false ``` ## *`Number`*.**`toDateTime()`** **Description:** Converts a numerical timestamp into a DateTime. The format of the timestamp must be specified if it’s not in milliseconds. Uses the time zone in n8n (or in the workflow’s settings). **Syntax:** *`Number`*.toDateTime(format?) **Returns:** DateTime **Source:** Custom n8n functionality **Parameters:** - `format` (String) - optional - The type of timestamp to convert. Options are `ms` (for Unix timestamp in milliseconds), `s` (for Unix timestamp in seconds) or `excel` (for days since 1900). **Examples:** ``` // ts = 1708695471 ts.toDateTime('s') //=> 2024-02-23T14:37:51+01:00 ``` ``` // ts = 1708695471000 ts.toDateTime('ms') //=> 2024-02-23T14:37:51+01:00 ``` ``` // ts = 45345 ts.toDateTime('excel') //=> 2024-02-23T01:00:00+01:00 ``` ## *`Number`*.**`toLocaleString()`** **Description:** Returns a localised string representing the number, i.e. in the language and format corresponding to its locale. 
Defaults to the system's locale if none specified. **Syntax:** *`Number`*.toLocaleString(locales?, options?) **Returns:** String **Source:** JavaScript function **Parameters:** - `locales` (String|Array) - optional - The locale to assign, e.g. ‘en-GB’ for British English or ‘pt-BR’ for Brazilian Portuguese. See [full list](https://www.localeplanet.com/icu/) (unofficial). Also accepts an array of locales. Defaults to the system locale if not specified. - `options` (Object) - optional - An object with [formatting options](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Intl/NumberFormat/NumberFormat#parameters) **Examples:** ``` // num = 500000.125 num.toLocaleString() //=> '500,000.125' (if in US English locale) ``` ``` // num = 500000.125 num.toLocaleString('fr-FR') //=> '500 000,125' ``` ``` // num = 500000.125 num.toLocaleString('fr-FR', {style:'currency', currency:'EUR'}) //=> '500 000,13 €' ``` ## *`Number`*.**`toString()`** **Description:** Converts the number to a simple textual representation. For more formatting options, see `toLocaleString()`. **Syntax:** *`Number`*.toString(radix?) **Returns:** String **Source:** JavaScript function **Parameters:** - `radix` (Number) - optional - The base to use. Must be an integer between 2 and 36. E.g. base `2` is binary and base `16` is hexadecimal. **Examples:** ``` // num = 500000.125 num.toString() //=> '500000.125' ``` ``` // num = 500000.125 num.toString(16) //=> '7a120.2' ``` # Object ## *`Object`*.**`compact()`** **Description:** Removes all fields that have empty values, i.e. are `null` or `""` **Syntax:** *`Object`*.compact() **Returns:** Object **Source:** Custom n8n functionality **Examples:** ``` // obj = {'x':null, 'y':2, 'z':''} obj.compact() //=> {'y':2} ``` ## *`Object`*.**`hasField()`** **Description:** Returns `true` if there is a field called `name`. Only checks top-level keys. Comparison is case-sensitive.
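`hasField()` behaves much like JavaScript's own own-property check. A plain-JavaScript sketch of the same case-sensitive, top-level test (the standalone `hasField` helper here is illustrative, not n8n code):

```javascript
// Case-sensitive check for a top-level key, mirroring n8n's hasField().
const hasField = (obj, name) => Object.prototype.hasOwnProperty.call(obj, name);

const person = { name: 'Nathan', age: 42 };
console.log(hasField(person, 'name')); // true
console.log(hasField(person, 'Name')); // false (case-sensitive)
```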
**Syntax:** *`Object`*.hasField(name) **Returns:** Boolean **Source:** Custom n8n functionality **Parameters:** - `name` (String) - The name of the key to search for **Examples:** ``` // obj = {'name':'Nathan', 'age':42} obj.hasField('name') //=> true ``` ``` // obj = {'name':'Nathan', 'age':42} obj.hasField('Name') //=> false obj.hasField('inventedField') //=> false ``` ## *`Object`*.**`isEmpty()`** **Description:** Returns `true` if the Object has no keys (fields) set or is `null` **Syntax:** *`Object`*.isEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // obj = {'name': 'Nathan'} obj.isEmpty() //=> false ``` ``` // obj = {} obj.isEmpty() //=> true ``` ## *`Object`*.**`isNotEmpty()`** **Description:** Returns `true` if the Object has at least one key (field) set **Syntax:** *`Object`*.isNotEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` // obj = {'name': 'Nathan'} obj.isNotEmpty() //=> true ``` ``` // obj = {} obj.isNotEmpty() //=> false ``` ## *`Object`*.**`keepFieldsContaining()`** **Description:** Removes any fields whose values don’t at least partly match the given `value`. Comparison is case-sensitive. Fields that aren’t strings will always be removed. **Syntax:** *`Object`*.keepFieldsContaining(value) **Returns:** Object **Source:** Custom n8n functionality **Parameters:** - `value` (String) - The text that a value must contain in order to be kept **Examples:** ``` // obj = {'name': 'Mr Nathan', 'city':'hanoi', age: 42 } obj.keepFieldsContaining('Nathan') //=> {'name': 'Mr Nathan'} ``` ``` // obj = {'name': 'Mr Nathan', 'city':'hanoi', age: 42 } obj.keepFieldsContaining('nathan') //=> {} obj.keepFieldsContaining('han') //=> {'name': 'Mr Nathan', 'city':'hanoi'} ``` ## *`Object`*.**`keys()`** **Description:** Returns an array with all the field names (keys) the object contains. The same as JavaScript’s `Object.keys(obj)`. 
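The substring-filtering helpers above can be approximated in plain JavaScript. This sketch assumes the behavior documented for `keepFieldsContaining()` (case-sensitive substring match, non-string fields removed); it is an illustration, not n8n's implementation:

```javascript
// Keep only string-valued top-level fields whose value contains the given
// text (case-sensitive), approximating n8n's keepFieldsContaining().
const keepFieldsContaining = (obj, value) =>
  Object.fromEntries(
    Object.entries(obj).filter(
      ([, v]) => typeof v === 'string' && v.includes(value)
    )
  );

const obj = { name: 'Mr Nathan', city: 'hanoi', age: 42 };
console.log(keepFieldsContaining(obj, 'han')); // { name: 'Mr Nathan', city: 'hanoi' }
console.log(keepFieldsContaining(obj, 'nathan')); // {} (case-sensitive)
```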
**Syntax:** *`Object`*.keys() **Returns:** Array **Source:** Custom n8n functionality **Examples:** ``` // obj = {'name': 'Mr Nathan', age: 42 } obj.keys() //=> ['name', 'age'] ``` ## *`Object`*.**`merge()`** **Description:** Merges the two Objects into a single one. If a key (field name) exists in both Objects, the value from the first (base) Object is used. **Syntax:** *`Object`*.merge(otherObject) **Returns:** Object **Source:** Custom n8n functionality **Parameters:** - `otherObject` (Object) - The Object to merge with the base Object. **Examples:** ``` // obj1 = {'name':'Nathan', 'age': 42} // obj2 = {'name':'Jan', 'city': 'hanoi'} obj1.merge(obj2) //=> {'name':'Nathan', 'age':42, 'city':'hanoi'} ``` ## *`Object`*.**`removeField()`** **Description:** Removes a field from the Object. The same as JavaScript’s `delete`. **Syntax:** *`Object`*.removeField(key) **Returns:** Object **Source:** Custom n8n functionality **Parameters:** - `key` (String) - The name of the field to remove **Examples:** ``` // obj = {'name':'Nathan', 'city':'hanoi'} obj.removeField('name') //=> {'city':'hanoi'} ``` ## *`Object`*.**`removeFieldsContaining()`** **Description:** Removes keys (fields) whose values at least partly match the given `value`. Comparison is case-sensitive. Fields that aren’t strings are always kept. **Syntax:** *`Object`*.removeFieldsContaining(value) **Returns:** Object **Source:** Custom n8n functionality **Parameters:** - `value` (String) - The text that a value must contain in order to be removed **Examples:** ``` // obj = {'name': 'Mr Nathan', 'city':'hanoi', age: 42} obj.removeFieldsContaining('Nathan') //=> {'city':'hanoi', age: 42} ``` ``` // obj = {'name': 'Mr Nathan', 'city':'hanoi', age: 42} obj.removeFieldsContaining('han') //=> {age: 42} obj.removeFieldsContaining('nathan') //=> {'name': 'Mr Nathan', 'city':'hanoi', age: 42} ``` ## *`Object`*.**`toJsonString()`** **Description:** Converts the Object to a JSON string.
Similar to JavaScript’s `JSON.stringify()`. **Syntax:** *`Object`*.toJsonString() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` // obj = {'name':'Nathan', age:42} obj.toJsonString() //=> '{"name":"Nathan","age":42}' ``` ## *`Object`*.**`urlEncode()`** **Description:** Generates a URL parameter string from the Object’s keys and values. Only top-level keys are supported. **Syntax:** *`Object`*.urlEncode() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` // obj = {'name':'Mr Nathan', 'city':'hanoi'} obj.urlEncode() //=> 'name=Mr+Nathan&city=hanoi' ``` ## *`Object`*.**`values()`** **Description:** Returns an array with all the values of the fields the Object contains. The same as JavaScript’s `Object.values(obj)`. **Syntax:** *`Object`*.values() **Returns:** Array **Source:** Custom n8n functionality **Examples:** ``` // obj = {'name': 'Mr Nathan', age: 42 } obj.values() //=> ['Mr Nathan', 42] ``` # PrevNodeData ## **`name`** **Description:** The name of the node that the current input came from. Always uses the current node’s first input connector if there is more than one (e.g. in the ‘Merge’ node). **Syntax:** **`name`** **Returns:** String **Source:** Custom n8n functionality ## **`outputIndex`** **Description:** The index of the output connector that the current input came from. Use this when the previous node had multiple outputs (such as an ‘If’ or ‘Switch’ node). Always uses the current node’s first input connector if there is more than one (e.g. in the ‘Merge’ node). **Syntax:** **`outputIndex`** **Returns:** Number **Source:** Custom n8n functionality ## **`runIndex`** **Description:** The run of the previous node that generated the current input. Always uses the current node’s first input connector if there is more than one (e.g. in the ‘Merge’ node). 
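Several of the Object helpers above correspond to standard JavaScript APIs; for instance, `urlEncode()` matches the serialization produced by `URLSearchParams`. A plain-JavaScript sketch (standard JS, not n8n code):

```javascript
// URL-encode top-level keys and values, as n8n's Object.urlEncode() does,
// using standard URLSearchParams serialization (spaces become '+').
const obj = { name: 'Mr Nathan', city: 'hanoi' };
console.log(new URLSearchParams(obj).toString()); // 'name=Mr+Nathan&city=hanoi'
```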
**Syntax:** **`runIndex`** **Returns:** Number **Source:** Custom n8n functionality # Root ## **`$()`** **Description:** Returns the data of the specified node **Syntax:** $(nodeName) **Returns:** NodeData **Source:** Custom n8n functionality **Parameters:** - `nodeName` (String) - The name of the node to retrieve data for ## **`$binary`** **Description:** Returns any binary input data to the current node, for the current item. Shorthand for `$input.item.binary`. **Syntax:** **`$binary`** **Returns:** Array **Source:** Custom n8n functionality ## **`$execution`** **Description:** Retrieve or set metadata for the current execution **Syntax:** **`$execution`** **Returns:** ExecData **Source:** Custom n8n functionality ## **`$fromAI()`** **Description:** Use when a large language model should provide the value of a node parameter. Consider providing a description for better results. **Syntax:** $fromAI(key, description?, type?, defaultValue?) **Returns:** any **Source:** Custom n8n functionality **Parameters:** - `key` (String) - The name of the field to fetch. May only contain letters, numbers, underscores and hyphens. - `description` (String) - optional - Use to give the model more context on exactly what it should return - `type` (String) - optional - The type of the value to return. One of `string`, `number`, `boolean`, `json`, `date`, `datetime`. Defaults to `string`. 
- `defaultValue` (any) - optional - A value to use if the model doesn’t return the key **Examples:** ``` // Ask the model to provide a name, and use it here $fromAI('name') ``` ``` // Ask the model to provide the age of the person (as a number with a default value of 18), and use it here $fromAI('age', 'The age of the person', 'number', 18) ``` ``` // Ask the model to provide a boolean signifying whether the person is a student (with default value false), and use it here $fromAI('isStudent', 'Is the person a student', 'boolean', false) ``` ## **`$if()`** **Description:** Returns one of two values depending on the `condition`. Similar to the `?` operator in JavaScript. **Syntax:** $if(condition, valueIfTrue, valueIfFalse) **Returns:** any **Source:** Custom n8n functionality **Parameters:** - `condition` (Boolean) - The check to make. Should evaluate to either `true` or `false` - `valueIfTrue` (any) - The value to return if the condition is true - `valueIfFalse` (any) - The value to return if the condition is false **Examples:** ``` // Return "Good day" if time is before 5pm, otherwise "Good evening" $if($now.hour < 17, "Good day", "Good evening") ``` ``` // $if() calls can be combined: // Return "Good morning" if time is before 10am, "Good day" if it's before 5pm, otherwise "Good evening" $if($now.hour < 10, "Good morning", $if($now.hour < 17, "Good day", "Good evening")) ``` ## **`$ifEmpty()`** **Description:** Returns the first parameter if it isn’t empty, otherwise returns the second parameter. The following count as empty: `""`, `[]`, `{}`, `null`, `undefined` **Syntax:** $ifEmpty(value, valueIfEmpty) **Returns:** any **Source:** Custom n8n functionality **Parameters:** - `value` (any) - The value to return, provided it isn’t empty - `valueIfEmpty` (any) - What to return if `value` is empty **Examples:** ``` "Hi " + $ifEmpty(name, "there") // e.g.
"Hi Nathan" or "Hi there" ``` ## **`$input`** **Description:** The input data of the current node **Syntax:** **`$input`** **Returns:** NodeData **Source:** Custom n8n functionality ## **`$itemIndex`** **Description:** The position of the item currently being processed in the list of input items **Syntax:** **`$itemIndex`** **Returns:** Number **Source:** Custom n8n functionality ## **`$jmespath()`** **Description:** Extracts data from an object (or array of objects) using a [JMESPath](/code/cookbook/jmespath/) expression. Useful for querying complex, nested objects. Returns `undefined` if the expression is invalid. **Syntax:** $jmespath(obj, expression) **Returns:** any **Source:** Custom n8n functionality **Parameters:** - `obj` (Object|Array) - The Object or array of Objects to retrieve data from - `expression` (String) - A [JMESPath expression](https://jmespath.org/examples.html) defining the data to retrieve from the object **Examples:** ``` data = { "people": [ { "age": 20, "other": "foo", "name": "Bob" }, { "age": 25, "other": "bar", "name": "Fred" }, { "age": 30, "other": "baz", "name": "George" } ] } // Get all names, in an array $jmespath(data.people, '[*].name') //=> ["Bob", "Fred", "George"] // Get the names and ages of everyone over 20 $jmespath(data.people, '[?age > `20`].[name, age]') //=> [ ["Fred",25], ["George",30] ] // Get the name of the first person over 20 $jmespath(data.people, '[?age > `20`].name | [0]') //=> "Fred" ``` ``` data = { "reservations": [ { "id": 1, "guests": [ { "name": "Nathan", "requirements": { "room": "double", "meal": "vegetarian" } }, { "name": "Meg", "requirements": { "room": "single" } } ] }, { "id": 2, "guests": [ { "name": "Lex", "requirements": { "room": "double" } } ] } ] } // Get the names of all the guests in each reservation that require a double room $jmespath(data, 'reservations[].guests[?requirements.room==`double`].name') ``` ## **`$json`** **Description:** Returns the JSON input
data to the current node, for the current item. Shorthand for `$input.item.json`. [More info](/data/data-structure/) **Syntax:** **`$json`** **Returns:** Object **Source:** Custom n8n functionality ## **`$max()`** **Description:** Returns the highest of the given numbers **Syntax:** $max(num1, num2, …, numN) **Returns:** Number **Source:** Custom n8n functionality **Parameters:** - `num1` (Number) - The first number to compare - `num2` (Number) - The second number to compare ## **`$min()`** **Description:** Returns the lowest of the given numbers **Syntax:** $min(num1, num2, …, numN) **Returns:** Number **Source:** Custom n8n functionality **Parameters:** - `num1` (Number) - The first number to compare - `num2` (Number) - The second number to compare ## **`$nodeVersion`** **Description:** The version of the current node (as displayed at the bottom of the node’s settings pane) **Syntax:** **`$nodeVersion`** **Returns:** String **Source:** Custom n8n functionality ## **`$now`** **Description:** A DateTime representing the current moment. Uses the workflow’s time zone (which can be changed in the workflow settings). **Syntax:** **`$now`** **Returns:** DateTime **Source:** Custom n8n functionality ## **`$pageCount`** **Description:** The number of results pages the node has fetched. Only available in the ‘HTTP Request’ node. **Syntax:** **`$pageCount`** **Returns:** Number **Source:** Custom n8n functionality ## **`$parameter`** **Description:** The configuration settings of the current node. These are the parameters you fill out within the node’s UI (e.g. its operation). **Syntax:** **`$parameter`** **Returns:** NodeParams **Source:** Custom n8n functionality ## **`$prevNode`** **Description:** Information about the node that the current input came from. When in a ‘Merge’ node, always uses the first input connector.
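The `$max()` and `$min()` helpers above mirror JavaScript's `Math.max()` and `Math.min()`; a quick plain-JavaScript illustration:

```javascript
// n8n's $max()/$min() behave like the standard Math.max()/Math.min().
console.log(Math.max(1, 5, 3)); // 5
console.log(Math.min(1, 5, 3)); // 1
```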
**Syntax:** **`$prevNode`** **Returns:** PrevNodeData **Source:** Custom n8n functionality ## **`$request`** **Description:** The request object sent during the last run of the node. Only available in the ‘HTTP Request’ node. **Syntax:** **`$request`** **Returns:** Object **Source:** Custom n8n functionality ## **`$response`** **Description:** The response returned by the last HTTP call. Only available in the ‘HTTP Request’ node. **Syntax:** **`$response`** **Returns:** HTTPResponse **Source:** Custom n8n functionality ## **`$runIndex`** **Description:** The index of the current run of the current node execution. Starts at 0. **Syntax:** **`$runIndex`** **Returns:** Number **Source:** Custom n8n functionality ## **`$secrets`** **Description:** The secrets from an [external secrets vault](/external-secrets/), if configured. Secret values are never displayed to the user. Only available in credential fields. **Syntax:** **`$secrets`** **Returns:** Object **Source:** Custom n8n functionality ## **`$today`** **Description:** A DateTime representing midnight at the start of the current day. Uses the instance’s time zone (unless overridden in the workflow’s settings). 
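In plain JavaScript, the equivalent of `$today` — midnight at the start of the current day — can be sketched as follows (this uses the runtime's local time zone, whereas n8n applies the instance or workflow time-zone setting):

```javascript
// Midnight at the start of the current day, local time (what $today
// represents, subject to n8n's time-zone settings).
const now = new Date();
const today = new Date(now.getFullYear(), now.getMonth(), now.getDate());
console.log(today.getHours(), today.getMinutes(), today.getSeconds()); // 0 0 0
```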
**Syntax:** **`$today`** **Returns:** DateTime **Source:** Custom n8n functionality ## **`$vars`** **Description:** The [variables](/code/variables/) available to the workflow **Syntax:** **`$vars`** **Returns:** Object **Source:** Custom n8n functionality ## **`$workflow`** **Description:** Information about the current workflow **Syntax:** **`$workflow`** **Returns:** WorkflowData **Source:** Custom n8n functionality # String ## *`String`*.**`base64Decode()`** **Description:** Converts a base64-encoded string to plain text **Syntax:** *`String`*.base64Decode() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "aGVsbG8=".base64Decode() //=> "hello" ``` ## *`String`*.**`base64Encode()`** **Description:** Converts plain text to a base64-encoded string **Syntax:** *`String`*.base64Encode() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "hello".base64Encode() //=> "aGVsbG8=" ``` ## *`String`*.**`concat()`** **Description:** Joins one or more strings onto the end of the base string. Alternatively, use the `+` operator (see examples). **Syntax:** *`String`*.concat(string1, string2?, ..., stringN?) **Returns:** String **Source:** JavaScript function **Parameters:** - `string1` (String) - The first string to append - `string2` (String) - optional - The second string to append - `stringN` (String) - optional - The Nth string to append **Examples:** ``` 'sea'.concat('food') //=> 'seafood' 'sea' + 'food' //=> 'seafood' ``` ``` 'work'.concat('a', 'holic') //=> 'workaholic' ``` ## *`String`*.**`extractDomain()`** **Description:** If the string is an email address or URL, returns its domain (or `undefined` if nothing found). If the string also contains other content, try using `extractEmail()` or `extractUrl()` first.
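For URLs specifically, the standard JavaScript `URL` API extracts the host in a comparable way (a sketch of the underlying idea only — n8n's `extractDomain()` also handles email addresses):

```javascript
// Extract the host from a URL with the standard URL API, comparable in
// spirit to extractDomain() for URL inputs.
console.log(new URL('http://n8n.io/workflows').hostname); // 'n8n.io'
```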
**Syntax:** *`String`*.extractDomain() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "me@example.com".extractDomain() //=> 'example.com' ``` ``` "http://n8n.io/workflows".extractDomain() //=> 'n8n.io' ``` ``` "It's me@example.com".extractEmail().extractDomain() //=> 'example.com' ``` ## *`String`*.**`extractEmail()`** **Description:** Extracts the first email found in the string. Returns `undefined` if none is found. **Syntax:** *`String`*.extractEmail() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "My email is me@example.com".extractEmail() //=> 'me@example.com' ``` ## *`String`*.**`extractUrl()`** **Description:** Extracts the first URL found in the string. Returns `undefined` if none is found. Only recognizes full URLs, e.g. those starting with `http`. **Syntax:** *`String`*.extractUrl() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "Check out http://n8n.io".extractUrl() //=> 'http://n8n.io' ``` ## *`String`*.**`extractUrlPath()`** **Description:** Returns the part of a URL after the domain, or `undefined` if no URL found. If the string also contains other content, try using `extractUrl()` first. **Syntax:** *`String`*.extractUrlPath() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "http://n8n.io/workflows".extractUrlPath() //=> '/workflows' ``` ``` "Check out http://n8n.io/workflows".extractUrl().extractUrlPath() //=> '/workflows' ``` ## *`String`*.**`hash()`** **Description:** Returns the string hashed with the given algorithm. Defaults to md5 if not specified. **Syntax:** *`String`*.hash(algo?) **Returns:** String **Source:** Custom n8n functionality **Parameters:** - `algo` (String) - optional - The hashing algorithm to use. 
One of `md5`, `base64`, `sha1`, `sha224`, `sha256`, `sha384`, `sha512`, `sha3`, `ripemd160` **Examples:** ``` "hello".hash() //=> '5d41402abc4b2a76b9719d911017c592' ``` ## *`String`*.**`includes()`** **Description:** Returns `true` if the string contains the `searchString`. Case-sensitive. **Syntax:** *`String`*.includes(searchString, start?) **Returns:** Boolean **Source:** JavaScript function **Parameters:** - `searchString` (String) - The text to search for - `start` (Number) - optional - The position (index) to start searching from **Examples:** ``` 'team'.includes('tea') //=> true 'team'.includes('i') //=> false ``` ``` // Returns false if the case doesn't match, so consider using .toLowerCase() first 'team'.includes('Tea') //=> false 'Team'.toLowerCase().includes('tea') //=> true ``` ## *`String`*.**`indexOf()`** **Description:** Returns the index (position) of the first occurrence of `searchString` within the base string, or -1 if not found. Case-sensitive. **Syntax:** *`String`*.indexOf(searchString, start?) 
**Returns:** Number **Source:** JavaScript function **Parameters:** - `searchString` (String) - The text to search for - `start` (Number) - optional - The position (index) to start searching from **Examples:** ``` 'steam'.indexOf('tea') //=> 1 'steam'.indexOf('i') //=> -1 ``` ``` // Returns -1 if the case doesn't match, so consider using .toLowerCase() first 'STEAM'.indexOf('tea') //=> -1 'STEAM'.toLowerCase().indexOf('tea') //=> 1 ``` ## *`String`*.**`isDomain()`** **Description:** Returns `true` if the string is a domain **Syntax:** *`String`*.isDomain() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "n8n.io".isDomain() //=> true ``` ``` "http://n8n.io".isDomain() //=> false ``` ``` "hello".isDomain() //=> false ``` ## *`String`*.**`isEmail()`** **Description:** Returns `true` if the string is an email **Syntax:** *`String`*.isEmail() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "me@example.com".isEmail() //=> true ``` ``` "It's me@example.com".isEmail() //=> false ``` ``` "hello".isEmail() //=> false ``` ## *`String`*.**`isEmpty()`** **Description:** Returns `true` if the string has no characters or is `null` **Syntax:** *`String`*.isEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "".isEmpty() // => true ``` ``` "hello".isEmpty() // => false ``` ## *`String`*.**`isNotEmpty()`** **Description:** Returns `true` if the string has at least one character **Syntax:** *`String`*.isNotEmpty() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "hello".isNotEmpty() // => true ``` ``` "".isNotEmpty() // => false ``` ## *`String`*.**`isNumeric()`** **Description:** Returns `true` if the string represents a number **Syntax:** *`String`*.isNumeric() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "1.2234".isNumeric() // true ``` ``` "hello".isNumeric() // false ``` ``` "123E23".isNumeric() // true ``` ## *`String`*.**`isUrl()`** 
**Description:** Returns `true` if the string is a valid URL **Syntax:** *`String`*.isUrl() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "https://n8n.io".isUrl() //=> true ``` ``` "n8n.io".isUrl() //=> false ``` ``` "hello".isUrl() //=> false ``` ## *`String`*.**`length`** **Description:** The number of characters in the string **Syntax:** *`String`*.length **Returns:** Number **Source:** JavaScript function **Examples:** ``` "hello".length //=> 5 ``` ## *`String`*.**`match()`** **Description:** Matches the string against a [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions). Returns an array containing the first match, or all matches if the `g` flag is set in the regular expression. Returns `null` if no matches are found. For checking whether text is present, consider `includes()` instead. **Syntax:** *`String`*.match(regexp) **Returns:** Array **Source:** JavaScript function **Parameters:** - `regexp` (RegExp) - A [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions) with the pattern to look for. Will look for multiple matches if the `g` flag is present (see examples). **Examples:** ``` // Match all words starting with 'r' "rock and roll".match(/r[^ ]*/g) //=> ['rock', 'roll'] ``` ``` // Match first word starting with 'r' (no 'g' flag) "rock and roll".match(/r[^ ]*/) //=> ['rock'] ``` ``` // For case-insensitive, add 'i' flag "ROCK and roll".match(/r[^ ]*/ig) //=> ['ROCK', 'roll'] ``` ## *`String`*.**`parseJson()`** **Description:** Returns the JavaScript Object or value represented by the string, or `undefined` if the string isn’t valid JSON. Single-quoted JSON is not supported. 
**Syntax:** *`String`*.parseJson() **Returns:** any **Source:** Custom n8n functionality **Examples:** ``` '{"name":"Nathan"}'.parseJson() //=> {"name":"Nathan"} ``` ``` "{'name':'Nathan'}".parseJson() //=> undefined ``` ``` 'hello'.parseJson() //=> undefined ``` ## *`String`*.**`quote()`** **Description:** Wraps a string in quotation marks, and escapes any quotation marks already in the string. Useful when constructing JSON, SQL, etc. **Syntax:** *`String`*.quote(mark?) **Returns:** String **Source:** Custom n8n functionality **Parameters:** - `mark` (String) - optional - The type of quotation mark to use **Examples:** ``` 'Nathan says "hi"'.quote() //=> '"Nathan says \"hi\""' ``` ## *`String`*.**`removeMarkdown()`** **Description:** Removes any Markdown formatting from the string. Also removes HTML tags. **Syntax:** *`String`*.removeMarkdown() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "*bold*, [link]()".removeMarkdown() //=> "bold, link" ``` ## *`String`*.**`removeTags()`** **Description:** Removes tags, such as HTML or XML, from the string **Syntax:** *`String`*.removeTags() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "<b>bold</b>, <a>link</a>".removeTags() //=> "bold, link" ``` ## *`String`*.**`replace()`** **Description:** Returns a string with the first occurrence of `pattern` replaced by `replacement`. To replace all occurrences, use `replaceAll()` instead. **Syntax:** *`String`*.replace(pattern, replacement) **Returns:** String **Source:** JavaScript function **Parameters:** - `pattern` (String|RegExp) - The pattern in the string to replace. Can be a string to match or a [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions). 
- `replacement` (String) - The new text to replace with **Examples:** ``` 'Red or blue or green'.replace('or', 'and') //=> 'Red and blue or green' ``` ``` // A global, case-insensitive replacement: let text = "Mr Blue has a blue house and a blue car"; let result = text.replace(/blue/gi, "red"); ``` ``` // A function to return the replacement text: let text = "Mr Blue has a blue house and a blue car"; let result = text.replace(/blue|house|car/i, function (x) {   return x.toUpperCase(); }); ``` ## *`String`*.**`replaceAll()`** **Description:** Returns a string with all occurrences of `pattern` replaced by `replacement` **Syntax:** *`String`*.replaceAll(pattern, replacement) **Returns:** String **Source:** JavaScript function **Parameters:** - `pattern` (String|RegExp) - The pattern in the string to replace. Can be a string to match or a [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions). - `replacement` (String|function) - The new text to replace with. Can be a string or a function that returns a string (see examples). 
**Examples:** ``` 'Red or blue or green'.replaceAll('or', 'and') //=> 'Red and blue and green' ``` ``` // Uppercase any occurrences of 'blue' or 'car' // (You must include the 'g' flag when using a regex) // text = 'Mr Blue has a blue car' text.replaceAll(/blue|car/gi, x => x.toUpperCase()) //=> 'Mr BLUE has a BLUE CAR' // Or with traditional function notation: text.replaceAll(/blue|car/gi, function(x){return x.toUpperCase()}) //=> 'Mr BLUE has a BLUE CAR' ``` ## *`String`*.**`replaceSpecialChars()`** **Description:** Replaces special characters in the string with the closest ASCII character **Syntax:** *`String`*.replaceSpecialChars() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "déjà".replaceSpecialChars() //=> "deja" ``` ## *`String`*.**`search()`** **Description:** Returns the index (position) of the first occurrence of a pattern within the string, or -1 if not found. The pattern is specified using a [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions). To use text instead, see `indexOf()`. **Syntax:** *`String`*.search(regexp) **Returns:** Number **Source:** JavaScript function **Parameters:** - `regexp` (RegExp) - A [regular expression](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Guide/Regular_expressions) with the pattern to look for **Examples:** ``` // Pos of first word starting with 'n' "Neat n8n node".search(/n[^ ]*/) //=> 5 ``` ``` // Case-insensitive match with 'i' // Pos of first word starting with 'n' or 'N' "Neat n8n node".search(/n[^ ]*/i) //=> 0 ``` ## *`String`*.**`slice()`** **Description:** Extracts a fragment of the string at the given position. For more advanced extraction, see `match()`. **Syntax:** *`String`*.slice(start, end?) **Returns:** String **Source:** JavaScript function **Parameters:** - `start` (Number) - The position to start from. Positions start at 0. 
Negative numbers count back from the end of the string. - `end` (Number) - optional - The position to select up to. The character at the end position is not included. Negative numbers select from the end of the string. If omitted, will extract to the end of the string. **Examples:** ``` 'Hello from n8n'.slice(0, 5) //=> 'Hello' ``` ``` 'Hello from n8n'.slice(6) //=> 'from n8n' ``` ``` 'Hello from n8n'.slice(-3) //=> 'n8n' ``` ## *`String`*.**`split()`** **Description:** Splits the string into an array of substrings. Each split is made at the `separator`, and the separator isn’t included in the output. The opposite of using `join()` on an array. **Syntax:** *`String`*.split(separator?, limit?) **Returns:** Array **Source:** JavaScript function **Parameters:** - `separator` (String) - optional - The string (or regular expression) to use for splitting. If omitted, an array with the original string is returned. - `limit` (Number) - optional - The max number of array elements to return. Returns all elements if omitted. **Examples:** ``` "wind,fire,water".split(",") //=> ['wind', 'fire', 'water'] ``` ``` "me and you and her".split("and") //=> ['me ', ' you ', ' her'] ``` ``` // Split one or more of space, comma and '?' using a regular expression "me? you, and her".split(/[ ,?]+/) //=> ['me', 'you', 'and', 'her'] ``` ## *`String`*.**`startsWith()`** **Description:** Returns `true` if the string starts with `searchString`. Case-sensitive. **Syntax:** *`String`*.startsWith(searchString, start?) 
**Returns:** Boolean **Source:** JavaScript function **Parameters:** - `searchString` (String) - The text to check against the start of the base string - `start` (Number) - optional - The position (index) to start searching from **Examples:** ``` 'team'.startsWith('tea') //=> true 'team'.startsWith('Tea') //=> false ``` ``` // Returns false if the case doesn't match, so consider using .toLowerCase() first 'Team'.toLowerCase().startsWith('tea') //=> true ``` ## *`String`*.**`substring()`** **Description:** Extracts a fragment of the string at the given position. For more advanced extraction, see `match()`. **Syntax:** *`String`*.substring(start, end?) **Returns:** String **Source:** JavaScript function **Parameters:** - `start` (Number) - The position to start from. Positions start at 0. - `end` (Number) - optional - The position to select up to. The character at the end position is not included. If omitted, will extract to the end of the string. **Examples:** ``` 'Hello from n8n'.substring(0, 5) //=> 'Hello' ``` ``` 'Hello from n8n'.substring(6) //=> 'from n8n' ``` ## *`String`*.**`toBoolean()`** **Description:** Converts the string to a boolean value. `0`, `false` and `no` resolve to `false`, everything else to `true`. Case-insensitive. **Syntax:** *`String`*.toBoolean() **Returns:** Boolean **Source:** Custom n8n functionality **Examples:** ``` "true".toBoolean() //=> true ``` ``` "false".toBoolean() //=> false ``` ``` "0".toBoolean() //=> false ``` ``` "hello".toBoolean() //=> true ``` ## *`String`*.**`toDateTime()`** **Description:** Converts the string to a DateTime. Useful for further transformation. Supported formats for the string are ISO 8601, HTTP, RFC2822, SQL and Unix timestamp in milliseconds. To parse other formats, use [`DateTime.fromFormat()`](https://moment.github.io/luxon/api-docs/index.html#datetimefromformat). 
**Syntax:** *`String`*.toDateTime() **Returns:** DateTime **Source:** Custom n8n functionality **Examples:** ``` "2024-03-29T18:06:31.798+01:00".toDateTime() ``` ``` "Fri, 29 Mar 2024 18:08:01 +0100".toDateTime() ``` ``` "20240329".toDateTime() ``` ``` "1711732132990".toDateTime() ``` ## *`String`*.**`toJsonString()`** **Description:** Prepares the string to be inserted into a JSON object. Escapes any quotes and special characters (e.g. new lines), and wraps the string in quotes. The same as JavaScript’s `JSON.stringify()`. **Syntax:** *`String`*.toJsonString() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` // str = 'The "best" colours: red\nbrown' str.toJsonString() //=> '"The \\"best\\" colours: red\\nbrown"' ``` ## *`String`*.**`toLowerCase()`** **Description:** Converts all letters in the string to lower case **Syntax:** *`String`*.toLowerCase() **Returns:** String **Source:** JavaScript function **Examples:** ``` "I'm SHOUTing".toLowerCase() //=> "i'm shouting" ``` ## *`String`*.**`toNumber()`** **Description:** Converts a string representing a number to a number. Throws an error if the string doesn’t start with a valid number. **Syntax:** *`String`*.toNumber() **Returns:** Number **Source:** Custom n8n functionality **Examples:** ``` "123".toNumber() //=> 123 ``` ``` "1.23E10".toNumber() //=> 12300000000 ``` ## *`String`*.**`toSentenceCase()`** **Description:** Changes the capitalization of the string to sentence case. The first letter of each sentence is capitalized and all others are lowercased. **Syntax:** *`String`*.toSentenceCase() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "quick! brown FOX".toSentenceCase() //=> "Quick! Brown fox" ``` ## *`String`*.**`toSnakeCase()`** **Description:** Changes the format of the string to snake case. Spaces and dashes are replaced by `_`, symbols are removed and all letters are lowercased. 
**Syntax:** *`String`*.toSnakeCase() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "quick brown $FOX".toSnakeCase() //=> "quick_brown_fox" ``` ## *`String`*.**`toTitleCase()`** **Description:** Changes the capitalization of the string to title case. The first letter of each word is capitalized and the others left unchanged. Short prepositions and conjunctions aren’t capitalized (e.g. ‘a’, ‘the’). **Syntax:** *`String`*.toTitleCase() **Returns:** String **Source:** Custom n8n functionality **Examples:** ``` "quick a brown FOX".toTitleCase() //=> "Quick a Brown Fox" ``` ## *`String`*.**`toUpperCase()`** **Description:** Converts all letters in the string to upper case (capitals) **Syntax:** *`String`*.toUpperCase() **Returns:** String **Source:** JavaScript function **Examples:** ``` "I'm not angry".toUpperCase() //=> "I'M NOT ANGRY" ``` ## *`String`*.**`trim()`** **Description:** Removes whitespace from both ends of the string. Whitespace includes new lines, tabs, spaces, etc. **Syntax:** *`String`*.trim() **Returns:** String **Source:** JavaScript function **Examples:** ``` ' lonely '.trim() //=> 'lonely' ``` ## *`String`*.**`urlDecode()`** **Description:** Decodes a URL-encoded string. Replaces any character codes in the form of `%XX` with their corresponding characters. **Syntax:** *`String`*.urlDecode(allChars?) **Returns:** String **Source:** Custom n8n functionality **Parameters:** - `allChars` (Boolean) - optional - Whether to decode characters that are part of the URI syntax (e.g. `=`, `?`) **Examples:** ``` "name%3DNathan%20Automat".urlDecode() //=> "name%3DNathan Automat" ``` ``` "name%3DNathan%20Automat".urlDecode(true) //=> "name=Nathan Automat" ``` ## *`String`*.**`urlEncode()`** **Description:** Encodes the string so that it can be used in a URL. Spaces and special characters are replaced with codes of the form `%XX`. **Syntax:** *`String`*.urlEncode(allChars?) 
**Returns:** String **Source:** Custom n8n functionality **Parameters:** - `allChars` (Boolean) - optional - Whether to encode characters that are part of the URI syntax (e.g. `=`, `?`) **Examples:** ``` "name=Nathan Automat".urlEncode() //=> "name=Nathan%20Automat" ``` ``` "name=Nathan Automat".urlEncode(true) //=> "name%3DNathan%20Automat" ``` # WorkflowData ## `$workflow`.**`active`** **Description:** Whether the workflow is active **Syntax:** `$workflow`.**`active`** **Returns:** Boolean **Source:** Custom n8n functionality ## `$workflow`.**`id`** **Description:** The workflow ID. Can also be found in the workflow’s URL. **Syntax:** `$workflow`.**`id`** **Returns:** String **Source:** Custom n8n functionality ## `$workflow`.**`name`** **Description:** The name of the workflow, as shown at the top of the editor **Syntax:** `$workflow`.**`name`** **Returns:** String **Source:** Custom n8n functionality # Binary data Binary data is any file-type data, such as image files or documents. This page collects resources relating to binary data in n8n. ## Working with binary data in your workflows You can process binary data in n8n workflows. n8n provides nodes to help you work with binary data. You can also use code. ### Nodes There are three key nodes dedicated to handling binary data files: - [Convert to File](../../../integrations/builtin/core-nodes/n8n-nodes-base.converttofile/) to take input data and output it as a file. - [Extract From File](../../../integrations/builtin/core-nodes/n8n-nodes-base.extractfromfile/) to get data from a binary format and convert it to JSON. - [Read/Write Files from Disk](../../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/) to read and write files from/to the machine where n8n is running. 
There are separate nodes for working with XML and HTML data: - [HTML](../../../integrations/builtin/core-nodes/n8n-nodes-base.html/) - [XML](../../../integrations/builtin/core-nodes/n8n-nodes-base.xml/) And nodes for performing common tasks: - [Compression](../../../integrations/builtin/core-nodes/n8n-nodes-base.compression/) - [Edit Image](../../../integrations/builtin/core-nodes/n8n-nodes-base.editimage/) - [FTP](../../../integrations/builtin/core-nodes/n8n-nodes-base.ftp/) You can trigger a workflow based on changes to a local file using the [Local File trigger](../../../integrations/builtin/core-nodes/n8n-nodes-base.localfiletrigger/). To split or concatenate binary data items, use the [data transformation nodes](../../expressions/#other-data-transformation-nodes). ### Code You can use the [Code node](../../../code/code-node/) to manipulate binary data in your workflows. For example, [Get the binary data buffer](../../../code/cookbook/code-node/get-binary-data-buffer/): get the binary data available in your workflow. ## Configure binary data mode when self-hosting You can configure how your self-hosted n8n instance handles binary data using the [Binary data environment variables](../../../hosting/configuration/environment-variables/binary-data/). This includes tasks such as setting the storage path and choosing how to store binary data. Your configuration affects how well n8n scales: [Scaling | Binary data filesystem mode](../../../hosting/scaling/binary-data/). Reading and writing binary files can have security implications. If you want to disable reading and writing binary data, use the `NODES_EXCLUDE` environment variable. Refer to [Environment variables | Nodes](../../../hosting/configuration/environment-variables/nodes/) for more information. # Query JSON with JMESPath [JMESPath](https://jmespath.org/) is a query language for JSON that you can use to extract and transform elements from a JSON document. 
For full details of how to use JMESPath, refer to the [JMESPath documentation](https://jmespath.org/tutorial.html). ## The `jmespath()` method n8n provides a custom method, `jmespath()`. Use this method to perform a search on a JSON object using the JMESPath query language. The basic syntax is: ``` $jmespath(object, searchString) ``` ``` _jmespath(object, searchString) ``` To help understand what the method does, here is the equivalent longer JavaScript: ``` var jmespath = require('jmespath'); jmespath.search(object, searchString); ``` **Expressions must be single-line:** the longer code example doesn't work in expressions, as they must be single-line. `object` is a JSON object, such as the output of a previous node. `searchString` is an expression written in the JMESPath query language. The [JMESPath Specification](https://jmespath.org/specification.html#jmespath-specification) provides a list of supported expressions, while their [Tutorial](https://jmespath.org/tutorial.html) and [Examples](https://jmespath.org/examples.html) provide interactive examples. **Search parameter order:** the examples in the [JMESPath Specification](https://jmespath.org/specification.html#jmespath-specification) follow the pattern `search(searchString, object)`. The [JMESPath JavaScript library](https://github.com/jmespath/jmespath.js/), which n8n uses, supports `search(object, searchString)` instead. This means that when using examples from the JMESPath documentation, you may need to change the order of the search function parameters. ## Common tasks This section provides examples for some common operations. More examples, and detailed guidance, are available in [JMESPath's own documentation](https://jmespath.org/tutorial.html). When trying out these examples, you need to set the Code node **Mode** to **Run Once for Each Item**. 
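As a mental model for the projection examples that follow, a list projection such as `[*].first` behaves like a plain JavaScript `map` over an array. A minimal sketch outside n8n (the sample data mirrors the webhook payload used in the examples below):

```javascript
// What the JMESPath list projection "[*].first" does, expressed in plain
// JavaScript: map each element of the array to its `first` property.
const people = [
  { first: "James", last: "Green" },
  { first: "Jacob", last: "Jones" },
  { first: "Jayden", last: "Smith" }
];

const firstNames = people.map(person => person.first);
console.log(firstNames); // ["James", "Jacob", "Jayden"]
```

The JMESPath version becomes more useful than `map` once you combine projections with slicing and filtering in a single query string.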
### Apply a JMESPath expression to a collection of elements with projections From the [JMESPath projections documentation](https://jmespath.org/tutorial.html#projections): > Projections are one of the key features of JMESPath. Use it to apply an expression to a collection of elements. JMESPath supports five kinds of projections: > > - List Projections > - Slice Projections > - Object Projections > - Flatten Projections > - Filter Projections The following example shows basic usage of list, slice, and object projections. Refer to the [JMESPath projections documentation](https://jmespath.org/tutorial.html#projections) for detailed explanations of each projection type, and more examples. Given this JSON from a webhook node: ``` [ { "headers": { "host": "n8n.instance.address", ... }, "params": {}, "query": {}, "body": { "people": [ { "first": "James", "last": "Green" }, { "first": "Jacob", "last": "Jones" }, { "first": "Jayden", "last": "Smith" } ], "dogs": { "Fido": { "color": "brown", "age": 7 }, "Spot": { "color": "black and white", "age": 5 } } } } ] ``` Retrieve a [list](https://jmespath.org/tutorial.html#list-and-slice-projections) of all the people's first names: ``` {{$jmespath($json.body.people, "[*].first" )}} // Returns ["James", "Jacob", "Jayden"] ``` ``` let firstNames = $jmespath($json.body.people, "[*].first" ) return {firstNames}; /* Returns: [ { "firstNames": [ "James", "Jacob", "Jayden" ] } ] */ ``` ``` firstNames = _jmespath(_json.body.people, "[*].first" ) return {"firstNames":firstNames} """ Returns: [ { "firstNames": [ "James", "Jacob", "Jayden" ] } ] """ ``` Get a [slice](https://jmespath.org/tutorial.html#list-and-slice-projections) of the first names: ``` {{$jmespath($json.body.people, "[:2].first")}} // Returns ["James", "Jacob"] ``` ``` let firstTwoNames = $jmespath($json.body.people, "[:2].first"); return {firstTwoNames}; /* Returns: [ { "firstTwoNames": [ "James", "Jacob" ] } ] */ ``` ``` firstTwoNames = _jmespath(_json.body.people, 
"[:2].first" ) return {"firstTwoNames":firstTwoNames} """ Returns: [ { "firstTwoNames": [ "James", "Jacob" ] } ] """ ``` Get a list of the dogs' ages using [object projections](https://jmespath.org/tutorial.html#object-projections): ``` {{$jmespath($json.body.dogs, "*.age")}} // Returns [7,5] ``` ``` let dogsAges = $jmespath($json.body.dogs, "*.age"); return {dogsAges}; /* Returns: [ { "dogsAges": [ 7, 5 ] } ] */ ``` ``` dogsAges = _jmespath(_json.body.dogs, "*.age") return {"dogsAges": dogsAges} """ Returns: [ { "dogsAges": [ 7, 5 ] } ] """ ``` ### Select multiple elements and create a new list or object Use [Multiselect](https://jmespath.org/tutorial.html#multiselect) to select elements from a JSON object and combine them into a new list or object. Given this JSON from a webhook node: ``` [ { "headers": { "host": "n8n.instance.address", ... }, "params": {}, "query": {}, "body": { "people": [ { "first": "James", "last": "Green" }, { "first": "Jacob", "last": "Jones" }, { "first": "Jayden", "last": "Smith" } ], "dogs": { "Fido": { "color": "brown", "age": 7 }, "Spot": { "color": "black and white", "age": 5 } } } } ] ``` Use multiselect list to get the first and last names and create new lists containing both names: ``` {{$jmespath($json.body.people, "[].[first, last]")}} // Returns [["James","Green"],["Jacob","Jones"],["Jayden","Smith"]] ``` ``` let newList = $jmespath($json.body.people, "[].[first, last]"); return {newList}; /* Returns: [ { "newList": [ [ "James", "Green" ], [ "Jacob", "Jones" ], [ "Jayden", "Smith" ] ] } ] */ ``` ``` newList = _jmespath(_json.body.people, "[].[first, last]") return {"newList":newList} """ Returns: [ { "newList": [ [ "James", "Green" ], [ "Jacob", "Jones" ], [ "Jayden", "Smith" ] ] } ] """ ``` ### An alternative to arrow functions in expressions For example, generate some input data by returning the below code from the Code node: ``` return[ { "json": { "num_categories": "0", "num_products": "45", "category_id": 5529735, 
"parent_id": 1407340, "pos_enabled": 1, "pos_favorite": 0, "name": "HP", "description": "", "image": "" } }, { "json": { "num_categories": "0", "num_products": "86", "category_id": 5529740, "parent_id": 1407340, "pos_enabled": 1, "pos_favorite": 0, "name": "Lenovo", "description": "", "image": "" } } ] ``` You could do a search like "find the item with the name Lenovo and tell me its category ID." ``` {{ $jmespath($("Code").all(), "[?json.name=='Lenovo'].json.category_id") }} ``` # Date and time with Luxon [Luxon](https://github.com/moment/luxon/) is a JavaScript library that makes it easier to work with date and time. For full details of how to use Luxon, refer to [Luxon's documentation](https://moment.github.io/luxon/#/?id=luxon). n8n passes dates between nodes as strings, so you need to parse them. Luxon makes this easier. **Python support:** Luxon is a JavaScript library. The two convenience [variables](#get-the-current-datetime-or-date) created by n8n are available when using Python in the Code node, but their functionality is limited: - You can't perform Luxon operations on these variables. For example, there is no Python equivalent for `$today.minus(...)`. - The generic Luxon functionality, such as [Convert date string to Luxon](#convert-date-string-to-luxon), isn't available for Python users. ## Date and time behavior in n8n Be aware of the following: - In a workflow, n8n converts dates and times to strings between nodes. Keep this in mind when doing arithmetic on dates and times from other nodes. - Using Luxon's `DateTime()` is the recommended approach in n8n. Using vanilla JavaScript's `Date()` doesn't work with some n8n features. For example, it doesn't respect the [Workflow-specific Time Zone](https://docs.n8n.io/workflows/settings/#timezone). - With vanilla JavaScript, you can convert a string to a date with `new Date('2019-06-23')`. 
In Luxon, you must use a function explicitly stating the format, such as `DateTime.fromISO('2019-06-23')` or `DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")`. ## Setting the timezone in n8n Luxon uses the n8n timezone. This value is either: - Default: `America/New_York` - A custom timezone for your n8n instance, set using the `GENERIC_TIMEZONE` environment variable. - A custom timezone for an individual workflow, configured in workflow settings. ## Common tasks This section provides examples for some common operations. More examples, and detailed guidance, are available in [Luxon's own documentation](https://moment.github.io/luxon/#/?id=luxon). ### Get the current datetime or date Use the `$now` and `$today` Luxon objects to get the current time or day: - `$now`: a Luxon object containing the current timestamp. Equivalent to `DateTime.now()`. - `$today`: a Luxon object containing the current timestamp, rounded down to the day. Equivalent to `DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 })`. Note that these variables can return different time formats when cast as a string: ``` {{$now}} // n8n displays the ISO formatted timestamp // For example 2022-03-09T14:02:37.065+00:00 {{"Today's date is " + $now}} // n8n displays "Today's date is " // For example "Today's date is 1646834498755" ``` ``` $now // n8n displays // For example 2022-03-09T14:00:25.058+00:00 let rightNow = "Today's date is " + $now // n8n displays "Today's date is " // For example "Today's date is 1646834498755" ``` ``` _now # n8n displays # For example 2022-03-09T14:00:25.058+00:00 rightNow = "Today's date is " + str(_now) # n8n displays "Today's date is " # For example "Today's date is 1646834498755" ``` n8n provides built-in convenience functions to support data transformation in expressions for dates. Refer to [Expression reference](../../expression-reference/) for more information. 
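The differing string outputs above come down to JavaScript's primitive coercion: a Luxon DateTime provides `valueOf()` (epoch milliseconds) and `toString()` (an ISO 8601 string), and the `+` operator's default coercion prefers `valueOf()`. A minimal stand-in object illustrates the mechanism (the constant values are illustrative, not real output):

```javascript
// A stand-in mimicking the two coercion paths of a Luxon DateTime.
const fakeDateTime = {
  valueOf()  { return 1646834498755; },                    // epoch ms, like DateTime#valueOf
  toString() { return "2022-03-09T14:01:38.755+00:00"; }   // ISO 8601, like DateTime#toString
};

// '+' with a string uses the default coercion hint, which tries valueOf() first:
console.log("Today's date is " + fakeDateTime); // "Today's date is 1646834498755"

// Explicit string conversion uses toString():
console.log(String(fakeDateTime)); // "2022-03-09T14:01:38.755+00:00"
```

This is why concatenating `$now` into a string yields a millisecond timestamp, while displaying `$now` on its own shows the ISO form.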
### Convert JavaScript dates to Luxon To convert a native JavaScript date to a Luxon date: - In expressions, use the `.toDateTime()` method. For example, `{{ (new Date()).toDateTime() }}`. - In the Code node, use `DateTime.fromJSDate()`. For example, `let luxondate = DateTime.fromJSDate(new Date())`. ### Convert date string to Luxon You can convert date strings and other date formats to a Luxon DateTime object. You can convert from standard formats and from arbitrary strings. A difference between Luxon DateTime and JavaScript Date With vanilla JavaScript, you can convert a string to a date with `new Date('2019-06-23')`. In Luxon, you must use a function explicitly stating the format, such as `DateTime.fromISO('2019-06-23')` or `DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")`. #### If you have a date in a supported standard technical format: Most dates use `fromISO()`. This creates a Luxon DateTime from an ISO 8601 string. For example: ``` {{DateTime.fromISO('2019-06-23T00:00:00.00')}} ``` ``` let luxonDateTime = DateTime.fromISO('2019-06-23T00:00:00.00') ``` Luxon's API documentation has more information on [fromISO](https://moment.github.io/luxon/api-docs/index.html#datetimefromiso). Luxon provides functions to handle conversions for a range of formats. Refer to Luxon's guide to [Parsing technical formats](https://moment.github.io/luxon/#/parsing?id=parsing-technical-formats) for details. #### If you have a date as a string that doesn't use a standard format: Use Luxon's [Ad-hoc parsing](https://moment.github.io/luxon/#/parsing?id=ad-hoc-parsing). To do this, use the `fromFormat()` function, providing the string and a set of [tokens](https://moment.github.io/luxon/#/parsing?id=table-of-tokens) that describe the format. For example, you have n8n's founding date, 23rd June 2019, formatted as `23-06-2019`. 
You want to turn this into a Luxon object:

```
{{DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")}}
```

```
let newFormat = DateTime.fromFormat("23-06-2019", "dd-MM-yyyy")
```

When using ad-hoc parsing, note Luxon's warning about [Limitations](https://moment.github.io/luxon/#/parsing?id=limitations). If you see unexpected results, try their [Debugging](https://moment.github.io/luxon/#/parsing?id=debugging) guide.

### Get n days from today

Get a number of days before or after today.

For example, you want to set a field to always show the date seven days before the current date. In the expressions editor, enter:

```
{{$today.minus({days: 7})}}
```

On the 23rd June 2019, this returns `[Object: "2019-06-16T00:00:00.000+00:00"]`. This example uses n8n's custom variable `$today` for convenience. It's the equivalent of `DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 }).minus({days: 7})`.

For example, you want a variable containing the date seven days before the current date. In the code editor, enter:

```
let sevenDaysAgo = $today.minus({days: 7})
```

On the 23rd June 2019, this returns `[Object: "2019-06-16T00:00:00.000+00:00"]`. This example uses n8n's custom variable `$today` for convenience. It's the equivalent of `DateTime.now().set({ hour: 0, minute: 0, second: 0, millisecond: 0 }).minus({days: 7})`.

For more detailed information and examples, refer to:

- Luxon's [guide to math](https://moment.github.io/luxon/#/math)
- Their API documentation on [DateTime plus](https://moment.github.io/luxon/api-docs/index.html#datetimeplus) and [DateTime minus](https://moment.github.io/luxon/api-docs/index.html#datetimeminus)

### Create human-readable dates

In [Get n days from today](#get-n-days-from-today), the example gets the date seven days before the current date, and returns it as `[Object: "yyyy-mm-ddT00:00:00.000+00:00"]` (for expressions) or `yyyy-mm-ddT00:00:00.000+00:00` (in the Code node).
To make this more readable, you can use Luxon's formatting functions. For example, you want the field containing the date to be formatted as DD/MM/YYYY, so that on the 23rd June 2019, it returns `23/06/2019`. This expression gets the date seven days before today, and converts it to the DD/MM/YYYY format:

```
{{$today.minus({days: 7}).toLocaleString()}}
```

```
let readableSevenDaysAgo = $today.minus({days: 7}).toLocaleString()
```

You can alter the format. For example:

```
{{$today.minus({days: 7}).toLocaleString({month: 'long', day: 'numeric', year: 'numeric'})}}
```

On 23rd June 2019, this returns "16 June 2019".

```
let readableSevenDaysAgo = $today.minus({days: 7}).toLocaleString({month: 'long', day: 'numeric', year: 'numeric'})
```

On 23rd June 2019, this returns "16 June 2019".

Refer to Luxon's guide on [toLocaleString (strings for humans)](https://moment.github.io/luxon/#/formatting?id=tolocalestring-strings-for-humans) for more information.

### Get the time between two dates

To get the time between two dates, use Luxon's diffs feature. This subtracts one date from another and returns a duration. For example, get the number of months between two dates:

```
{{DateTime.fromISO('2019-06-23').diff(DateTime.fromISO('2019-05-23'), 'months').toObject()}}
```

This returns `[Object: {"months":1}]`.

```
let monthsBetweenDates = DateTime.fromISO('2019-06-23').diff(DateTime.fromISO('2019-05-23'), 'months').toObject()
```

This returns `{"months":1}`.

Refer to Luxon's [Diffs](https://moment.github.io/luxon/#/math?id=diffs) for more information.

### A longer example: How many days to Christmas?

This example brings together several Luxon features, uses JMESPath, and does some basic string manipulation. The scenario: you want a countdown to 25th December. Every day, it should tell you the number of days remaining to Christmas. You don't want to update it for next year - it needs to seamlessly work for every year.
```
{{"There are " + $today.diff(DateTime.fromISO($today.year + '-12-25'), 'days').toObject().days.toString().substring(1) + " days to Christmas!"}}
```

This outputs the countdown string. For example, on 9th March, it outputs "There are 291 days to Christmas!".

A detailed explanation of what the expression does:

- `{{`: indicates the start of the expression.
- `"There are "`: a string.
- `+`: used to join two strings.
- `$today.diff()`: This is similar to the example in [Get the time between two dates](#get-the-time-between-two-dates), but it uses n8n's custom `$today` variable.
- `DateTime.fromISO($today.year + '-12-25'), 'days'`: this part gets the current year using `$today.year`, turns it into an ISO string along with the month and date, and then takes the whole ISO string and converts it to a Luxon DateTime data structure. It also tells Luxon that you want the duration in days.
- `toObject()`: turns the result of `diff()` into a more usable object. At this point, the expression returns the number of days as a negative value in an object. For example, on 9th March, `[Object: {"days":-291}]`.
- `.days`: uses JMESPath syntax to retrieve just the number of days from the object. For more information on using JMESPath with n8n, refer to our [JMESPath](../jmespath/) documentation. This gives you the number of days to Christmas, as a negative number.
- `.toString().substring(1)`: turns the number into a string and removes the `-`.
- `+ " days to Christmas!"`: another string, with a `+` to join it to the previous string.
- `}}`: indicates the end of the expression.

```
let daysToChristmas = "There are " + $today.diff(DateTime.fromISO($today.year + '-12-25'), 'days').toObject().days.toString().substring(1) + " days to Christmas!";
```

This outputs the countdown string. For example, on 9th March, it outputs "There are 291 days to Christmas!".

A detailed explanation of what the code does:

- `"There are "`: a string.
- `+`: used to join two strings.
- `$today.diff()`: This is similar to the example in [Get the time between two dates](#get-the-time-between-two-dates), but it uses n8n's custom `$today` variable.
- `DateTime.fromISO($today.year + '-12-25'), 'days'`: this part gets the current year using `$today.year`, turns it into an ISO string along with the month and date, and then takes the whole ISO string and converts it to a Luxon DateTime data structure. It also tells Luxon that you want the duration in days.
- `toObject()`: turns the result of `diff()` into a more usable object. At this point, the code returns the number of days as a negative value in an object. For example, on 9th March, `[Object: {"days":-291}]`.
- `.days`: uses JMESPath syntax to retrieve just the number of days from the object. For more information on using JMESPath with n8n, refer to our [JMESPath](../jmespath/) documentation. This gives you the number of days to Christmas, as a negative number.
- `.toString().substring(1)`: turns the number into a string and removes the `-`.
- `+ " days to Christmas!"`: another string, with a `+` to join it to the previous string.

# Flow logic

n8n allows you to represent complex logic in your workflows. This section covers:

- [Splitting with conditionals](/flow-logic/splitting/)
- [Merging data](/flow-logic/merging/)
- [Looping](/flow-logic/looping/)
- [Waiting](/flow-logic/waiting/)
- [Sub-workflows](/flow-logic/subworkflows/)
- [Error handling](/flow-logic/error-handling/)
- [Execution order in multi-branch workflows](/flow-logic/execution-order/)

## Related sections

You need some understanding of [Data](../data/) in n8n, including [Data structure](../data/data-structure/) and [Data flow within nodes](../data/data-structure/#how-data-flows-within-nodes). When building your logic, you'll use n8n's [Core nodes](../integrations/builtin/core-nodes/), including:

- Splitting: [IF](../integrations/builtin/core-nodes/n8n-nodes-base.if/) and [Switch](../integrations/builtin/core-nodes/n8n-nodes-base.switch/).
- Merging: [Merge](../integrations/builtin/core-nodes/n8n-nodes-base.merge/), [Compare Datasets](../integrations/builtin/core-nodes/n8n-nodes-base.comparedatasets/), and [Code](../integrations/builtin/core-nodes/n8n-nodes-base.code/).
- Looping: [IF](../integrations/builtin/core-nodes/n8n-nodes-base.if/) and [Loop Over Items](../integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/).
- Waiting: [Wait](../integrations/builtin/core-nodes/n8n-nodes-base.wait/).
- Creating sub-workflows: [Execute Workflow](../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/) and [Execute Workflow Trigger](../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflowtrigger/).
- Error handling: [Stop And Error](../integrations/builtin/core-nodes/n8n-nodes-base.stopanderror/) and [Error Trigger](../integrations/builtin/core-nodes/n8n-nodes-base.errortrigger/).

# Error handling

When designing your flow logic, it's good practice to consider potential errors and set up methods to handle them gracefully. With an error workflow, you can control how n8n responds to a workflow execution failure.

**Investigating errors**: to investigate failed executions, you can:

- Review your [Executions](../../workflows/executions/), for a [single workflow](../../workflows/executions/single-workflow-executions/) or [all workflows you have access to](../../workflows/executions/all-executions/). You can [load data from previous executions](../../workflows/executions/debug/) into your current workflow.
- Enable [Log streaming](../../log-streaming/).

## Create and set an error workflow

For each workflow, you can set an error workflow in **Workflow Settings**. It runs if an execution fails. This means you can, for example, send email or Slack alerts when a workflow execution errors. The error workflow must start with the [Error Trigger](../../integrations/builtin/core-nodes/n8n-nodes-base.errortrigger/). You can use the same error workflow for multiple workflows.

1.
Create a new workflow, with the Error Trigger as the first node.
1. Give the workflow a name, for example `Error Handler`.
1. Select **Save**.
1. In the workflow where you want to use this error workflow:
    1. Select **Options** > **Settings**.
    1. In **Error workflow**, select the workflow you just created. For example, if you used the name Error Handler, select **Error Handler**.
    1. Select **Save**.

Now, when this workflow errors, the related error workflow runs.

## Error data

The default error data received by the Error Trigger is:

```
[
  {
    "execution": {
      "id": "231",
      "url": "https://n8n.example.com/execution/231",
      "retryOf": "34",
      "error": {
        "message": "Example Error Message",
        "stack": "Stacktrace"
      },
      "lastNodeExecuted": "Node With Error",
      "mode": "manual"
    },
    "workflow": {
      "id": "1",
      "name": "Example Workflow"
    }
  }
]
```

All information is always present, except:

- `execution.id`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute.
- `execution.url`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute.
- `execution.retryOf`: only present when the execution is a retry of a failed execution.

If the error is caused by the trigger node of the main workflow, rather than a later stage, the data sent to the error workflow is different. There's less information in `execution{}` and more in `trigger{}`:

```
{
  "trigger": {
    "error": {
      "context": {},
      "name": "WorkflowActivationError",
      "cause": {
        "message": "",
        "stack": ""
      },
      "timestamp": 1654609328787,
      "message": "",
      "node": { . . . }
    },
    "mode": "trigger"
  },
  "workflow": {
    "id": "",
    "name": ""
  }
}
```

## Cause a workflow execution failure using Stop And Error

When you create and set an error workflow, n8n runs it when an execution fails.
Usually, this is due to things like errors in node settings, or the workflow running out of memory. You can add the [Stop And Error](../../integrations/builtin/core-nodes/n8n-nodes-base.stopanderror/) node to your workflow to force executions to fail under your chosen circumstances, and trigger the error workflow.

# Execution order in multi-branch workflows

n8n's node execution order depends on the version of n8n you're using:

- For workflows created before version 1.0: n8n executes the first node of each branch, then the second node of each branch, and so on.
- For workflows created in version 1.0 and above: n8n executes each branch in turn, completing one branch before starting another. n8n orders the branches based on their position on the [canvas](../../glossary/#canvas-n8n), from topmost to bottommost. If two branches are at the same height, the leftmost branch executes first.

You can change the execution order in your [workflow settings](../../workflows/settings/).

# Looping in n8n

Looping is useful when you want to process multiple items or perform an action repeatedly, such as sending a message to every contact in your address book. n8n handles this repetitive processing automatically, meaning you don't need to specifically build loops into your workflows. There are [some nodes](#node-exceptions) where this isn't true.

## Using loops in n8n

n8n nodes take any number of items as input, process these items, and output the results. You can think of each item as a single data point, or a single row in the output table of a node. Nodes usually run once for each item. For example, if you wanted to send the name and notes of the customers in the Customer Datastore node as a message on Slack, you would:

1. Connect the Slack node to the Customer Datastore node.
1. Configure the parameters.
1. Execute the node.

You would receive five messages: one for each item. This is how you can process multiple items without having to explicitly connect nodes in a loop.
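The per-item model above can be sketched in plain JavaScript. This is an illustration of the concept only, not n8n's internals; `runNodeForEachItem` and the sample customer data are made up:

```javascript
// Conceptual sketch (not n8n source): a node behaves like a function that
// n8n calls once per incoming item, producing one output item per input item.
function runNodeForEachItem(items, nodeLogic) {
  return items.map((item) => ({ json: nodeLogic(item.json) }));
}

// Five input items, as a Customer Datastore-style node might output (made-up data).
const customers = [
  { json: { name: "Ada", notes: "Prefers email" } },
  { json: { name: "Bea", notes: "Call after 5pm" } },
  { json: { name: "Cy", notes: "VIP" } },
  { json: { name: "Dee", notes: "New customer" } },
  { json: { name: "Ed", notes: "Renewal due" } },
];

// A "Slack node" that builds one message per customer. No explicit loop needed:
// the iteration over items happens in the runner, not in the node's logic.
const messages = runNodeForEachItem(customers, (c) => ({
  text: `${c.name}: ${c.notes}`,
}));

console.log(messages.length); // 5 — one message per input item
```

This is why connecting the Slack node to the Customer Datastore node is enough: the iteration is implicit in how nodes consume items.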
### Executing nodes once

For situations where you don't want a node to process all received items, for example sending a Slack message only to the first customer, you can toggle the **Execute Once** parameter in the **Settings** tab of that node. This setting is helpful when the incoming data contains multiple items and you want to process only the first one.

## Creating loops

n8n typically handles the iteration over all incoming items. However, there are certain scenarios where you will have to create a loop to iterate through all items. Refer to [Node exceptions](#node-exceptions) for a list of nodes that don't automatically iterate over all incoming items.

### Loop until a condition is met

To create a loop in an n8n workflow, connect the output of one node to the input of a previous node. Add an [IF](../../integrations/builtin/core-nodes/n8n-nodes-base.if/) node to check when to stop the loop. Here is an [example workflow](https://n8n.io/workflows/1130) that implements a loop with an `IF` node.

### Loop until all items are processed

Use the [Loop Over Items](../../integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/) node when you want to loop until all items are processed. To process each item individually, set **Batch Size** to `1`. You can also batch the data in groups and process these batches. This approach is useful for avoiding API rate limits when processing large incoming data, or when you want to process a specific group of returned items. The Loop Over Items node stops executing after all the incoming items have been divided into batches and passed on to the next node in the workflow, so it's not necessary to add an IF node to stop the loop.

## Node exceptions

Nodes and operations where you need to design a loop into your workflow:

- [CrateDB](../../integrations/builtin/app-nodes/n8n-nodes-base.cratedb/) executes once for `insert` and `update`.
- [Code](../../integrations/builtin/core-nodes/n8n-nodes-base.code/) node in **Run Once for All Items** mode: processes all the items based on the entered code snippet.
- [Execute Workflow](../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/) node in **Run Once for All Items** mode.
- [HTTP Request](../../integrations/builtin/core-nodes/n8n-nodes-base.httprequest/): you must handle pagination yourself. If your API call returns paginated results, you must create a loop to fetch one page at a time.
- [Microsoft SQL](../../integrations/builtin/app-nodes/n8n-nodes-base.microsoftsql/) executes once for `insert`, `update`, and `delete`.
- [MongoDB](../../integrations/builtin/app-nodes/n8n-nodes-base.mongodb/) executes once for `insert` and `update`.
- [QuestDB](../../integrations/builtin/app-nodes/n8n-nodes-base.questdb/) executes once for `insert`.
- [Redis](../../integrations/builtin/app-nodes/n8n-nodes-base.redis/):
    - Info: this operation executes only once, regardless of the number of items in the incoming data.
- [RSS Read](../../integrations/builtin/core-nodes/n8n-nodes-base.rssfeedread/) executes once for the requested URL.
- [TimescaleDB](../../integrations/builtin/app-nodes/n8n-nodes-base.timescaledb/) executes once for `insert` and `update`.

# Merging data

Merging brings multiple data streams together. You can achieve this using different nodes depending on your workflow requirements.

- Merge data from different data streams or nodes: Use the [Merge](../../integrations/builtin/core-nodes/n8n-nodes-base.merge/) node to combine data from various sources into one.
- Merge data from multiple node executions: Use the [Code](../../integrations/builtin/core-nodes/n8n-nodes-base.code/) node for complex scenarios where you need to merge data from multiple executions of a node or multiple nodes.
- Compare and merge data: Use the [Compare Datasets](../../integrations/builtin/core-nodes/n8n-nodes-base.comparedatasets/) node to compare, merge, and output data streams based on the comparison.

Explore each method in more detail in the sections below.

## Merge data from different data streams

If your workflow [splits](../splitting/), you combine the separate streams back into one stream. Here's an [example workflow](https://n8n.io/workflows/1747-joining-different-datasets/) showing different types of merging: appending data sets, keeping only new items, and keeping only existing items. The [Merge node](../../integrations/builtin/core-nodes/n8n-nodes-base.merge/) documentation contains details on each of the merge operations.

[View template details](https://n8n.io/workflows/1747-joining-different-datasets/)

## Merge data from different nodes

You can use the Merge node to combine data from two previous nodes, even if the workflow hasn't split into separate data streams. This can be useful if you want to generate a single dataset from the data generated by multiple nodes.

*Merging data from two previous nodes*

## Merge data from multiple node executions

Use the Code node to merge data from multiple node executions. This is useful in some [Looping](../looping/) scenarios.

**Node executions and workflow executions**: this section describes merging data from multiple node executions. This is when a node executes multiple times during a single workflow execution.

Refer to this [example workflow](https://n8n.io/workflows/1814-merge-multiple-runs-into-one/) using Loop Over Items and Wait to artificially create multiple executions.

[View template details](https://n8n.io/workflows/1814-merge-multiple-runs-into-one/)

## Compare, merge, and split again

The [Compare Datasets](../../integrations/builtin/core-nodes/n8n-nodes-base.comparedatasets/) node compares data streams before merging them. It outputs up to four different data streams.
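As a rough illustration of a four-way split, here is a sketch of the idea in plain JavaScript. This is not the node's implementation; `compareDatasets`, the stream names, and the comparison strategy are all assumptions made for the example:

```javascript
// Illustrative sketch (not n8n's code) of a Compare Datasets-style split:
// items from two inputs are matched by a key field and routed to one of
// four output streams.
function compareDatasets(inputA, inputB, key) {
  const bByKey = new Map(inputB.map((item) => [item[key], item]));
  const inAOnly = [], same = [], different = [], inBOnly = [];
  for (const item of inputA) {
    if (!bByKey.has(item[key])) {
      inAOnly.push(item); // no match in B
      continue;
    }
    const match = bByKey.get(item[key]);
    bByKey.delete(item[key]);
    // Shallow comparison via JSON; assumes matching property order.
    if (JSON.stringify(item) === JSON.stringify(match)) {
      same.push(item);
    } else {
      different.push({ a: item, b: match });
    }
  }
  inBOnly.push(...bByKey.values()); // whatever B items were never matched
  return { inAOnly, same, different, inBOnly };
}

const result = compareDatasets(
  [{ id: 1, name: "Ada" }, { id: 2, name: "Bea" }],
  [{ id: 2, name: "Bee" }, { id: 3, name: "Cy" }],
  "id"
);
console.log(result.inAOnly.length, result.same.length, result.different.length, result.inBOnly.length);
// 1 0 1 1
```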
Refer to this [example workflow](https://n8n.io/workflows/1943-comparing-data-with-the-compare-datasets-node/) for an example.

[View template details](https://n8n.io/workflows/1943-comparing-data-with-the-compare-datasets-node/)

# Splitting workflows with conditional nodes

Splitting uses the [IF](../../integrations/builtin/core-nodes/n8n-nodes-base.if/) or [Switch](../../integrations/builtin/core-nodes/n8n-nodes-base.switch/) nodes. It turns a single-branch workflow into a multi-branch workflow. This is a key piece of representing complex logic in n8n.

Compare these workflows:

This is the power of splitting and conditional nodes in n8n. Refer to the [IF](../../integrations/builtin/core-nodes/n8n-nodes-base.if/) or [Switch](../../integrations/builtin/core-nodes/n8n-nodes-base.switch/) documentation for usage details.

# Sub-workflows

You can call one workflow from another workflow. This allows you to build modular, microservice-like workflows. It can also help if your workflow grows large enough to encounter [memory issues](../../hosting/scaling/memory-errors/). Creating sub-workflows uses the [Execute Workflow](../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/) and [Execute Sub-workflow Trigger](../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflowtrigger/) nodes. Sub-workflow executions don't count towards your plan's monthly execution or active workflow limits.

## Set up and use a sub-workflow

This section walks through setting up both the parent workflow and sub-workflow.

### Create the sub-workflow

1. Create a new workflow.

    **Create sub-workflows from existing workflows**: you can optionally create a sub-workflow directly from an existing parent workflow using the [Execute Sub-workflow](../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/) node. In the node, select the **Database** and **From list** options and select **Create a sub-workflow** in the list.
You can also extract selected nodes directly using [Sub-workflow conversion](../../workflows/subworkflow-conversion/) in the context menu.

1. **Optional**: configure which workflows can call the sub-workflow:
    1. Select the **Options** menu > **Settings**. n8n opens the **Workflow settings** modal.
    1. Change the **This workflow can be called by** setting. Refer to [Workflow settings](../../workflows/settings/) for more information on configuring your workflows.
1. Add the **Execute Sub-workflow** trigger node (if you are searching under trigger nodes, this is also titled **When Executed by Another Workflow**).
1. Set the **Input data mode** to choose how you will define the sub-workflow's input data:
    - **Define using fields below**: Choose this mode to define individual input names and data types that the calling workflow needs to provide. The [Execute Sub-workflow node](../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/) or [Call n8n Workflow Tool node](../../integrations/builtin/cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) in the calling workflow will automatically pull in the fields defined here.
    - **Define using JSON example**: Choose this mode to provide an example JSON object that demonstrates the expected input items and their types.
    - **Accept all data**: Choose this mode to accept all data unconditionally. The sub-workflow won't define any required input items. This sub-workflow must handle any input inconsistencies or missing values.
1. Add other nodes as needed to build your sub-workflow functionality.
1. Save the sub-workflow.

**Sub-workflow mustn't contain errors**: if there are errors in the sub-workflow, the parent workflow can't trigger it.

**Load data into sub-workflow before building**: this requires the ability to [load data from previous executions](../../workflows/executions/debug/), which is available on n8n Cloud and registered Community plans. If you want to load data into your sub-workflow to use while building it:

1.
Create the sub-workflow and add the **Execute Sub-workflow Trigger**.
1. Set the node's **Input data mode** to **Accept all data**, or define the input items using fields or JSON if they're already known.
1. In the sub-workflow [settings](../../workflows/settings/), set **Save successful production executions** to **Save**.
1. Skip ahead to setting up the parent workflow, and run it.
1. Follow the steps to [load data from previous executions](../../workflows/executions/debug/).
1. Adjust the **Input data mode** to match the input sent by the parent workflow if necessary.

You can now pin example data in the trigger node, enabling you to work with real data while configuring the rest of the workflow.

### Call the sub-workflow

1. Open the workflow where you want to call the sub-workflow.
1. Add the **Execute Sub-workflow** node.
1. In the **Execute Sub-workflow** node, set the sub-workflow you want to call. You can choose to call the workflow by ID, load a workflow from a local file, add workflow JSON as a parameter in the node, or target a workflow by URL.

    **Find your workflow ID**: your sub-workflow's ID is the alphanumeric string at the end of its URL.

1. Fill in the required input items defined by the sub-workflow.
1. Save your workflow.

When your workflow executes, it will send data to the sub-workflow and run it. You can follow the execution flow from the parent workflow to the sub-workflow by opening the Execute Sub-workflow node and selecting the **View sub-execution** link. Likewise, the sub-workflow's execution contains a link back to the parent workflow's execution, so you can navigate in the other direction.

## How data passes between workflows

As an example, imagine you have an Execute Sub-workflow node in **Workflow A**. The Execute Sub-workflow node calls another workflow called **Workflow B**:

1. The Execute Sub-workflow node passes the data to the Execute Sub-workflow Trigger node (titled **When Executed by Another Workflow** on the canvas) of **Workflow B**.
1.
The last node of **Workflow B** sends the data back to the Execute Sub-workflow node in **Workflow A**.

## Sub-workflow conversion

See [Sub-workflow conversion](../../workflows/subworkflow-conversion/) for how to divide your existing workflows into sub-workflows.

# Waiting

Waiting allows you to pause a workflow mid-execution, then resume where the workflow left off, with the same data. This is useful if you need to rate limit your calls to a service, or wait for an external event to complete. You can wait for a specified duration, or until a webhook fires. Making a workflow wait uses the [Wait](../../integrations/builtin/core-nodes/n8n-nodes-base.wait/) node. Refer to the node documentation for usage details. n8n provides a workflow template with a basic example of [Rate limiting and waiting for external events](https://n8n.io/workflows/1749-rate-limiting-and-waiting-for-external-events/).

# How can you contribute?

There are several ways in which you can contribute to n8n, depending on your skills and interests. Each form of contribution is valuable to us!

## Share some love: Review us

- Star n8n on [GitHub](https://github.com/n8n-io/n8n) and [Docker Hub](https://hub.docker.com/r/n8nio/n8n).
- Follow us on [Twitter](https://twitter.com/n8n_io), [LinkedIn](https://www.linkedin.com/company/28491094), and [Facebook](https://www.facebook.com/n8nio/).
- Upvote n8n on [AlternativeTo](https://alternativeto.net/software/n8n-io/) and [Alternative.me](https://alternative.me/n8n-io).
- Add n8n to your stack on [Stackshare](https://stackshare.io/n8n).
- Write a review about n8n on [G2](https://www.g2.com/products/n8n/reviews), [Slant](https://www.slant.co/improve/options/37977/~n8n-review), and [Capterra](https://www.capterra.com/p/198028/n8n-io/).

## Help out the community

You can participate in the [forum](https://community.n8n.io/) and help community members out with their questions. When sharing workflows in the community forum for debugging, use code blocks.
Use triple backticks ```` ``` ```` to wrap the workflow JSON in a code block. The following video demonstrates the steps of sharing workflows on the community forum.

## Contribute a workflow template

You can submit your workflows to n8n's template library. n8n is working on a creator program, and developing a marketplace of templates. This is an ongoing project, and details are likely to change. Refer to [n8n Creator hub](https://www.notion.so/n8n/n8n-Creator-hub-7bd2cbe0fce0449198ecb23ff4a2f76f) for information on how to submit templates and become a creator.

## Build a node

Create an integration for a third-party service. Check out [the node creation docs](../../integrations/creating-nodes/overview/) for guidance on how to create and publish a community node.

## Contribute to the code

There are different ways in which you can contribute to the n8n code base:

- Fix [issues](https://github.com/n8n-io/n8n/issues) reported on GitHub. The [CONTRIBUTING guide](https://github.com/n8n-io/n8n/blob/master/CONTRIBUTING.md) will help you get your development environment ready in minutes.
- Add additional functionality to an existing third-party integration.
- Add a new feature to n8n.

## Contribute to the docs

You can contribute to the n8n documentation, for example by documenting nodes or fixing issues. The repository for the docs is [here](https://github.com/n8n-io/n8n-docs) and the guidelines for contributing to the docs are [here](https://github.com/n8n-io/n8n-docs/blob/master/CONTRIBUTING.md).

## Contribute to community tutorials

Share your own video or written guides on our [community-driven, searchable library of n8n tutorials and training materials](https://community.n8n.io/t/how-to-share-your-tutorials/48398). Tag them for easy discovery, and post in your language's subcategory. Follow the contribution guidelines to help keep our growing library high-quality and accessible to everyone.
## Refer a candidate

Do you know someone who would be a great fit for one of our [open positions](https://n8n.io/careers)? Refer them to us! In return, we'll pay you €1,000 when the referral successfully passes their probationary period. Here's how this works:

1. **Search**: Have a look at the description and requirements of each role, and consider if someone you know would be a great fit.
1. **Referral**: Once you've identified a potential candidate, send an email to [Jobs at n8n](mailto:jobs@n8n.io) with the subject line *Employee referral - [job title]* and a short description of the person you're referring (and the reason why). Also, tell your referral to apply for the job through our [careers page](https://n8n.io/careers).
1. **Evaluation**: We'll screen the application and inform you about the next steps of the hiring process.
1. **Reward**: As soon as your referral has successfully finished the probationary period, we'll reward you for your efforts by transferring the €1,000 to your bank account.

# Get help with n8n

## Where to get help

n8n provides different support options depending on your plan and the nature of your problem.

### n8n community forum

n8n provides free community support for all n8n users through the [forum](https://community.n8n.io/). This is the best source for answers of all kinds, as both the n8n support team and community members can help.

### Email support

n8n offers email support through [help@n8n.io](mailto:help@n8n.io) for the following plans:

- [Enterprise plans](https://n8n.io/enterprise/) can use email support with an SLA for technical, account, billing, and other inquiries.
- Other [Cloud plans](https://n8n.io/pricing/) can use email support for admin and billing issues. For technical support, refer to the forum.
## What to include in your message

When posting to the forum or emailing customer support, you'll get help faster if you provide details in your first message about your n8n instance and the issue you're experiencing.

### Your n8n instance details

To collect basic information about your n8n instance:

1. Open the left-side panel.
1. Select **Help**.
1. Select **About n8n**.
1. The **About n8n** modal opens to display your current information.
1. Select **Copy debug information** to copy your information.
1. Include this information in your forum post or support email.

### Details about your problem

To help resolve your issues more efficiently, here are some things you can include to provide more context:

- **Screenshots or video recordings**: A quick Loom or screen recording that shows what's happening.
- **Relevant documentation**: If you've followed any guides or documentation, include links to them in your message.
- **n8n Cloud workspace (if possible)**: If contacting support, provide the workspace URL for your n8n Cloud instance. It looks something like `https://xxxxx.app.n8n.cloud`.
- **Steps to reproduce the issue**: A simple step-by-step outline of what you did before encountering the issue.
- **Workflow or configuration files**: Sharing relevant workflows or configuration files can be a huge help.

It may also be helpful to include a HAR (HTTP Archive) file in your message. You can learn how to generate a HAR file in your browser, and how to redact sensitive details before posting, using the [HAR Analyzer](https://toolbox.googleapps.com/apps/har_analyzer/).

# Self-hosting n8n

This section provides guidance on setting up self-hosted n8n. All self-hosted installations use the same core product. Without a license key, n8n runs as the free Community edition. Adding a Business or Enterprise license key enables those editions. See [Community edition features](community-edition-features/) for a list of available features.
- **Installation and server setups** Install n8n on any platform using npm or Docker. Or follow our guides to popular hosting platforms. [Docker installation guide](installation/docker/) - **Configuration** Learn how to configure n8n with environment variables. [Environment Variables](configuration/environment-variables/) - **Users and authentication** Choose and set up user authentication for your n8n instance. [Authentication](configuration/user-management-self-hosted/) - **Scaling** Manage data, modes, and processes to keep n8n running smoothly at scale. [Scaling](scaling/queue-mode/) - **Securing n8n** Secure your n8n instance by setting up SSL, SSO, or 2FA, and by blocking or opting out of some data collection and features. [Securing n8n guide](securing/overview/) - **Starter kits** New to n8n or AI? Try our Self-hosted AI Starter Kit. Curated by n8n, it combines the self-hosted n8n platform with compatible AI products and components to get you started building self-hosted AI workflows. [Starter kits](starter-kits/ai-starter-kit/) Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). # Server CLI commands The **Server CLI** is a built-in command-line interface that runs on the same machine as your n8n installation. It provides direct database access for administrative tasks and can execute most commands even when n8n isn't running. n8n CLI Looking to interact with n8n programmatically from a remote machine or integrate with AI agents? Check out the [n8n CLI](../../api/n8n-cli/). 
## When to use Server CLI vs n8n CLI | Feature | Server CLI | n8n CLI | | ------------------------ | ------------------------------------------------------------- | -------------------------------------------------------- | | **Where it runs** | Same machine as n8n | Any machine with network access | | **Authentication** | Direct database access | API key | | **Requires running n8n** | No (most commands) | Yes | | **Best for** | Instance operators, backups, migrations | Programmers, AI agents, remote management | | **Security model** | Bypasses access controls | Respects user permissions and API key scope | | **Use case examples** | Backup/restore, license management, emergency password resets | Workflow automation, credentials management through code | ## Running CLI commands You can use CLI commands with self-hosted n8n. Depending on how you choose to install n8n, there are differences in how to run the commands: - npm: the `n8n` command is directly available. The documentation uses this in the examples below. - Docker: the `n8n` command is available within your Docker container: ``` docker exec -u node -it <container_name> n8n <command> ``` ## Start a workflow You can start workflows directly using the CLI. Execute a saved workflow by its ID: ``` n8n execute --id <ID> ``` ## Publish or unpublish a workflow You can publish or unpublish a workflow using the CLI. In n8n 2.0, the [previous active/inactive toggle](../../2-0-breaking-changes/) was replaced by a publish/unpublish model. Use `publish:workflow` and `unpublish:workflow` to change a workflow's published state from the CLI. Restart required These commands operate on your n8n database. If you execute them while n8n is running, the changes don't take effect until you restart n8n. ### Publish a workflow Use `publish:workflow` to publish a workflow by its ID. You can optionally publish a specific historical version by passing its `versionId`. 
Command flags: | Flag | Description | | ----------- | --------------------------------------------------------------------------- | | --help | Help prompt. | | --id | The ID of the workflow to publish. Required. | | --versionId | Optional version ID to publish. If omitted, the current draft is published. | No `--all` flag Unlike the deprecated `update:workflow` command, `publish:workflow` doesn't support `--all`. This is intentional: it prevents accidental bulk publishing of workflows in production environments. Publish workflows individually by ID. Publish the current draft of a workflow by ID: ``` n8n publish:workflow --id=<ID> ``` Publish a specific historical version of a workflow: ``` n8n publish:workflow --id=<ID> --versionId=<versionId> ``` ### Unpublish a workflow Use `unpublish:workflow` to unpublish a workflow by its ID, or all workflows at once. Command flags: | Flag | Description | | ------ | ---------------------------------------------------------------- | | --help | Help prompt. | | --id | The ID of the workflow to unpublish. Can't be used with `--all`. | | --all | Unpublish all workflows. Can't be used with `--id`. | Unpublish a workflow by its ID: ``` n8n unpublish:workflow --id=<ID> ``` Unpublish all workflows: ``` n8n unpublish:workflow --all ``` ### update:workflow (deprecated) Deprecated in n8n 2.0 The `update:workflow` command is deprecated and will be removed. Use [`publish:workflow`](#publish-a-workflow) and [`unpublish:workflow`](#unpublish-a-workflow) instead. See the [n8n v2.0 breaking changes](../../2-0-breaking-changes/) for details. 
Set the active status of a workflow by its ID to false: ``` n8n update:workflow --id=<ID> --active=false ``` Set the active status of a workflow by its ID to true: ``` n8n update:workflow --id=<ID> --active=true ``` Set the active status to false for all workflows: ``` n8n update:workflow --all --active=false ``` Set the active status to true for all workflows: ``` n8n update:workflow --all --active=true ``` ## Export entities You can export your database entities from n8n using the CLI. This tooling allows you to export all entity types from one database type, such as SQLite, and import them into another database type, such as Postgres. Command flags: | Flag | Description | | ----------------------------------- | ---------------------------------------------------------------------------------------------- | | --help | Help prompt. | | --outputDir | Output directory path. | | --includeExecutionHistoryDataTables | Include execution history data tables. These are excluded by default because they can be very large. | ``` n8n export:entities --outputDir=./outputs --includeExecutionHistoryDataTables=true ``` ## Export workflows and credentials You can export your workflows and credentials from n8n using the CLI. Command flags: | Flag | Description | | ------------ | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | --help | Help prompt. | | --all | Exports all workflows/credentials. | | --backup | Sets --all --pretty --separate for backups. You can optionally set --output. | | --id | The ID of the workflow to export. | | --output, -o | Outputs file name or directory if using separate files. | | --pretty | Formats the output in an easier to read fashion. | | --separate | Exports one file per workflow (useful for versioning). Must set a directory using --output. | | --decrypted | Exports the credentials in a plain text format. 
(Credentials only.) | | --version | The version ID of a specific historical version to export. (Workflows only, can't be used with `--all` or `--published`.) | | --published | Exports the published/active version of the workflow instead of the current draft. When combined with `--all`, unpublished workflows are skipped. (Workflows only, can't be used with `--version`.) | ### Workflows Export all your workflows to the standard output (terminal): ``` n8n export:workflow --all ``` Export a workflow by its ID and specify the output file name: ``` n8n export:workflow --id=<ID> --output=file.json ``` Export all workflows to a specific directory in a single file: ``` n8n export:workflow --all --output=backups/latest/file.json ``` Export all workflows to a specific directory using the `--backup` flag (details above): ``` n8n export:workflow --backup --output=backups/latest/ ``` #### Export a specific workflow version You can export a specific historical version of a workflow by passing its `versionId` with `--version`: ``` n8n export:workflow --id=<ID> --version=<versionId> --output=workflow-v1.json ``` #### Export the published version of a workflow Use `--published` to export the currently published/active version of a workflow rather than the current draft: ``` n8n export:workflow --id=<ID> --published --output=published.json ``` You can combine `--published` with `--all` to export every workflow's published version. Workflows that don't have a published version are skipped: ``` n8n export:workflow --all --published --output=workflows.json ``` Version metadata When exporting a workflow, n8n includes a `versionMetadata` property containing the workflow's historical name and description for that version. The import command preserves this data in the workflow history table on import. The current workflow's name and description aren't overridden. 
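To make the version metadata note concrete, here's an illustrative sketch of how a version-targeted export separates current and historical naming. Only the `versionMetadata` property name and the keep-current/store-historical behavior come from the docs; the surrounding file shape and the `splitOnImport` helper are assumptions for demonstration.

```javascript
// Illustrative shape only: `versionMetadata` is documented, but the exact
// field layout of an exported workflow file is assumed here.
const exportedWorkflow = {
  name: 'Order sync (current draft name)',
  versionMetadata: {
    name: 'Order sync v1',
    description: 'Name and description as they were for that version',
  },
};

// Hypothetical helper mirroring the documented import behavior: the current
// entity data is kept as-is, while historical metadata goes to the
// workflow history table.
function splitOnImport(workflow) {
  const { versionMetadata, ...entity } = workflow;
  return { entity, history: versionMetadata || null };
}

const { entity, history } = splitOnImport(exportedWorkflow);
console.log(entity.name); // Order sync (current draft name)
console.log(history.name); // Order sync v1
```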
### Credentials Export all your credentials to the standard output (terminal): ``` n8n export:credentials --all ``` Export credentials by their ID and specify the output file name: ``` n8n export:credentials --id=<ID> --output=file.json ``` Export all credentials to a specific directory in a single file: ``` n8n export:credentials --all --output=backups/latest/file.json ``` Export all credentials to a specific directory using the `--backup` flag (details above): ``` n8n export:credentials --backup --output=backups/latest/ ``` Export all credentials in plain text format. You can use this to migrate from one installation to another that has a different secret key in the configuration file. Sensitive information All sensitive information is visible in the files. ``` n8n export:credentials --all --decrypted --output=backups/decrypted.json ``` ## Import entities You can import entities from a previous `export:entities` command using this command. It allows importing entities into a database type that differs from the exported database type. Currently supported database types: SQLite and Postgres. The database is expected to be empty prior to import; you can force this with the `--truncateTables` parameter. Command flags: | Flag | Description | | ---------------- | -------------------------------------------------- | | --help | Help prompt. | | --inputDir | Input directory that holds output files for import | | --truncateTables | Truncate tables before import | ``` n8n import:entities --inputDir ./outputs --truncateTables true ``` ## Import workflows and credentials You can import your workflows and credentials from n8n using the CLI. Update the IDs When exporting workflows and credentials, n8n also exports their IDs. If you have workflows and credentials with the same IDs in your existing database, they will be overwritten. To avoid this, delete or change the IDs before importing. 
Available flags: | Flag | Description | | --------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | --help | Help prompt. | | --input | Input file name or directory if you use --separate. | | --projectId | Import the workflow or credential to the specified project. Can't be used with `--userId`. | | --separate | Imports `*.json` files from directory provided by --input. | | --userId | Import the workflow or credential to the specified user. Can't be used with `--projectId`. | | --skipMigrationChecks | Skip migration validation checks. | | --activeState | Controls the active state of imported workflows. Accepts `false` (default, deactivates all imported workflows) or `fromJson` (uses the `active` field from each workflow's JSON; multi-main mode only). | Migrating from SQLite n8n limits workflow and credential names to 128 characters, but SQLite doesn't enforce size limits. This might result in errors like **Data too long for column name** during the import process. In this case, you can edit the names from the n8n interface and export again, or edit the JSON file directly before importing. ### Workflows Known issue: cron triggers keep running after import The behavior of importing a previously active workflow differs depending on the mode you are running. This is a known bug. On multi-main and queue-mode instances, the previously active workflow's cron triggers are deactivated on import. On non-multi-main instances, the previously active workflow's cron triggers will remain running until you restart the n8n instance. 
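If you'd rather fix over-long names in the exported JSON than edit them in the n8n interface, a sketch like the following works. The `truncateName` helper is hypothetical; it only assumes the documented 128-character limit and a top-level `name` field in the exported workflow object.

```javascript
// Hypothetical fix for "Data too long for column name" during import:
// truncate workflow names to the documented 128-character limit before
// importing into a database that enforces it.
const MAX_NAME_LENGTH = 128;

function truncateName(workflow) {
  if (typeof workflow.name === 'string' && workflow.name.length > MAX_NAME_LENGTH) {
    workflow.name = workflow.name.slice(0, MAX_NAME_LENGTH);
  }
  return workflow;
}

// Demo: a name created under SQLite that exceeds the limit
const workflow = { name: 'x'.repeat(200) };
console.log(truncateName(workflow).name.length); // 128
```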
Import workflows from a specific file: ``` n8n import:workflow --input=file.json ``` Import all the workflow files as JSON from the specified directory: ``` n8n import:workflow --separate --input=backups/latest/ ``` Version metadata on import If the imported file includes a `versionMetadata` property (added by exports that target a specific version or the published version), n8n preserves that historical name and description in the workflow history table. The current workflow entity's name and description are kept as-is. By default, `import:workflow` deactivates every imported workflow. To preserve the `active` field from each JSON file instead, pass `--activeState=fromJson` (only supported in multi-main & queue mode): ``` n8n import:workflow --separate --input=backups/latest/ --activeState=fromJson ``` ### Credentials Import credentials from a specific file: ``` n8n import:credentials --input=file.json ``` Import all the credentials files as JSON from the specified directory: ``` n8n import:credentials --separate --input=backups/latest/ ``` ## License ### Clear Clear your existing license from n8n's database and reset n8n to default features: ``` n8n license:clear ``` If your license includes [floating entitlements](../../glossary/#entitlement-n8n), running this command will also attempt to release them back to the pool, making them available for other instances. ### Info Display information about the existing license: ``` n8n license:info ``` ## User management You can reset user management using the n8n CLI. This returns user management to its pre-setup state. It removes all user accounts. Use this if you forget your password and don't have SMTP set up to do password resets by email. ``` n8n user-management:reset ``` ### Disable MFA for a user If a user loses their recovery codes, you can disable MFA for them with this command. The user will then be able to log back in to set up MFA again. 
``` n8n mfa:disable --email=johndoe@example.com ``` ### Disable LDAP You can reset the LDAP settings using the command below. ``` n8n ldap:reset ``` ## Uninstall community nodes and credentials You can manage [community nodes](../../integrations/community-nodes/installation/) using the n8n CLI. For now, you can only uninstall community nodes and credentials, which is useful if a community node causes instability. Command flags: | Flag | Description | | ------------ | -------------------------------------------------------------------------------------------------------------------------- | | --help | Show CLI help. | | --credential | The credential type. Get this value by visiting the node's `.credentials.ts` file and getting the value of `name`. | | --package | Package name of the community node. | | --uninstall | Uninstalls the node. | | --userId | The ID of the user who owns the credential. On self-hosted, query the database. On cloud, query the API with your API key. | ### Nodes Uninstall a community node by package name: ``` n8n community-node --uninstall --package <package-name> ``` For example, to uninstall the [Evolution API community node](https://www.npmjs.com/package/n8n-nodes-evolution-api), type: ``` n8n community-node --uninstall --package n8n-nodes-evolution-api ``` ### Credentials Uninstall a community node credential: ``` n8n community-node --uninstall --credential <credential-name> --userId <user-id> ``` For example, to uninstall the [Evolution API community node credential](https://www.npmjs.com/package/n8n-nodes-evolution-api), visit the [repository](https://github.com/oriondesign2015/n8n-nodes-evolution-api) and navigate to the [`credentials.ts` file](https://github.com/oriondesign2015/n8n-nodes-evolution-api/blob/main/credentials/EvolutionApi.credentials.ts) to find the `name`: ``` n8n community-node --uninstall --credential evolutionApi --userId 1234 ``` ## Security audit You can run a [security audit](../securing/security-audit/) on your n8n instance to detect common security issues. 
``` n8n audit ``` # Community Edition Features The community edition includes almost the complete feature set of n8n, except for the features listed here. The community edition doesn't include these features: - [Custom Variables](../../code/variables/) - [Environments](../../source-control-environments/) - [External secrets](../../external-secrets/) - [External storage for binary data](../scaling/external-storage/) - [Log streaming](../../log-streaming/) ([Logging](../logging-monitoring/logging/) *is* included) - [Multi-main mode](../scaling/queue-mode/#multi-main-setup) ([Queue mode](../scaling/queue-mode/) *is* included) - [Projects](../../user-management/rbac/projects/) - SSO ([SAML](../securing/set-up-sso/), [LDAP](../../user-management/ldap/)) - Sharing ([workflows](../../workflows/sharing/), [credentials](../../credentials/credential-sharing/)) (Only the instance owner and the user who creates them can access workflows and credentials) - [Version control using Git](../../source-control-environments/) These features are available on the Enterprise Cloud plan and the self-hosted Enterprise edition. Some of these features are available on the Starter and Pro Cloud plans, and the Business self-hosted plan. See [pricing](https://n8n.io/pricing/) for reference. ## Registered Community Edition You can unlock extra features by registering your n8n community edition. You register with your email and receive a license key. Registering unlocks these features for the community edition: - Folders: Organize your workflows into tidy folders - [Debug in editor](../../workflows/executions/debug/): Copy and [pin](../../glossary/#data-pinning-n8n) execution data when working on a workflow - [Custom execution data](../../workflows/executions/custom-executions-data/): Save, find, and annotate execution metadata To register a new community edition instance, select the option during your initial account creation. To register an existing community edition instance: 1. 
Select the **three dots icon** in the lower-left corner. 1. Select **Settings** and then **Usage and plan**. 1. Select **Unlock** to enter your email and then select **Send me a free license key**. 1. Check your email for the account you entered. Once you have a license key, activate it by clicking the button in the license email or by visiting **Options > Settings > Usage and plan** and selecting **Enter activation key**. Once activated, your license will not expire. We may change the unlocked features in the future. This will not impact previously unlocked features. # Database structure This page describes the purpose of each table in the n8n database. ## Database and query technology By default, n8n uses SQLite as the database. If you are using another database, the structure will be similar, but the data types may differ depending on the database. n8n uses [TypeORM](https://github.com/typeorm/typeorm) for queries and migrations. To inspect the n8n database, you can use [DBeaver](https://dbeaver.io), which is an open-source universal database tool. ## Tables These are the tables n8n creates during setup. ### auth_identity Stores details of external authentication providers when using [SAML](../../../user-management/saml/). ### auth_provider_sync_history Stores the history of a SAML connection. ### credentials_entity Stores the [credentials](../../../glossary/#credential-n8n) used to authenticate with integrations. ### event_destinations Contains the destination configurations for [Log streaming](../../../log-streaming/). ### execution_data Contains the workflow at time of running, and the execution data. ### execution_entity Stores all saved workflow executions. Workflow settings can affect which executions n8n saves. ### execution_metadata Stores [Custom executions data](../../../workflows/executions/custom-executions-data/). ### installed_nodes Lists the [community nodes](../../../integrations/community-nodes/installation/) installed in your n8n instance. 
### installed_packages Details of npm community node packages installed in your n8n instance. [installed_nodes](#installed_nodes) lists each individual node. `installed_packages` lists npm packages, which may contain more than one node. ### migrations A log of all database migrations. Read more about [Migrations](https://typeorm.io/docs/advanced-topics/migrations/) in TypeORM's documentation. ### project Lists the [projects](../../../user-management/rbac/projects/) in your instance. ### project_relation Describes the relationship between a user and a [project](../../../user-management/rbac/projects/), including the user's [role type](../../../user-management/rbac/role-types/). ### role Not currently used. For use in future work on custom roles. ### settings Records custom instance settings. These are settings that you can't control using environment variables. They include: - Whether the instance owner is set up - Whether the user chose to skip owner and user management setup - Whether certain types of authentication, including SAML and LDAP, are on - License key ### shared_credentials Maps credentials to users. ### shared_workflow Maps workflows to users. ### tag_entity All workflow tags created in the n8n instance. This table lists the tags. [workflows_tags](#workflows_tags) records which workflows have which tags. ### user Contains user data. ### variables Stores [variables](../../../code/variables/). ### webhook_entity Records the active webhooks in your n8n instance's workflows. This isn't just webhooks used in the Webhook node. It includes all active webhooks used by any trigger node. ### workflow_entity Your n8n instance's saved workflows. ### workflow_history Stores previous versions of workflows. ### workflow_statistics Counts workflow IDs and their status. ### workflows_tags Maps tags to workflows. [tag_entity](#tag_entity) contains tag details. 
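To illustrate how `tag_entity` and `workflows_tags` work together, here's a conceptual sketch of resolving a workflow's tags through the join table. The data and column names are illustrative, not the exact n8n schema.

```javascript
// Illustrative only: tag_entity holds the tags, workflows_tags is the
// many-to-many join table between workflows and tags.
const tagEntity = [
  { id: 1, name: 'production' },
  { id: 2, name: 'experimental' },
];
const workflowsTags = [
  { workflowId: 'wf-1', tagId: 1 },
  { workflowId: 'wf-1', tagId: 2 },
  { workflowId: 'wf-2', tagId: 1 },
];

// Resolve a workflow's tag names via the join table.
function tagsForWorkflow(workflowId) {
  const tagIds = workflowsTags
    .filter((row) => row.workflowId === workflowId)
    .map((row) => row.tagId);
  return tagEntity.filter((tag) => tagIds.includes(tag.id)).map((tag) => tag.name);
}

console.log(tagsForWorkflow('wf-1')); // [ 'production', 'experimental' ]
```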
## Entity Relationship Diagram (ERD) # Architecture Understanding n8n's underlying architecture is helpful if you need to: - Embed n8n - Customize n8n's default databases This section is a work in progress. If you have questions, please try the [forum](https://community.n8n.io/) and let n8n know which architecture documents would be useful for you. # Configuration You can change n8n's settings using environment variables. For a full list of available configurations, see [Environment Variables](../environment-variables/). ## Set environment variables by command line ### npm For npm, set your desired environment variables in the terminal. The command depends on your command line. Bash CLIs: ``` export <variable>=<value> ``` In cmd.exe: ``` set <variable>=<value> ``` In PowerShell: ``` $env:<variable>=<value> ``` ### Docker In Docker you can use the `-e` flag from the command line: ``` docker run -it --rm \ --name n8n \ -p 5678:5678 \ -e N8N_TEMPLATES_ENABLED="false" \ docker.n8n.io/n8nio/n8n ``` ## Docker Compose file In Docker, you can set your environment variables in the `n8n: environment:` element of your `docker-compose.yaml` file. For example: ``` n8n: environment: - N8N_TEMPLATES_ENABLED=false ``` ## Keeping sensitive data in separate files You can append `_FILE` to individual environment variables to provide their configuration in a separate file, enabling you to avoid passing sensitive details using environment variables. n8n loads the data from the file with the given name, making it possible to load data from [Docker-Secrets](https://docs.docker.com/engine/swarm/secrets/) and [Kubernetes-Secrets](https://kubernetes.io/docs/concepts/configuration/secret/). Refer to [Environment variables](../environment-variables/) for details on each variable. While most environment variables can use the `_FILE` suffix, it's more beneficial for sensitive data such as [credentials](../../../glossary/#credential-n8n) and database configuration. 
Here are some examples: ``` CREDENTIALS_OVERWRITE_DATA_FILE=/path/to/credentials_data DB_TYPE_FILE=/path/to/db_type DB_POSTGRESDB_DATABASE_FILE=/path/to/database_name DB_POSTGRESDB_HOST_FILE=/path/to/database_host DB_POSTGRESDB_PORT_FILE=/path/to/database_port DB_POSTGRESDB_USER_FILE=/path/to/database_user DB_POSTGRESDB_PASSWORD_FILE=/path/to/database_password DB_POSTGRESDB_SCHEMA_FILE=/path/to/database_schema DB_POSTGRESDB_SSL_CA_FILE=/path/to/ssl_ca DB_POSTGRESDB_SSL_CERT_FILE=/path/to/ssl_cert DB_POSTGRESDB_SSL_KEY_FILE=/path/to/ssl_key DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED_FILE=/path/to/ssl_reject_unauth ``` # Credential overwrites Credential overwrites let you set credential data globally. This data isn't visible to users, but n8n uses it automatically in the background - for example, to enable OAuth login using a "Connect" button without exposing client secrets. In the Editor UI, n8n hides all overwritten fields by default, so users can authenticate with OAuth using the "Connect" button on the credential. For the environment variables used to configure credential overwrites, refer to [Credentials environment variables](../environment-variables/credentials/). ## Using environment variables Set `CREDENTIALS_OVERWRITE_DATA` to `{ CREDENTIAL_NAME: { PARAMETER: VALUE }}`. Warning This approach isn't recommended. Environment variables aren't protected in n8n, so the data can leak to users. ## Using the REST API The recommended approach is to load the data using a custom REST endpoint. 1. Set `CREDENTIALS_OVERWRITE_ENDPOINT` to the path where the endpoint should be available: ``` export CREDENTIALS_OVERWRITE_ENDPOINT=send-credentials ``` Optionally, set `CREDENTIALS_OVERWRITE_ENDPOINT_AUTH_TOKEN` to require a bearer token for accessing the endpoint. Note Without an auth token, the endpoint can only be called once for security reasons. 1. Prepare a JSON file with the credentials to overwrite. 
For example, `oauth-credentials.json` for Asana and GitHub: ``` { "asanaOAuth2Api": { "clientId": "", "clientSecret": "" }, "githubOAuth2Api": { "clientId": "", "clientSecret": "" } } ``` 1. Send the file to your n8n instance: ``` curl -H "Content-Type: application/json" --data @oauth-credentials.json http://localhost:5678/send-credentials ``` If `CREDENTIALS_OVERWRITE_ENDPOINT_AUTH_TOKEN` is set to `secure-token`: ``` curl -H "Content-Type: application/json" -H "Authorization: Bearer secure-token" --data @oauth-credentials.json http://localhost:5678/send-credentials ``` Note Credentials can extend other credentials. For example, `googleSheetsOAuth2Api` extends `googleOAuth2Api`. You can set parameters on the parent (`googleOAuth2Api`) and all child credentials will use them. ## Persistence To store credential overwrites in the database and propagate them to all workers in multi-instance or queue mode, enable: ``` export CREDENTIALS_OVERWRITE_PERSISTENCE=true ``` When enabled, n8n stores the encrypted overwrites in the `settings` table and broadcasts a `reload-overwrite-credentials` event so workers reload the latest values. When disabled, overwrites remain in memory on the process that loaded them and n8n doesn't propagate them to workers or preserve them across restarts. # External hooks External hooks let you run custom code whenever n8n performs a specific operation. Use them to log data, change data, or forbid an action by throwing an error. There are two types: - **Backend hooks**: run server-side, registered using the `EXTERNAL_HOOK_FILES` environment variable. - **Frontend hooks**: run in the browser, loaded with a script tag. For the environment variables used to register hooks, refer to [External hooks environment variables](../environment-variables/external-hooks/). 
## Backend hooks ### Available hooks | Hook | Arguments | Description | | ------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- | | `credentials.create` | `[credentialData: ICredentialsDb]` | Called before new credentials get created. Use to restrict the number of credentials. | | `credentials.delete` | `[id: credentialId]` | Called before credentials get deleted. | | `credentials.update` | `[credentialData: ICredentialsDb]` | Called before n8n saves existing credentials. | | `frontend.settings` | `[frontendSettings: IN8nUISettings]` | Gets called on n8n startup. Allows you to, for example, overwrite frontend data like the displayed OAuth URL. | | `n8n.ready` | `[app: App]` | Called once n8n is ready. Use to, for example, register custom API endpoints. | | `n8n.stop` | | Called when an n8n process gets stopped. Allows you to save some process data. | | `oauth1.authenticate` | `[oAuthOptions: clientOAuth1.Options, oauthRequestData: {oauth_callback: string}]` | Called before an OAuth1 authentication. Use to overwrite an OAuth callback URL. | | `oauth2.callback` | `[oAuth2Parameters: {clientId: string, clientSecret: string \| undefined, accessTokenUri: string, authorizationUri: string, redirectUri: string, scopes: string[]}]` | Called in an OAuth2 callback. Use to overwrite an OAuth callback URL. | | `workflow.activate` | `[workflowData: IWorkflowDb]` | Called before a workflow gets activated. Use to restrict the number of active workflows. | | `workflow.afterCreate` | `[workflowId: string]` | Called after a workflow gets created. | | `workflow.afterDelete` | `[workflowId: string]` | Called after a workflow gets deleted. 
| | `workflow.afterUpdate` | `[workflowData: IWorkflowBase]` | Called after an existing workflow gets saved. | | `workflow.create` | `[workflowData: IWorkflowBase]` | Called before a workflow gets created. Use to restrict the number of saved workflows. | | `workflow.delete` | `[workflowId: string]` | Called before a workflow gets deleted. | | `workflow.postExecute` | `[run: IRun, workflowData: IWorkflowBase]` | Called after a workflow gets executed. | | `workflow.preExecute` | `[workflow: Workflow, mode: WorkflowExecuteMode]` | Called before a workflow gets executed. Allows you to count or limit the number of workflow executions. | | `workflow.update` | `[workflowData: IWorkflowBase]` | Called before an existing workflow gets saved. | | `workflow.afterArchive` | `[workflowId: string]` | Called after you archive a workflow. | | `workflow.afterUnarchive` | `[workflowId: string]` | Called after you restore a workflow from the archive. | ### Registering hooks Set hooks by registering a hook file that contains the hook functions. To register a hook, set the environment variable `EXTERNAL_HOOK_FILES`. You can set the variable to a single file: `EXTERNAL_HOOK_FILES=/data/hook.js` Or to contain multiple files separated by a colon: `EXTERNAL_HOOK_FILES=/data/hook1.js:/data/hook2.js` ### Hook files Hook files are regular JavaScript files that have the following format: ``` module.exports = { "frontend": { "settings": [ async function (settings) { settings.oauthCallbackUrls.oauth1 = 'https://n8n.example.com/oauth1/callback'; settings.oauthCallbackUrls.oauth2 = 'https://n8n.example.com/oauth2/callback'; } ] }, "workflow": { "activate": [ async function (workflowData) { const activeWorkflows = await this.dbCollections.Workflow.count({ active: true }); if (activeWorkflows > 1) { throw new Error( 'Active workflow limit reached.' ); } } ] } } ``` ### Hook functions A hook or a hook file can contain multiple hook functions, with all functions executed one after another. 
If the parameters of the hook function are objects, you can modify that data to change n8n's behavior. You can also access the database in any hook function using `this.dbCollections` (refer to the code sample in [Hook files](#hook-files) above). ## Frontend external hooks Like backend external hooks, it's possible to define external hooks in the frontend code that n8n executes whenever a user performs a specific operation. You can use them, for example, to log or modify data. ### Available hooks | Hook | Description | | ------------------------------------------ | --------------------------------------------------------------------------- | | `credentialsEdit.credentialTypeChanged` | Called when an existing credential's type changes. | | `credentials.create` | Called when someone creates a new credential. | | `credentialsList.dialogVisibleChanged` | | | `dataDisplay.nodeTypeChanged` | | | `dataDisplay.onDocumentationUrlClick` | Called when someone selects the help documentation link. | | `execution.open` | Called when an existing execution opens. | | `executionsList.openDialog` | Called when someone selects an execution from existing Workflow Executions. | | `expressionEdit.itemSelected` | | | `expressionEdit.dialogVisibleChanged` | | | `nodeCreateList.filteredNodeTypesComputed` | | | `nodeCreateList.nodeFilterChanged` | Called when someone makes any changes to the node panel filter. | | `nodeCreateList.selectedTypeChanged` | | | `nodeCreateList.mounted` | | | `nodeCreateList.destroyed` | | | `nodeSettings.credentialSelected` | | | `nodeSettings.valueChanged` | | | `nodeView.createNodeActiveChanged` | | | `nodeView.addNodeButton` | | | `nodeView.mount` | | | `pushConnection.executionFinished` | | | `showMessage.showError` | | | `runData.displayModeChanged` | | | `workflow.activeChange` | | | `workflow.activeChangeCurrent` | | | `workflow.afterUpdate` | Called when someone updates an existing workflow.
| | `workflow.open` | | | `workflowRun.runError` | | | `workflowRun.runWorkflow` | Called when a workflow executes. | | `workflowSettings.dialogVisibleChanged` | | | `workflowSettings.saveSettings` | Called when someone saves the settings of a workflow. | ### Registering frontend hooks You can set hooks by loading the hooks script on the page. One way to do this is by creating a hooks file in the project and adding a script tag in your `editor-ui/public/index.html` file, for example: ``` <script src="https://example.com/external-hooks.js"></script> ``` ### Frontend hook files Frontend external hook files are regular JavaScript files that have the following format: ``` window.n8nExternalHooks = { nodeView: { mount: [ function (store, meta) { // do something }, ], createNodeActiveChanged: [ function (store, meta) { // do something }, function (store, meta) { // do something else }, ], addNodeButton: [ function (store, meta) { // do something }, ], }, }; ``` ### Frontend hook functions You can define multiple hook functions per hook. n8n calls each hook function with the following arguments: - `store`: The Vuex store object. You can use this to change or get data from the store. - `meta`: The object that contains any data provided by the hook. To see what's passed, search for the hook in the `editor-ui` package. # Manage instance settings using environment variables You can manage a subset of instance settings from environment variables instead of configuring them through the UI. This is useful when you provision n8n instances automatically, such as through an internal deployment pipeline. Each supported area has a dedicated environment variable ending in `_MANAGED_BY_ENV`. Set this variable to `true` to activate environment variable management for that area. n8n then applies the related environment variables and locks the matching UI controls. ## How it works When you set an area's `*_MANAGED_BY_ENV` variable to `true`: - n8n reapplies the settings from environment variables **on every startup**. - The matching UI controls become **read-only**.
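The gating behavior can be illustrated with a small standalone sketch. This is an illustration of the activation pattern only, not n8n's implementation, and `resolveManagedSettings` is a hypothetical helper:

```javascript
// Settings for an area apply only when its *_MANAGED_BY_ENV flag is "true";
// otherwise the related variables are ignored entirely.
function resolveManagedSettings(env, prefix, keys) {
  if (env[`${prefix}_MANAGED_BY_ENV`] !== 'true') {
    return { managed: false, settings: {} }; // UI stays writable
  }
  const settings = {};
  for (const key of keys) {
    const value = env[`${prefix}_${key}`];
    if (value !== undefined) settings[key] = value;
  }
  return { managed: true, settings }; // matching UI controls become read-only
}

// Flag set: the area's variables are applied on startup.
const owner = resolveManagedSettings(
  { N8N_INSTANCE_OWNER_MANAGED_BY_ENV: 'true', N8N_INSTANCE_OWNER_EMAIL: 'owner@example.com' },
  'N8N_INSTANCE_OWNER',
  ['EMAIL', 'FIRST_NAME'],
);

// Flag unset: the related variable has no effect, even though it's set.
const mcp = resolveManagedSettings(
  { N8N_MCP_ACCESS_ENABLED: 'true' },
  'N8N_MCP',
  ['ACCESS_ENABLED'],
);
```

The second call shows the pitfall described below: setting an area's variables without its activating flag does nothing.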
When `*_MANAGED_BY_ENV` is `false` (the default), n8n ignores the related environment variables, even if you set them. Values persist when you turn off `*_MANAGED_BY_ENV` Setting `*_MANAGED_BY_ENV` back to `false` restores UI write access but keeps the values that were last applied. Edit them through the UI afterward if you want to change them. Unexpected read-only UI controls If a setting appears as read-only and you didn't expect it, check whether the matching `*_MANAGED_BY_ENV` variable is `true` in your environment. The supported areas and their activating variables: - Instance owner: `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` - SSO: `N8N_SSO_MANAGED_BY_ENV` - Security policy: `N8N_SECURITY_POLICY_MANAGED_BY_ENV` - Log streaming: `N8N_LOG_STREAMING_MANAGED_BY_ENV` - MCP: `N8N_MCP_MANAGED_BY_ENV` - Community packages: `N8N_COMMUNITY_PACKAGES_MANAGED_BY_ENV` Set `*_MANAGED_BY_ENV` to activate the group The other environment variables for an area have no effect unless `*_MANAGED_BY_ENV` is `true`. Set it to `true` to activate the group. ## Instance owner Available from n8n v2.17.0 Pre-provision the [instance owner](../user-management-self-hosted/) from environment variables instead of going through the in-app setup. `N8N_INSTANCE_OWNER_PASSWORD_HASH` must be a bcrypt hash This variable expects a pre-hashed bcrypt value. Setting a plaintext password breaks login. | Variable | Type | Default | Description | | ----------------------------------- | ------- | ------- | --- | | `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage the instance owner from environment variables. When `true`, n8n overwrites the instance owner details below on every startup, locks the UI control for that user, and rejects API writes.
| | `N8N_INSTANCE_OWNER_EMAIL` | String | - | Email address for the instance owner. | | `N8N_INSTANCE_OWNER_FIRST_NAME` | String | - | First name for the instance owner. | | `N8N_INSTANCE_OWNER_LAST_NAME` | String | - | Last name for the instance owner. | | `N8N_INSTANCE_OWNER_PASSWORD_HASH` | String | - | Bcrypt hash of the instance owner's password. Setting a plaintext password breaks login. | ## SSO Available from n8n v2.18.0 Feature availability Single sign-on is available on Business and Enterprise plans. Configure [single sign-on](../../securing/set-up-sso/) from environment variables. ### Activation and shared settings | Variable | Type | Default | Description | | -------------------------------- | ---------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_SSO_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage SSO from environment variables. When `true`, n8n applies the SSO variables on every startup and locks the matching UI controls. | | `N8N_SSO_USER_ROLE_PROVISIONING` | Enum string: `disabled`, `instance_role`, `instance_and_project_roles` | `disabled` | How n8n provisions roles for users who sign in through SSO. `disabled` doesn't provision any roles. `instance_role` provisions the instance-level role only. `instance_and_project_roles` provisions both instance and project roles. | ### OIDC | Variable | Type | Default | Description | | --------------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------- | | `N8N_SSO_OIDC_LOGIN_ENABLED` | Boolean | `false` | Whether to enable OIDC login. 
| | `N8N_SSO_OIDC_CLIENT_ID` | String | - | OIDC client ID issued by your identity provider. | | `N8N_SSO_OIDC_CLIENT_SECRET` | String | - | OIDC client secret issued by your identity provider. | | `N8N_SSO_OIDC_DISCOVERY_ENDPOINT` | String | - | OIDC discovery endpoint URL (the `.well-known/openid-configuration` URL for your identity provider). | | `N8N_SSO_OIDC_PROMPT` | String | - | Optional OIDC `prompt` parameter to send with the authorization request, for example `login` or `consent`. | | `N8N_SSO_OIDC_ACR_VALUES` | String | - | Optional OIDC `acr_values` parameter. Use this to request a specific authentication context, for example a step-up MFA flow. | ### SAML SAML metadata variables are mutually exclusive Set either `N8N_SSO_SAML_METADATA` (inline XML) or `N8N_SSO_SAML_METADATA_URL` (URL), not both. | Variable | Type | Default | Description | | ---------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------- | | `N8N_SSO_SAML_LOGIN_ENABLED` | Boolean | `false` | Whether to enable SAML login. | | `N8N_SSO_SAML_METADATA` | String | - | SAML identity provider metadata as an XML string. Mutually exclusive with `N8N_SSO_SAML_METADATA_URL`; don't set both. | | `N8N_SSO_SAML_METADATA_URL` | String | - | URL to fetch SAML identity provider metadata from. Mutually exclusive with `N8N_SSO_SAML_METADATA`; don't set both. | ## Security policy Available from n8n v2.18.0 Manage the instance security policy from environment variables, including MFA enforcement and personal space restrictions. 
| Variable | Type | Default | Description | | --------------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_SECURITY_POLICY_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage the security policy from environment variables. When `true`, n8n applies the security policy variables on every startup and locks the matching UI controls. | | `N8N_MFA_ENFORCED_ENABLED` | Boolean | `false` | Whether to enforce two-factor authentication for all users (`true`) or not (`false`). | | `N8N_PERSONAL_SPACE_PUBLISHING_ENABLED` | Boolean | `true` | Whether users can publish from their personal space (`true`) or not (`false`). | | `N8N_PERSONAL_SPACE_SHARING_ENABLED` | Boolean | `true` | Whether users can share resources from their personal space (`true`) or not (`false`). | ## Log streaming Available from n8n v2.19.0 Manage [log streaming](../../../log-streaming/) destinations from environment variables. See [Configure using environment variables](../../../log-streaming/#configure-using-environment-variables) for the per-destination JSON shape. | Variable | Type | Default | Description | | ---------------------------------- | ----------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_LOG_STREAMING_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage log streaming from environment variables. When `true`, n8n applies the log streaming variables on every startup and locks the matching UI controls. | | `N8N_LOG_STREAMING_DESTINATIONS` | JSON string | - | JSON array of log streaming destinations. Each destination is an object with a `type` of `webhook`, `syslog`, or `sentry`, plus the configuration for that type. 
| ## MCP Available from n8n v2.20.0 Manage [instance-level MCP access](../../../advanced-ai/mcp/accessing-n8n-mcp-server/) from environment variables. | Variable | Type | Default | Description | | ------------------------ | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_MCP_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage MCP settings from environment variables. When `true`, n8n applies the MCP variables on every startup and locks the matching UI controls. | | `N8N_MCP_ACCESS_ENABLED` | Boolean | `false` | Whether to enable instance-level MCP access (`true`) or not (`false`). | ## Community packages Available from n8n v2.21.0 Manage the set of installed [community packages](../../../integrations/community-nodes/installation/) from environment variables. n8n reconciles the installed packages against the list on every startup. Managed packages can't be uninstalled or updated through the UI. `N8N_COMMUNITY_PACKAGES_ENABLED` must also be set to `true` (the default). When community packages are disabled, n8n ignores `N8N_COMMUNITY_PACKAGES_MANAGED_BY_ENV` and logs a warning. | Variable | Type | Default | Description | | --------------------------------------- | ----------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_COMMUNITY_PACKAGES_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage installed community packages from environment variables. 
When `true`, n8n reconciles the installed packages against `N8N_COMMUNITY_PACKAGES` on every startup, installing missing packages, correcting versions, and **uninstalling any currently-installed packages that aren't in the list**. The **Community nodes** settings page becomes read-only: you can't install, update, or uninstall packages from the UI while this is enabled. | | `N8N_COMMUNITY_PACKAGES` | JSON string | - | JSON array of community packages to install. Each entry is an object with a `name` (required) and optional `version` and `checksum` fields. You can also embed the version in the name, as in `name@version`. See [environment variable installation](../../../integrations/community-nodes/installation/env-install/) for the full per-field reference. | ## Combined example The following example configures an instance with all six areas managed by environment variables. It creates the instance owner, configures OIDC SSO, enforces MFA, registers a webhook log streaming destination, enables MCP access, and manages a community package.
``` # Instance owner export N8N_INSTANCE_OWNER_MANAGED_BY_ENV=true export N8N_INSTANCE_OWNER_EMAIL= export N8N_INSTANCE_OWNER_FIRST_NAME= export N8N_INSTANCE_OWNER_LAST_NAME= export N8N_INSTANCE_OWNER_PASSWORD_HASH= # SSO using OIDC export N8N_SSO_MANAGED_BY_ENV=true export N8N_SSO_USER_ROLE_PROVISIONING=instance_role export N8N_SSO_OIDC_LOGIN_ENABLED=true export N8N_SSO_OIDC_CLIENT_ID= export N8N_SSO_OIDC_CLIENT_SECRET= export N8N_SSO_OIDC_DISCOVERY_ENDPOINT= # Security policy export N8N_SECURITY_POLICY_MANAGED_BY_ENV=true export N8N_MFA_ENFORCED_ENABLED=true export N8N_PERSONAL_SPACE_PUBLISHING_ENABLED=false export N8N_PERSONAL_SPACE_SHARING_ENABLED=false # Log streaming export N8N_LOG_STREAMING_MANAGED_BY_ENV=true export N8N_LOG_STREAMING_DESTINATIONS='[{"type":"webhook","url":"https://logs.example.com/n8n"}]' # MCP export N8N_MCP_MANAGED_BY_ENV=true export N8N_MCP_ACCESS_ENABLED=true # Community packages export N8N_COMMUNITY_PACKAGES_MANAGED_BY_ENV=true export N8N_COMMUNITY_PACKAGES='[{"name":"n8n-nodes-foo","version":"1.2.3"}]' ``` ## Set environment variables For the supported ways to set environment variables, see [Configuration methods](../configuration-methods/). # Supported databases By default, n8n uses SQLite to save credentials, past executions, and workflows. n8n also supports PostgresDB (only [actively maintained versions](https://www.postgresql.org/support/versioning/)). ## Database type by n8n installation The database type used varies depending on your n8n installation: ### Self-hosted n8n By default, self-hosted installations use **SQLite**. You can optionally configure PostgreSQL by setting the appropriate environment variables (see [PostgresDB configuration](#postgresdb)). 
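As a rough sketch of how such configuration resolves (a hypothetical helper, not n8n's code), each `DB_POSTGRESDB_*` variable falls back to a documented default when unset:

```javascript
// Sketch: resolve a Postgres connection config from DB_POSTGRESDB_* variables,
// falling back to the documented defaults when a variable is unset.
const PG_DEFAULTS = {
  DATABASE: 'n8n',
  HOST: 'localhost',
  PORT: '5432',
  USER: 'postgres',
  PASSWORD: '',
  SCHEMA: 'public',
};

function resolvePostgresConfig(env) {
  if (env.DB_TYPE !== 'postgresdb') return null; // SQLite default applies
  const config = {};
  for (const [key, fallback] of Object.entries(PG_DEFAULTS)) {
    const value = env[`DB_POSTGRESDB_${key}`];
    config[key.toLowerCase()] = value !== undefined ? value : fallback;
  }
  config.port = Number(config.port);
  return config;
}

// Only the host is overridden; everything else uses the documented default.
const config = resolvePostgresConfig({ DB_TYPE: 'postgresdb', DB_POSTGRESDB_HOST: 'db' });
```

Here `config.host` is `'db'` while `config.database`, `config.port`, and the rest keep their defaults.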
### n8n Cloud n8n Cloud installations use different databases depending on your plan tier: - **SQLite**: Trial, Starter, and Pro plans, as well as legacy Enterprise plans - **PostgreSQL**: Enterprise Scaling plans only ## Shared settings The following environment variables are used by all databases: - `DB_TABLE_PREFIX` (default: -) - Prefix for table names ## PostgresDB To use PostgresDB as the database, you can provide the following environment variables: - `DB_TYPE=postgresdb` - `DB_POSTGRESDB_DATABASE` (default: 'n8n') - `DB_POSTGRESDB_HOST` (default: 'localhost') - `DB_POSTGRESDB_PORT` (default: 5432) - `DB_POSTGRESDB_USER` (default: 'postgres') - `DB_POSTGRESDB_PASSWORD` (default: empty) - `DB_POSTGRESDB_SCHEMA` (default: 'public') - `DB_POSTGRESDB_SSL_CA` (default: undefined): Path to the server's CA certificate used to validate the connection (opportunistic encryption isn't supported) - `DB_POSTGRESDB_SSL_CERT` (default: undefined): Path to the client's TLS certificate - `DB_POSTGRESDB_SSL_KEY` (default: undefined): Path to the client's private key corresponding to the certificate - `DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED` (default: true): Whether to reject TLS connections that fail validation ``` export DB_TYPE=postgresdb export DB_POSTGRESDB_DATABASE=n8n export DB_POSTGRESDB_HOST=postgresdb export DB_POSTGRESDB_PORT=5432 export DB_POSTGRESDB_USER=n8n export DB_POSTGRESDB_PASSWORD=n8n export DB_POSTGRESDB_SCHEMA=n8n # optional: export DB_POSTGRESDB_SSL_CA=$(pwd)/ca.crt export DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED=false n8n start ``` ### Required permissions n8n needs to create and modify the schemas of the tables it uses.
Recommended permissions (note that identifiers containing hyphens must be double-quoted in PostgreSQL): ``` CREATE DATABASE "n8n-db"; CREATE USER "n8n-user" WITH PASSWORD 'random-password'; GRANT ALL PRIVILEGES ON DATABASE "n8n-db" TO "n8n-user"; ``` ### TLS You can choose between these configurations: - Not declaring any of the `DB_POSTGRESDB_SSL_*` variables (default): Connect with SSL off - Declaring only `DB_POSTGRESDB_SSL_CA` and the reject-unauthorized flag: Connect with SSL on and verify the server's certificate - Declaring `DB_POSTGRESDB_SSL_CERT` and `DB_POSTGRESDB_SSL_KEY` in addition to the above: Use the certificate and key for client TLS authentication ## SQLite This is the default database, used if you don't configure another. The database file is located at: `~/.n8n/database.sqlite` # Task runners Task runners are a generic mechanism to execute tasks in a secure and performant way. They're used to execute user-provided JavaScript and Python code in the [Code node](../../../integrations/builtin/core-nodes/n8n-nodes-base.code/). This document describes how task runners work and how you can configure them. Internal mode not recommended for production Using internal mode in production environments can pose a security risk. For production deployments, use [external mode](#external-mode) to ensure proper isolation between n8n and task runner processes. Refer to [Hardening task runners](../../securing/hardening-task-runners/) for additional security measures. ## How it works The task runner feature consists of these components: one or more task runners, a task broker, and a task requester. Task runners connect to the task broker using a websocket connection. A task requester submits a task request to the broker, where an available task runner can pick it up for execution. The runner executes the task and submits the results to the task requester. The task broker coordinates communication between the runner and the requester. The n8n instance (main and worker) acts as the broker. The Code node in this case is the task requester. ## Task runner modes You can use task runners in two different modes: internal and external.
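The broker/requester/runner roles described under "How it works" can be modeled with a minimal in-memory sketch. This is a conceptual illustration only; the real components are separate processes that communicate over websockets, and a real runner executes code in an isolated process rather than with `eval`:

```javascript
// Requesters submit tasks to the broker's queue; an idle runner picks a
// task up, executes it, and reports the result back through a callback.
class TaskBroker {
  constructor() {
    this.queue = [];
  }
  submit(task, onResult) {
    this.queue.push({ task, onResult }); // from a requester (the Code node)
  }
  pickUp() {
    return this.queue.shift(); // a runner asking for work
  }
}

const results = [];
const broker = new TaskBroker();

// The requester submits user-provided code for execution:
broker.submit({ code: '1 + 1' }, (result) => results.push(result));

// A runner picks the task up, "executes" it, and reports back:
const job = broker.pickUp();
job.onResult(eval(job.task.code));

console.log(results); // the requester received the runner's result
```

The key point is the indirection: the Code node never talks to a runner directly; all traffic flows through the broker, which is why the broker's listen address and auth token appear in the configuration below.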
### Internal mode In internal mode, the n8n instance launches the task runner as a child process. The n8n process monitors and manages the life cycle of the task runner. The task runner process shares the same `uid` and `gid` as n8n. This is **not** recommended for production. ### External mode In external mode, a [launcher application](https://github.com/n8n-io/task-runner-launcher) launches task runners on demand and manages their lifecycle. Typically, this means that next to n8n you add a sidecar container running the [`n8nio/runners`](https://hub.docker.com/r/n8nio/runners) image containing the launcher, the JS task runner and the Python task runner. This sidecar container is independent from the n8n instance. When using [Queue mode](../../scaling/queue-mode/), each worker needs to have its own sidecar container for task runners. In addition, if [`OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS=false`](../environment-variables/queue-mode/#queue-mode-environment-variables), then your main instance will run manual executions and needs its own sidecar container for task runners as well. Please note that running n8n with offloading disabled isn't recommended for production. ## Setting up external mode In external mode, you run the `n8nio/runners` image as a sidecar container next to n8n. Below you will find a docker compose as a reference. Keep in mind that the `n8nio/runners` image version must match that of the `n8nio/n8n` image, and the n8n version must be >=1.111.0. ``` services: n8n: image: n8nio/n8n:1.111.0 container_name: n8n-main environment: - N8N_RUNNERS_ENABLED=true - N8N_RUNNERS_MODE=external - N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0 - N8N_RUNNERS_AUTH_TOKEN=your-secret-here - N8N_NATIVE_PYTHON_RUNNER=true ports: - "5678:5678" volumes: - n8n_data:/home/node/.n8n # etc. task-runners: image: n8nio/runners:1.111.0 container_name: n8n-runners environment: - N8N_RUNNERS_TASK_BROKER_URI=http://n8n-main:5679 - N8N_RUNNERS_AUTH_TOKEN=your-secret-here # etc. 
depends_on: - n8n volumes: n8n_data: ``` There are three layers of configuration: the n8n container, the runners container, and the launcher inside the runners container. ### Configuring n8n container in external mode These are the main environment variables that you can set on the n8n container running in external mode: | Environment variables | Description | | --- | --- | | `N8N_RUNNERS_ENABLED=true` | Enables task runners. | | `N8N_RUNNERS_MODE=external` | Use task runners in external mode. | | `N8N_RUNNERS_AUTH_TOKEN=` | A shared secret task runners use to connect to the broker. | | `N8N_RUNNERS_BROKER_LISTEN_ADDRESS=0.0.0.0` | By default, the task broker only listens to localhost. When using multiple containers (for example, with Docker Compose), it needs to be able to accept external connections. | For the full list of environment variables, see [task runner environment variables](../environment-variables/task-runners/). ### Configuring runners container in external mode These are the main environment variables that you can set on the runners container running in external mode: | Environment variables | Description | | --- | --- | | `N8N_RUNNERS_AUTH_TOKEN=` | The shared secret the task runner uses to connect to the broker. | | `N8N_RUNNERS_TASK_BROKER_URI=localhost:5679` | The address of the task broker server within the n8n instance. | | `N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT=15` | Number of seconds of inactivity to wait before shutting down the task runner process.
The launcher will automatically start the runner again when there are new tasks to execute. Set to `0` to disable automatic shutdown. | For the full list of environment variables, see [task runner environment variables](../environment-variables/task-runners/). ### Configuring launcher in runners container in external mode The launcher reads environment variables from the runners container's environment and performs the following actions: - Passes environment variables from the launcher's own environment to all runners (`allowed-env`) - Sets specific environment variables on specific runners (`env-overrides`) Which environment variables to pass and to set are defined in the [launcher config file](https://github.com/n8n-io/n8n/blob/master/docker/images/runners/n8n-task-runners.json) included in the runners image. This config file is located in the container at `/etc/n8n-task-runners.json`. To learn more about the launcher config file, refer to the [Config file documentation](https://github.com/n8n-io/task-runner-launcher/blob/main/docs/setup.md#config-file). The default launcher configuration file is locked down, but you can edit this file, for example, to allowlist first- or third-party modules. To customize the launcher configuration file, mount your file to this path in the container: ``` path/to/n8n-task-runners.json:/etc/n8n-task-runners.json ``` ## Adding extra dependencies ### 1. Extend the `n8nio/runners` image You can extend the `n8nio/runners` image to add extra dependencies to the runners. You'll need `n8nio/runners:1.121.0` or later to do this. ``` FROM n8nio/runners:1.121.0 USER root RUN cd /opt/runners/task-runner-javascript && pnpm add moment uuid RUN cd /opt/runners/task-runner-python && uv pip install numpy pandas COPY n8n-task-runners.json /etc/n8n-task-runners.json USER runner ``` You must also allowlist any first-party or third-party packages for use by the Code node. Do this by editing the configuration file `n8n-task-runners.json` to include the packages in your extended image.
``` { "task-runners": [ { "runner-type": "javascript", "env-overrides": { "NODE_FUNCTION_ALLOW_BUILTIN": "crypto", // <-- allowlist Node.js built-in modules here "NODE_FUNCTION_ALLOW_EXTERNAL": "moment,uuid" // <-- allowlist third-party JS packages here } }, { "runner-type": "python", "env-overrides": { "PYTHONPATH": "/opt/runners/task-runner-python", "N8N_RUNNERS_STDLIB_ALLOW": "json", // <-- allowlist Python standard library packages here "N8N_RUNNERS_EXTERNAL_ALLOW": "numpy,pandas" // <-- allowlist third-party Python packages here } } ] } ``` The `// <--` annotations are for illustration only; remove them from your actual file, since JSON doesn't support comments. - `NODE_FUNCTION_ALLOW_BUILTIN`: comma-separated list of allowed Node.js built-in modules. - `NODE_FUNCTION_ALLOW_EXTERNAL`: comma-separated list of allowed JS packages. - `N8N_RUNNERS_STDLIB_ALLOW`: comma-separated list of allowed Python standard library packages. - `N8N_RUNNERS_EXTERNAL_ALLOW`: comma-separated list of allowed Python packages. ### 2. Build your custom image For example, from the n8n repository root: ``` docker buildx build \ -f docker/images/runners/Dockerfile \ -t n8nio/runners:custom \ . ``` ### 3. Run the image For example: ``` docker run --rm -it \ -e N8N_RUNNERS_AUTH_TOKEN=test \ -e N8N_RUNNERS_LAUNCHER_LOG_LEVEL=debug \ -e N8N_RUNNERS_TASK_BROKER_URI=http://host.docker.internal:5679 \ -p 5680:5680 \ n8nio/runners:custom ``` # Configure self-hosted n8n for user management User management in n8n allows you to invite people to work in your n8n instance. This document describes how to configure your n8n instance to support user management, and the steps to start inviting users. Refer to the main [User management](../../../user-management/) guide for more information about usage, including: - [Managing users](../../../user-management/manage-users/) - [Account types](../../../user-management/account-types/) - [Best practices](../../../user-management/best-practices/) For LDAP setup information, refer to [LDAP](../../../user-management/ldap/).
For SAML setup information, refer to [SAML](../../../user-management/saml/). Unsupported user management methods In version 1.0, n8n: - Removed support for **basic auth** and **JWT** - Removed the `N8N_USER_MANAGEMENT_DISABLED` environment variable. No supported way to disable the login screen exists in recent versions of n8n, including for local or development use. If you need to simplify login for local development, consider using a password manager, setting a simple local password, or scripting the standard login flow. ## Setup There are three stages to set up user management in n8n: 1. Configure your n8n instance to use your SMTP server. 1. Start n8n and follow the setup steps in the app. 1. Invite users. ### Step one: SMTP n8n recommends setting up an SMTP server for user invites and password resets. Optional from 0.210.1 From version 0.210.1 onward, this step is optional. You can choose to manually copy and send invite links instead of setting up SMTP. Note that if you skip this step, users can't reset passwords. Get the following information from your SMTP provider: - Server name - SMTP username - SMTP password - SMTP sender name To set up SMTP with n8n, configure the SMTP environment variables for your n8n instance. For information on how to set environment variables, refer to [Configuration](../configuration-methods/). | Variable | Type | Description | Required? | | --- | --- | --- | --- | | `N8N_EMAIL_MODE` | string | `smtp` | Required | | `N8N_SMTP_HOST` | string | *your_SMTP_server_name* | Required | | `N8N_SMTP_PORT` | number | *your_SMTP_server_port* Default is `465`.
| Optional | | `N8N_SMTP_USER` | string | *your_SMTP_username* | Optional | | `N8N_SMTP_PASS` | string | *your_SMTP_password* | Optional | | `N8N_SMTP_OAUTH_SERVICE_CLIENT` | string | *your_OAuth_service_client* | Optional | | `N8N_SMTP_OAUTH_PRIVATE_KEY` | string | *your_OAuth_private_key* | Optional | | `N8N_SMTP_SENDER` | string | Sender email address. You can optionally include the sender name. Example with name: `n8n <contact@example.com>` | Required | | `N8N_SMTP_SSL` | boolean | Whether to use SSL for SMTP (true) or not (false). Defaults to `true`. | Optional | | `N8N_UM_EMAIL_TEMPLATES_INVITE` | string | Full path to your HTML email template. This overrides the default template for invite emails. | Optional | | `N8N_UM_EMAIL_TEMPLATES_PWRESET` | string | Full path to your HTML email template. This overrides the default template for password reset emails. | Optional | | `N8N_UM_EMAIL_TEMPLATES_WORKFLOW_SHARED` | String | Overrides the default HTML template for notifying users that a workflow was shared. Provide the full path to the template. | Optional | | `N8N_UM_EMAIL_TEMPLATES_CREDENTIALS_SHARED` | String | Overrides the default HTML template for notifying users that a credential was shared. Provide the full path to the template. | Optional | | `N8N_UM_EMAIL_TEMPLATES_PROJECT_SHARED` | String | Overrides the default HTML template for notifying users that a project was shared. Provide the full path to the template. | Optional | If your n8n instance is already running, you need to restart it to enable the new SMTP settings. More configuration options There are more configuration options available as environment variables. Refer to [Environment variables](../environment-variables/) for a list. These include options to disable tags, workflow templates, and the personalization survey, if you don't want your users to see them. New to SMTP?
If you're not familiar with SMTP, this [blog post by SendGrid](https://sendgrid.com/blog/what-is-an-smtp-server/) offers a short introduction, while [Wikipedia's Simple Mail Transfer Protocol article](https://en.wikipedia.org/wiki/Simple_Mail_Transfer_Protocol) provides more detailed technical background. ### Step two: In-app setup When you set up user management for the first time, you create an owner account. 1. Open n8n. The app displays a signup screen. 1. Enter your details. Your password must be at least eight characters, including at least one number and one capital letter. 1. Click **Next**. n8n logs you in with your new owner account. #### Pre-provision the instance owner from environment variables Available from n8n v2.17.0 You can pre-provision the instance owner from environment variables instead of going through the in-app setup. Set `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` to `true` and provide the owner details. See [Manage instance settings using environment variables](../settings-env-vars/) for how the activation pattern works. `N8N_INSTANCE_OWNER_PASSWORD_HASH` must be a bcrypt hash This variable expects a pre-hashed bcrypt value. Setting a plaintext password breaks login. | Variable | Type | Default | Description | | ----------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage the instance owner from environment variables. When `true`, n8n overwrites the instance owner details below on every startup, locks the UI control for that user, and rejects API writes. | | `N8N_INSTANCE_OWNER_EMAIL` | String | - | Email address for the instance owner. | | `N8N_INSTANCE_OWNER_FIRST_NAME` | String | - | First name for the instance owner. 
| | `N8N_INSTANCE_OWNER_LAST_NAME` | String | - | Last name for the instance owner. | | `N8N_INSTANCE_OWNER_PASSWORD_HASH` | String | - | Bcrypt hash of the instance owner's password. Setting a plaintext password breaks login. | ### Step three: Invite users You can now invite other people to your n8n instance. 1. Sign in to your workspace with your owner account. (If you are in the Admin Panel, open your **Workspace** from the Dashboard.) 1. Click the three dots next to your user icon at the bottom left and click **Settings**. n8n opens your **Personal settings** page. 1. Click **Users** to go to the **Users** page. 1. Click **Invite**. 1. Enter the new user's email address. 1. Click **Invite user**. n8n sends an email with a link for the new user to join. # Configuration examples This section contains examples for how to configure n8n to solve particular use cases. - [Isolate n8n](/hosting/configuration/configuration-examples/isolation/) - [Configure the Base URL](/hosting/configuration/configuration-examples/base-url/) - [Configure custom SSL certificate authorities](/hosting/configuration/configuration-examples/custom-certificate-authority/) - [Set a custom encryption key](/hosting/configuration/configuration-examples/encryption-key/) - [Configure workflow timeouts](/hosting/configuration/configuration-examples/execution-timeout/) - [Specify custom nodes location](/hosting/configuration/configuration-examples/custom-nodes-location/) - [Enable modules in Code node](/hosting/configuration/configuration-examples/modules-in-code-node/) - [Set the timezone](/hosting/configuration/configuration-examples/time-zone/) - [Specify user folder path](/hosting/configuration/configuration-examples/user-folder/) - [Configure webhook URLs with reverse proxy](/hosting/configuration/configuration-examples/webhook-url/) - [Enable Prometheus metrics](/hosting/configuration/configuration-examples/prometheus/) - [Pre-configure Microsoft OAuth
credentials](/hosting/configuration/configuration-examples/microsoft-oauth-credential-overwrites/) - [Configure a custom workflow templates library](/hosting/configuration/configuration-examples/custom-templates/) # Configure the Base URL for n8n's front end access Requires manual UI build This use case involves configuring the `VUE_APP_URL_BASE_API` environment variable, which requires a manual build of the `n8n-editor-ui` package. You can't use it with the default n8n Docker image, where the default setting for this variable is `/`, meaning that it uses the root domain. You can configure the Base URL that the front end uses to connect to the back end's REST API. This is relevant when you want to host n8n's front end and back end separately. ``` export VUE_APP_URL_BASE_API=https://n8n.example.com/ ``` Refer to [Environment variables reference](../../environment-variables/deployment/) for more information on this variable. # Configure n8n to use your own certificate authority or self-signed certificate You can add your own certificate authority (CA) or self-signed certificate to n8n. This lets you trust a specific SSL certificate instead of trusting all invalid certificates, which would be a security risk. Added in version 1.42.0 This feature is available in version 1.42.0 and above. To use this feature, you need to place your certificates in a folder and mount the folder to `/opt/custom-certificates` in the container. The external path that you map to `/opt/custom-certificates` must be writable by the container. ## Docker The examples below assume you have a folder called `pki` that contains your certificates in either the directory you run the command from or next to your docker compose file.
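Certificates you receive in a binary format need converting to PEM before they go into the `pki` folder. The sketch below is only an illustration, assuming `openssl` is installed on the host: it generates a throwaway DER certificate in `/tmp` to stand in for one you received, then converts and checks it.

```shell
# Create a throwaway self-signed certificate in DER form, standing in for a
# certificate you received in a binary format (demo only; paths are illustrative)
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.der -outform der \
  -subj "/CN=demo-ca.example.com"

# Convert DER to the PEM format n8n expects
openssl x509 -inform der -in /tmp/demo-ca.der -out /tmp/demo-ca.pem

# A PEM certificate is plain text with BEGIN/END markers
grep "BEGIN CERTIFICATE" /tmp/demo-ca.pem
```

If the `grep` finds no `BEGIN CERTIFICATE` marker, or `openssl x509` errors out, the file isn't in a format n8n's custom trust store accepts.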
### Docker CLI When using the Docker CLI, mount the folder with the `-v` flag: ``` docker run -it --rm \ --name n8n \ -p 5678:5678 \ -v ./pki:/opt/custom-certificates \ docker.n8n.io/n8nio/n8n ``` ### Docker Compose ``` name: n8n services: n8n: volumes: - ./pki:/opt/custom-certificates container_name: n8n ports: - 5678:5678 image: docker.n8n.io/n8nio/n8n ``` You should also set the correct ownership on the imported certificates. You can do this once the container is running (assuming `n8n` as the container name): ``` docker exec --user 0 n8n chown -R 1000:1000 /opt/custom-certificates ``` ## Certificate requirements for Custom Trust Store Supported certificate types: - Root CA Certificates: these are certificates from Certificate Authorities that sign other certificates. Trust these to accept all certificates signed by that CA. - Self-Signed Certificates: certificates that servers create and sign themselves. Trust these to accept connections to that specific server only. You must use PEM format: - Text-based format with BEGIN/END markers - Supported file extensions: `.pem`, `.crt`, `.cer` - Contains the public certificate (no private key needed) For example: ``` -----BEGIN CERTIFICATE----- MIIDXTCCAkWgAwIBAgIJAKoK/heBjcOuMA0GCSqGSIb3DQEBBQUAMEUxCzAJBgNV [base64 encoded data] -----END CERTIFICATE----- ``` The system doesn't accept: - DER/binary format files - PKCS#7 (.p7b) files - PKCS#12 (.pfx, .p12) files - Private key files Convert these formats to PEM before use. # Specify location for your custom nodes Every user can add custom nodes that get loaded by n8n on startup. The default location is in the subfolder `.n8n/custom` of the user who started n8n. You can define more folders with an environment variable: ``` export N8N_CUSTOM_EXTENSIONS="/home/jim/n8n/custom-nodes;/data/n8n/nodes" ``` Refer to [Environment variables reference](../../environment-variables/nodes/) for more information on this variable.
# Configure a custom workflow templates library n8n provides a library of workflow [templates](../../../../glossary/#template-n8n). When self-hosting n8n, you can: - Continue to use n8n's workflow templates library (this is the default behavior) - Disable workflow templates - Create your own workflow templates library ## Disable workflow templates In your environment variables, set `N8N_TEMPLATES_ENABLED` to false. ## Use your own workflow templates library In your environment variables, set `N8N_TEMPLATES_HOST` to the base URL of your API. ### Endpoints Your API must provide the same endpoints and data structure as n8n's. The endpoints are: | Method | Path | Purpose | | ------ | ----------------------------- | -------------------------------------------- | | GET | `/templates/workflows/` | Fetch template metadata for preview/browsing | | GET | `/workflows/templates/` | Fetch workflow data to import onto canvas | | GET | `/templates/search` | Search for workflow templates | | GET | `/templates/collections/` | Get a specific template collection | | GET | `/templates/collections` | List all template collections | | GET | `/templates/categories` | List all template categories | | GET | `/health` | Health check endpoint | Critical: Two different response formats required The two workflow endpoints require **different response formats**: - **`/templates/workflows/{id}`**: Returns the template itself, which includes the workflow in the `workflow` key - **`/workflows/templates/{id}`**: Returns the workflow the template contains See Schemas below for details. 
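Before pointing `N8N_TEMPLATES_HOST` at your API, it's worth confirming the host answers the `/health` endpoint. A sketch, assuming `python3` and `curl` are available: it stands up a throwaway static stub (the port, paths, and response body are illustrative, not a format n8n mandates) and queries it the way you would query your real host.

```shell
# Stand up a throwaway stub that serves /health, standing in for your real API
mkdir -p /tmp/templates-stub
printf '%s' '{"status":"ok"}' > /tmp/templates-stub/health
(cd /tmp/templates-stub && exec python3 -m http.server 8943 >/dev/null 2>&1) &
STUB_PID=$!
sleep 1

# The reachability check: your templates host should answer GET /health
curl -s http://127.0.0.1:8943/health > /tmp/health-response.json
cat /tmp/health-response.json

# Tear the stub down again
kill "$STUB_PID"
```

Against your real deployment you would replace `http://127.0.0.1:8943` with the value you set in `N8N_TEMPLATES_HOST`.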
### Query parameters The `/templates/search` endpoint accepts the following query parameters: | Parameter | Type | Description | | ---------- | -------------------------------------------- | ------------------------------------------------ | | `page` | integer | The page of results to return | | `rows` | integer | The maximum number of results to return per page | | `category` | comma-separated list of strings (categories) | The categories to search within | | `search` | string | The search query | The `/templates/collections` endpoint accepts the following query parameters: | Parameter | Type | Description | | ---------- | -------------------------------------------- | ------------------------------- | | `category` | comma-separated list of strings (categories) | The categories to search within | | `search` | string | The search query | ### Schemas The key difference between the two workflow endpoints: ``` // GET /templates/workflows/{id} returns (wrapped): { "workflow": { "id": 123, "name": "...", "totalViews": 1000, // ... see full workflow item schema below "workflow": { // actual workflow definition "nodes": [...], "connections": {} } } } // GET /workflows/templates/{id} returns (flat): { "id": 123, "name": "...", "workflow": { // actual workflow definition "nodes": [...], "connections": {} } } ``` Detailed schemas for response objects: Show `workflow` item data schema Used by `/templates/workflows/{id}` endpoint (wrapped in a `workflow` key). This schema describes the template metadata used for displaying templates in search/browse UI. It includes a nested `workflow` property that contains the actual importable workflow definition. 
``` { "$schema": "http://json-schema.org/draft-07/schema#", "title": "Generated schema for Root", "type": "object", "properties": { "id": { "type": "number" }, "name": { "type": "string" }, "totalViews": { "type": "number" }, "price": {}, "purchaseUrl": {}, "recentViews": { "type": "number" }, "createdAt": { "type": "string" }, "user": { "type": "object", "properties": { "username": { "type": "string" }, "verified": { "type": "boolean" } }, "required": [ "username", "verified" ] }, "nodes": { "type": "array", "items": { "type": "object", "properties": { "id": { "type": "number" }, "icon": { "type": "string" }, "name": { "type": "string" }, "codex": { "type": "object", "properties": { "data": { "type": "object", "properties": { "details": { "type": "string" }, "resources": { "type": "object", "properties": { "generic": { "type": "array", "items": { "type": "object", "properties": { "url": { "type": "string" }, "icon": { "type": "string" }, "label": { "type": "string" } }, "required": [ "url", "label" ] } }, "primaryDocumentation": { "type": "array", "items": { "type": "object", "properties": { "url": { "type": "string" } }, "required": [ "url" ] } } }, "required": [ "primaryDocumentation" ] }, "categories": { "type": "array", "items": { "type": "string" } }, "nodeVersion": { "type": "string" }, "codexVersion": { "type": "string" } }, "required": [ "categories" ] } } }, "group": { "type": "string" }, "defaults": { "type": "object", "properties": { "name": { "type": "string" }, "color": { "type": "string" } }, "required": [ "name" ] }, "iconData": { "type": "object", "properties": { "icon": { "type": "string" }, "type": { "type": "string" }, "fileBuffer": { "type": "string" } }, "required": [ "type" ] }, "displayName": { "type": "string" }, "typeVersion": { "type": "number" }, "nodeCategories": { "type": "array", "items": { "type": "object", "properties": { "id": { "type": "number" }, "name": { "type": "string" } }, "required": [ "id", "name" ] } } }, "required": [ 
"id", "icon", "name", "codex", "group", "defaults", "iconData", "displayName", "typeVersion" ] } }, "description": { "type": "string" }, "image": { "type": "array", "items": { "type": "object", "properties": { "id": { "type": "number" }, "url": { "type": "string" } } } }, "categories": { "type": "array", "items": { "type": "object", "properties": { "id": { "type": "number" }, "name": { "type": "string" } } } }, "workflowInfo": { "type": "object", "properties": { "nodeCount": { "type": "number" }, "nodeTypes": { "type": "object" } } }, "workflow": { "type": "object", "properties": { "nodes": { "type": "array" }, "connections": { "type": "object" }, "settings": { "type": "object" }, "pinData": { "type": "object" } }, "required": [ "nodes", "connections" ] } }, "required": [ "id", "name", "totalViews", "createdAt", "user", "nodes", "workflow" ] } ``` Show `category` item data schema ``` { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "id": { "type": "number" }, "name": { "type": "string" } }, "required": [ "id", "name" ] } ``` Show `collection` item data schema ``` { "$schema": "http://json-schema.org/draft-07/schema#", "type": "object", "properties": { "id": { "type": "number" }, "rank": { "type": "number" }, "name": { "type": "string" }, "totalViews": {}, "createdAt": { "type": "string" }, "workflows": { "type": "array", "items": { "type": "object", "properties": { "id": { "type": "number" } }, "required": [ "id" ] } }, "nodes": { "type": "array", "items": {} } }, "required": [ "id", "rank", "name", "totalViews", "createdAt", "workflows", "nodes" ] } ``` You can also interactively explore n8n's API endpoints: You can [contact us](mailto:help@n8n.io) for more support. ## Add your workflows to the n8n library You can submit your workflows to n8n's template library. n8n is working on a creator program, and developing a marketplace of templates. This is an ongoing project, and details are likely to change. 
Refer to [n8n Creator hub](https://www.notion.so/n8n/n8n-Creator-hub-7bd2cbe0fce0449198ecb23ff4a2f76f) for information on how to submit templates and become a creator. # Set a custom encryption key n8n creates a random encryption key automatically on the first launch and saves it in the `~/.n8n` folder. n8n uses that key to encrypt the credentials before they get saved to the database. If the key isn't yet in the settings file, you can set it using an environment variable, so that n8n uses your custom key instead of generating a new one. In [queue mode](../../../scaling/queue-mode/), you must specify the encryption key environment variable for all workers. ``` export N8N_ENCRYPTION_KEY= ``` Refer to [Environment variables reference](../../environment-variables/deployment/) for more information on this variable. # Configure workflow timeout settings Use the `EXECUTIONS_TIMEOUT` environment variable to set the time (in seconds) after which a workflow times out and gets canceled. If the workflow runs in the main process, a soft timeout happens (takes effect after the current node finishes). If a workflow runs in its own process, n8n attempts a soft timeout first, then kills the process after waiting for a fifth of the given timeout duration. The default for `EXECUTIONS_TIMEOUT` is `-1`, which disables the timeout. For example, if you want to set the timeout to one hour: ``` export EXECUTIONS_TIMEOUT=3600 ``` You can also set a maximum execution time (in seconds) for each workflow individually. For example, if you want to set maximum execution time to two hours: ``` export EXECUTIONS_TIMEOUT_MAX=7200 ``` Refer to [Environment variables reference](../../environment-variables/executions/) for more information on these variables. # Isolate n8n By default, a self-hosted n8n instance sends data to n8n's servers. It notifies users about available updates, workflow templates, and diagnostics.
To prevent your n8n instance from connecting to n8n's servers, set these environment variables to false: ``` N8N_DIAGNOSTICS_ENABLED=false N8N_VERSION_NOTIFICATIONS_ENABLED=false N8N_TEMPLATES_ENABLED=false ``` Unset n8n's diagnostics configuration: ``` EXTERNAL_FRONTEND_HOOKS_URLS= N8N_DIAGNOSTICS_CONFIG_FRONTEND= N8N_DIAGNOSTICS_CONFIG_BACKEND= ``` Refer to [Environment variables reference](../../environment-variables/deployment/) for more information on these variables. # Pre-configure Microsoft OAuth credentials After [setting up a Microsoft Entra ID app registration with delegated access](../../../../integrations/builtin/credentials/microsoftentra/#delegated-access-for-organisation-wide-microsoft-integrations), you can use [credential overwrites](../../credential-overwrites/) to inject the Client ID and Client Secret into n8n at startup. This means users in your organisation can connect to Microsoft services without completing their own OAuth app registration. n8n supports three environment variables for credential overwrites. This guide uses `CREDENTIALS_OVERWRITE_DATA_FILE`. Refer to [Credentials environment variables](../../environment-variables/credentials/) for the full variable reference. ## Create the credentials file On the host running n8n, create a file named `credentials-overwrite.json` in the same directory as your `docker-compose.yaml`. The file contains a JSON object keyed by the n8n credential type name. For example, to pre-configure Microsoft Outlook: ``` { "microsoftOutlookOAuth2Api": { "clientId": "YOUR_CLIENT_ID", "clientSecret": "YOUR_CLIENT_SECRET" } } ``` To pre-configure multiple Microsoft services at once, add each credential type as a separate key: ``` { "microsoftOutlookOAuth2Api": { "clientId": "YOUR_CLIENT_ID", "clientSecret": "YOUR_CLIENT_SECRET" }, "microsoftOneDriveOAuth2Api": { "clientId": "YOUR_CLIENT_ID", "clientSecret": "YOUR_CLIENT_SECRET" } } ``` Minified JSON n8n requires the JSON to be minified (no spaces or newlines). 
The examples above are formatted for readability. Make sure your actual file contains no extra whitespace: ``` {"microsoftOutlookOAuth2Api":{"clientId":"YOUR_CLIENT_ID","clientSecret":"YOUR_CLIENT_SECRET"}} ``` Refer to [Required scopes by integration](../../../../integrations/builtin/credentials/microsoftentra/#required-scopes-by-integration) for the credential type name of each Microsoft service. ## Docker Compose Mount the credentials file as a read-only volume and set the environment variable in your `compose.yaml`: ``` services: n8n: image: docker.n8n.io/n8nio/n8n:latest container_name: n8n restart: always ports: - "5678:5678" environment: - GENERIC_TIMEZONE=America/New_York - TZ=America/New_York - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true - N8N_LOG_LEVEL=debug - N8N_LOG_OUTPUT=file,console - N8N_LOG_FILE_COUNT_MAX=5 - CREDENTIALS_OVERWRITE_DATA_FILE=/run/secrets/credentials-overwrite.json volumes: - n8n_data:/home/node/.n8n - ./credentials-overwrite.json:/run/secrets/credentials-overwrite.json:ro networks: - default volumes: n8n_data: name: ${N8N_VOLUME:-n8n_data} external: true ``` Apply the changes by restarting the container: ``` docker compose up -d ``` ## Verify the overwrite is applied After n8n starts, have a user create a new credential for one of the pre-configured services (for example, Microsoft Outlook). They should see a **Managed OAuth2 (recommended)** option in the credential selection. The user can click **Connect to Microsoft Outlook**, with no auth required. An **Account connected** message should appear. If the **Managed OAuth2** option doesn't appear, the environment variable wasn't applied correctly. Check that the file path in the volume mount matches the value of `CREDENTIALS_OVERWRITE_DATA_FILE`. ## Kubernetes For Kubernetes deployments, replace the Docker volume mount with Kubernetes-native primitives. The approach differs by cloud provider. Choose the section that matches your environment.
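Whichever platform you deploy on, the overwrite payload must stay minified. Producing the minified file can be scripted rather than done by hand; a sketch, assuming `python3` is available on the host and using `/tmp` paths purely for illustration:

```shell
# Write a readable version of the overwrite file (demo content)
cat > /tmp/credentials-overwrite.pretty.json <<'EOF'
{
  "microsoftOutlookOAuth2Api": {
    "clientId": "YOUR_CLIENT_ID",
    "clientSecret": "YOUR_CLIENT_SECRET"
  }
}
EOF

# Minify it into the file n8n will read (no spaces or newlines)
python3 -c 'import json, sys; json.dump(json.load(open(sys.argv[1])), open(sys.argv[2], "w"), separators=(",", ":"))' \
  /tmp/credentials-overwrite.pretty.json /tmp/credentials-overwrite.json

cat /tmp/credentials-overwrite.json
```

This also guards against hand-minification mistakes: if the readable file isn't valid JSON, the one-liner fails loudly instead of producing a broken overwrite file.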
### Plain Kubernetes Secret (EKS / AKS / GKE) This approach works across all three managed Kubernetes providers without additional dependencies. **1. Create the Secret:** ``` apiVersion: v1 kind: Secret metadata: name: n8n-credentials-overwrite namespace: your-namespace type: Opaque stringData: credentials-overwrite.json: '{"microsoftOutlookOAuth2Api":{"clientId":"YOUR_CLIENT_ID","clientSecret":"YOUR_CLIENT_SECRET"}}' ``` **2. Mount the Secret in your Deployment:** ``` spec: containers: - name: n8n image: docker.n8n.io/n8nio/n8n:latest env: - name: CREDENTIALS_OVERWRITE_DATA_FILE value: /run/secrets/credentials-overwrite.json # ...your other env vars volumeMounts: - name: credentials-overwrite mountPath: /run/secrets/credentials-overwrite.json subPath: credentials-overwrite.json readOnly: true volumes: - name: credentials-overwrite secret: secretName: n8n-credentials-overwrite ``` The `subPath` field is important. Without it, Kubernetes replaces the entire `/run/secrets/` directory rather than mounting just the single file. Alternative: inline environment variable To skip the volume mount entirely, reference the Secret directly as an environment variable: ``` env: - name: CREDENTIALS_OVERWRITE_DATA valueFrom: secretKeyRef: name: n8n-credentials-overwrite key: credentials-overwrite.json ``` The Secret's `stringData` key must match the `secretKeyRef` key: ``` stringData: credentials-overwrite.json: '{"microsoftOutlookOAuth2Api":{"clientId":"...","clientSecret":"..."}}' ``` This is cleaner for single-service setups, but note that some Kubernetes environments restrict environment variable size (for example, to 128KB per variable). The file-based approach is safer if you have many credential overwrites. ### AWS Secrets Manager (EKS) This approach uses the [AWS Secrets Store CSI Driver](https://docs.aws.amazon.com/secretsmanager/latest/userguide/integrating_csi_driver.html) to mount a secret from AWS Secrets Manager directly into the pod. It adds rotation support, CloudTrail audit logging, and centralised secret management.
**Prerequisites:** - Secrets Store CSI Driver and ASCP (AWS Secrets and Configuration Provider) installed on the cluster - IAM OIDC provider configured for the cluster (required for IRSA) - An IAM role with `secretsmanager:GetSecretValue` and `secretsmanager:DescribeSecret` permissions **1. Create the secret in AWS Secrets Manager:** ``` aws secretsmanager create-secret \ --name n8n/credentials-overwrite \ --description "n8n credential overwrites for Microsoft OAuth" \ --secret-string '{"microsoftOutlookOAuth2Api":{"clientId":"YOUR_CLIENT_ID","clientSecret":"YOUR_CLIENT_SECRET"}}' ``` **2. Create an IAM policy:** ``` { "Version": "2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "secretsmanager:GetSecretValue", "secretsmanager:DescribeSecret" ], "Resource": "arn:aws:secretsmanager:REGION:ACCOUNT_ID:secret:n8n/credentials-overwrite-*" } ] } ``` ``` aws iam create-policy \ --policy-name n8n-credentials-overwrite-read \ --policy-document file://policy.json ``` **3. Create a service account with IRSA:** ``` eksctl create iamserviceaccount \ --name n8n-sa \ --namespace your-namespace \ --cluster your-cluster \ --attach-policy-arn arn:aws:iam::ACCOUNT_ID:policy/n8n-credentials-overwrite-read \ --approve ``` **4. Create the SecretProviderClass:** ``` apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: n8n-credentials-overwrite namespace: your-namespace spec: provider: aws parameters: objects: | - objectName: "n8n/credentials-overwrite" objectType: "secretsmanager" objectAlias: "credentials-overwrite.json" ``` **5. 
Update your n8n Deployment:** ``` apiVersion: apps/v1 kind: Deployment metadata: name: n8n namespace: your-namespace spec: template: spec: serviceAccountName: n8n-sa containers: - name: n8n image: docker.n8n.io/n8nio/n8n:latest env: - name: CREDENTIALS_OVERWRITE_DATA_FILE value: /run/secrets/credentials-overwrite.json volumeMounts: - name: credentials-overwrite mountPath: /run/secrets/credentials-overwrite.json subPath: credentials-overwrite.json readOnly: true volumes: - name: credentials-overwrite csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: n8n-credentials-overwrite ``` **Rotating the secret:** To update the credentials, update the value in Secrets Manager: ``` aws secretsmanager update-secret \ --secret-id n8n/credentials-overwrite \ --secret-string '{"microsoftOutlookOAuth2Api":{"clientId":"NEW_CLIENT_ID","clientSecret":"NEW_CLIENT_SECRET"}}' ``` The CSI driver syncs the updated value on its polling interval (default two minutes). Restart the n8n pod for n8n to read the updated file, as n8n reads the credentials file at startup. ### Azure Key Vault (AKS) This approach uses the [Azure Key Vault Provider for the Secrets Store CSI Driver](https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver) to mount secrets from Azure Key Vault into the pod. **Prerequisites:** - Secrets Store CSI Driver and Azure Key Vault Provider addon enabled on the AKS cluster - An Azure Key Vault instance - A managed identity or service principal with access to the vault - Workload Identity enabled on the cluster (recommended over pod identity) **1. Create or use an existing Key Vault:** ``` az keyvault create \ --name n8n-credentials-vault \ --resource-group your-resource-group \ --location your-region ``` **2. 
Create the secret in Key Vault:** ``` az keyvault secret set \ --vault-name n8n-credentials-vault \ --name n8n-credentials-overwrite \ --value '{"microsoftOutlookOAuth2Api":{"clientId":"YOUR_CLIENT_ID","clientSecret":"YOUR_CLIENT_SECRET"}}' ``` **3. Set up Workload Identity:** Create a managed identity and establish the federated credential: ``` # Create a managed identity az identity create \ --name n8n-workload-identity \ --resource-group your-resource-group \ --location your-region # Get the identity client ID CLIENT_ID=$(az identity show \ --name n8n-workload-identity \ --resource-group your-resource-group \ --query clientId -o tsv) # Grant the identity access to the Key Vault az keyvault set-policy \ --name n8n-credentials-vault \ --secret-permissions get \ --spn "$CLIENT_ID" # Get the OIDC issuer URL for your cluster OIDC_ISSUER=$(az aks show \ --name your-cluster \ --resource-group your-resource-group \ --query "oidcIssuerProfile.issuerUrl" -o tsv) # Create the federated credential az identity federated-credential create \ --name n8n-federated-credential \ --identity-name n8n-workload-identity \ --resource-group your-resource-group \ --issuer "$OIDC_ISSUER" \ --subject system:serviceaccount:your-namespace:n8n-sa \ --audiences api://AzureADTokenExchange ``` **4. Create the Kubernetes ServiceAccount:** ``` apiVersion: v1 kind: ServiceAccount metadata: name: n8n-sa namespace: your-namespace annotations: azure.workload.identity/client-id: "YOUR_MANAGED_IDENTITY_CLIENT_ID" labels: azure.workload.identity/use: "true" ``` **5. Create the SecretProviderClass:** ``` apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: n8n-credentials-overwrite namespace: your-namespace spec: provider: azure parameters: usePodIdentity: "false" useWorkloadIdentity: "true" clientID: "YOUR_MANAGED_IDENTITY_CLIENT_ID" keyvaultName: "n8n-credentials-vault" objects: | array: - | objectName: n8n-credentials-overwrite objectType: secret objectAlias: credentials-overwrite.json tenantId: "YOUR_TENANT_ID" ``` **6.
Update your n8n Deployment:** ``` apiVersion: apps/v1 kind: Deployment metadata: name: n8n namespace: your-namespace spec: template: spec: serviceAccountName: n8n-sa containers: - name: n8n image: docker.n8n.io/n8nio/n8n:latest env: - name: CREDENTIALS_OVERWRITE_DATA_FILE value: /run/secrets/credentials-overwrite.json volumeMounts: - name: credentials-overwrite mountPath: /run/secrets/credentials-overwrite.json subPath: credentials-overwrite.json readOnly: true volumes: - name: credentials-overwrite csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: n8n-credentials-overwrite ``` **Rotating the secret:** ``` az keyvault secret set \ --vault-name n8n-credentials-vault \ --name n8n-credentials-overwrite \ --value '{"microsoftOutlookOAuth2Api":{"clientId":"NEW_CLIENT_ID","clientSecret":"NEW_CLIENT_SECRET"}}' ``` The CSI driver syncs on its polling interval (default two minutes). Restart the n8n pod afterward for n8n to pick up the updated file. ### Google Secret Manager (GKE) This approach uses the [GCP provider for the Secrets Store CSI Driver](https://github.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp) to mount secrets from Google Secret Manager into the pod. **Prerequisites:** - A GKE cluster with Workload Identity Federation enabled - The Secret Manager API enabled on the project - A Google service account with the `secretmanager.secretAccessor` role **1. Enable the Secret Manager API:** ``` gcloud services enable secretmanager.googleapis.com \ --project your-project-id ``` **2. Create the secret:** ``` echo -n '{"microsoftOutlookOAuth2Api":{"clientId":"YOUR_CLIENT_ID","clientSecret":"YOUR_CLIENT_SECRET"}}' | \ gcloud secrets create n8n-credentials-overwrite \ --data-file=- \ --project your-project-id ``` **3. 
Set up Workload Identity Federation:** ``` # Create a Google service account gcloud iam service-accounts create n8n-secret-reader \ --display-name="n8n Secret Reader" \ --project your-project-id # Grant it access to the secret gcloud secrets add-iam-policy-binding n8n-credentials-overwrite \ --member="serviceAccount:n8n-secret-reader@your-project-id.iam.gserviceaccount.com" \ --role="roles/secretmanager.secretAccessor" \ --project your-project-id # Bind the Kubernetes service account to the Google service account gcloud iam service-accounts add-iam-policy-binding \ n8n-secret-reader@your-project-id.iam.gserviceaccount.com \ --role="roles/iam.workloadIdentityUser" \ --member="serviceAccount:your-project-id.svc.id.goog[your-namespace/n8n-sa]" ``` **4. Create the Kubernetes ServiceAccount:** ``` apiVersion: v1 kind: ServiceAccount metadata: name: n8n-sa namespace: your-namespace annotations: iam.gke.io/gcp-service-account: n8n-secret-reader@your-project-id.iam.gserviceaccount.com ``` **5. Install the CSI Driver and GCP provider:** ``` # Install the CSI driver helm repo add secrets-store-csi-driver https://kubernetes-sigs.github.io/secrets-store-csi-driver/charts helm install csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \ --namespace kube-system # Install the GCP provider kubectl apply -f https://raw.githubusercontent.com/GoogleCloudPlatform/secrets-store-csi-driver-provider-gcp/main/deploy/provider-gcp-plugin.yaml ``` **6. Create the SecretProviderClass:** ``` apiVersion: secrets-store.csi.x-k8s.io/v1 kind: SecretProviderClass metadata: name: n8n-credentials-overwrite namespace: your-namespace spec: provider: gcp parameters: secrets: | - resourceName: "projects/your-project-id/secrets/n8n-credentials-overwrite/versions/latest" path: "credentials-overwrite.json" ``` **7. 
Update your n8n Deployment:** ``` apiVersion: apps/v1 kind: Deployment metadata: name: n8n namespace: your-namespace spec: template: spec: serviceAccountName: n8n-sa containers: - name: n8n image: docker.n8n.io/n8nio/n8n:latest env: - name: CREDENTIALS_OVERWRITE_DATA_FILE value: /run/secrets/credentials-overwrite.json volumeMounts: - name: credentials-overwrite mountPath: /run/secrets/credentials-overwrite.json subPath: credentials-overwrite.json readOnly: true volumes: - name: credentials-overwrite csi: driver: secrets-store.csi.k8s.io readOnly: true volumeAttributes: secretProviderClass: n8n-credentials-overwrite ``` **Rotating the secret:** Create a new version of the secret: ``` echo -n '{"microsoftOutlookOAuth2Api":{"clientId":"NEW_CLIENT_ID","clientSecret":"NEW_CLIENT_SECRET"}}' | \ gcloud secrets versions add n8n-credentials-overwrite \ --data-file=- \ --project your-project-id ``` Because the SecretProviderClass references `versions/latest`, the CSI driver picks up the new version on its next sync. Restart the n8n pod for n8n to read the updated file. # Enable modules in Code node For security reasons, the Code node restricts importing modules. It's possible to lift that restriction for built-in and external modules by setting the following environment variables: - `NODE_FUNCTION_ALLOW_BUILTIN`: For built-in modules - `NODE_FUNCTION_ALLOW_EXTERNAL`: For external modules sourced from the n8n/node_modules directory. External module support is disabled when the environment variable isn't set. ``` # Allows usage of all builtin modules export NODE_FUNCTION_ALLOW_BUILTIN=* # Allows usage of only crypto export NODE_FUNCTION_ALLOW_BUILTIN=crypto # Allows usage of only crypto and fs export NODE_FUNCTION_ALLOW_BUILTIN=crypto,fs # Allow usage of external npm modules.
export NODE_FUNCTION_ALLOW_EXTERNAL=moment,lodash ``` If using Task Runners If your n8n instance is set up with [Task Runners](../../task-runners/), add the environment variables to the Task Runners instead of to the main n8n instance. Refer to [Environment variables reference](../../environment-variables/nodes/) for more information on these variables. # Enable Prometheus metrics To collect and expose metrics, n8n uses the [prom-client](https://www.npmjs.com/package/prom-client) library. The `/metrics` endpoint is disabled by default, but it's possible to enable it using the `N8N_METRICS` environment variable. ``` export N8N_METRICS=true ``` Refer to the respective [Environment Variables](../../environment-variables/endpoints/) (`N8N_METRICS_INCLUDE_*`) for configuring which metrics and labels should get exposed. Both `main` and `worker` instances are able to expose metrics. ## Queue metrics To enable queue metrics, set the `N8N_METRICS_INCLUDE_QUEUE_METRICS` env var to `true`. You can adjust the refresh rate with `N8N_METRICS_QUEUE_METRICS_INTERVAL`. n8n gathers these metrics from Bull and exposes them on the main instances. On multi-main setups, when aggregating queries, you can identify the leader using the `instance_role_leader` gauge, set to `1` for the leader main and `0` otherwise. ``` # HELP n8n_scaling_mode_queue_jobs_active Current number of jobs being processed across all workers in scaling mode. # TYPE n8n_scaling_mode_queue_jobs_active gauge n8n_scaling_mode_queue_jobs_active 0 # HELP n8n_scaling_mode_queue_jobs_completed Total number of jobs completed across all workers in scaling mode since instance start. # TYPE n8n_scaling_mode_queue_jobs_completed counter n8n_scaling_mode_queue_jobs_completed 0 # HELP n8n_scaling_mode_queue_jobs_failed Total number of jobs failed across all workers in scaling mode since instance start.
# TYPE n8n_scaling_mode_queue_jobs_failed counter n8n_scaling_mode_queue_jobs_failed 0 # HELP n8n_scaling_mode_queue_jobs_waiting Current number of enqueued jobs waiting for pickup in scaling mode. # TYPE n8n_scaling_mode_queue_jobs_waiting gauge n8n_scaling_mode_queue_jobs_waiting 0 ``` # Set the self-hosted instance timezone The default timezone is America/New_York. For instance, the Schedule node uses it to know at what time the workflow should start. To set a different default timezone, set `GENERIC_TIMEZONE` to the appropriate value. For example, if you want to set the timezone to Berlin (Germany): ``` export GENERIC_TIMEZONE=Europe/Berlin ``` You can find the name of your timezone [here](https://momentjs.com/timezone/). Refer to [Environment variables reference](../../environment-variables/timezone-localization/) for more information on this variable. # Specify user folder path n8n saves user-specific data like the encryption key, SQLite database file, and the ID of the tunnel (if used) in the subfolder `.n8n` of the user who started n8n. You can override the user folder with an environment variable. ``` export N8N_USER_FOLDER=/home/jim/n8n ``` Refer to [Environment variables reference](../../environment-variables/deployment/) for more information on this variable. # Configure n8n webhooks with reverse proxy n8n creates the webhook URL by combining `N8N_PROTOCOL`, `N8N_HOST` and `N8N_PORT`. If n8n runs behind a reverse proxy, that won't work. That's because n8n runs internally on port 5678 but the reverse proxy exposes it to the web on port 443. When running n8n behind a reverse proxy, it's important to do the following: - Set the webhook URL manually with the `WEBHOOK_URL` environment variable so that n8n can display it in the editor UI and register the correct webhook URLs with external services. - Set the `N8N_PROXY_HOPS` environment variable to `1`.
- On the last proxy on the request path, set the following headers to pass on information about the initial request: - [`X-Forwarded-For`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-For) - [`X-Forwarded-Host`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-Host) - [`X-Forwarded-Proto`](https://developer.mozilla.org/en-US/docs/Web/HTTP/Reference/Headers/X-Forwarded-Proto) ``` export WEBHOOK_URL=https://n8n.example.com/ export N8N_PROXY_HOPS=1 ``` Refer to [Environment variables reference](../../environment-variables/endpoints/) for more information on these variables. # Environment variables overview This section lists the environment variables that you can use to change n8n's configuration settings when self-hosting n8n. For the complete and most up-to-date list of all environment variables, see the full reference in the n8n docs repository: [Full environment variables index](https://github.com/n8n-io/n8n-docs/tree/main/docs/hosting/configuration/environment-variables). File-based configuration You can provide a [configuration file](../configuration-methods/) for n8n. You can also append `_FILE` to certain variables to provide their configuration in a separate file.
- [AI Assistant](/hosting/configuration/environment-variables/ai-assistant/) - [Binary data](/hosting/configuration/environment-variables/binary-data/) - [Credentials](/hosting/configuration/environment-variables/credentials/) - [Database](/hosting/configuration/environment-variables/database/) - [Deployment](/hosting/configuration/environment-variables/deployment/) - [Endpoints](/hosting/configuration/environment-variables/endpoints/) - [Executions](/hosting/configuration/environment-variables/executions/) - [External data storage](/hosting/configuration/environment-variables/external-data-storage/) - [External hooks](/hosting/configuration/environment-variables/external-hooks/) - [External secrets](/hosting/configuration/environment-variables/external-secrets/) - [Insights](/hosting/configuration/environment-variables/insights/) - [Logs](/hosting/configuration/environment-variables/logs/) - [License](/hosting/configuration/environment-variables/licenses/) - [Nodes](/hosting/configuration/environment-variables/nodes/) - [OpenTelemetry](/hosting/configuration/environment-variables/opentelemetry/) - [Queue mode](/hosting/configuration/environment-variables/queue-mode/) - [Security](/hosting/configuration/environment-variables/security/) - [SSO](/hosting/configuration/environment-variables/sso/) - [SSRF protection](/hosting/configuration/environment-variables/ssrf-protection/) - [Source control](/hosting/configuration/environment-variables/source-control/) - [Task runners](/hosting/configuration/environment-variables/task-runners/) - [Timezone and localization](/hosting/configuration/environment-variables/timezone-localization/) - [User management and 2FA](/hosting/configuration/environment-variables/user-management-smtp-2fa/) - [Workflows](/hosting/configuration/environment-variables/workflows/) - [Workflow history](/hosting/configuration/environment-variables/workflow-history/) # AI Assistant environment variables | Variable | Type | Default | Description | | 
--------------------------- | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_AI_ASSISTANT_BASE_URL` | String | (empty) | Base URL of the AI assistant service, for example `https://ai-assistant.n8n.io`. Required if you self-host n8n and want to enable the AI Assistant. | # Binary data environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. By default, n8n uses memory to store binary data. Enterprise users can choose to use an external service instead. Refer to [External storage](../../../scaling/external-storage/) for more information on using external storage for binary data. | Variable | Type | Default | Description | | --------------------------------- | ------ | ---------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `N8N_AVAILABLE_BINARY_DATA_MODES` | String | `filesystem` | A comma-separated list of available binary data modes. | | `N8N_BINARY_DATA_STORAGE_PATH` | String | `N8N_USER_FOLDER/binaryData` | The path where n8n stores binary data. | | `N8N_DEFAULT_BINARY_DATA_MODE` | String | `default` | The default binary data mode. `default` keeps binary data in memory. Set to `filesystem` to use the filesystem, `s3` to use AWS S3, or `database` to use the database. Note that binary data pruning operates on the active binary data mode.
For example, if your instance stored data in S3 and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. This may change in the future. | # Credentials environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. Enable credential overwrites using the following environment variables. Refer to [Credential overwrites](../../credential-overwrites/) for details. | Variable | Type | Default | Description | | ------------------------------------- | ------- | ---------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `CREDENTIALS_OVERWRITE_DATA` /`_FILE` | \* | - | Overwrites for credentials. | | `CREDENTIALS_OVERWRITE_ENDPOINT` | String | - | The API endpoint to fetch credentials from. | | `CREDENTIALS_OVERWRITE_PERSISTENCE` | Boolean | `false` | Enable database persistence for credential overwrites. Required for multi-instance or queue mode to propagate overwrites to workers through a publish/subscribe approach. | | `CREDENTIALS_DEFAULT_NAME` | String | `My credentials` | The default name for credentials. | # Database environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. By default, n8n uses SQLite. n8n also supports PostgreSQL. n8n [deprecated support for MySQL and MariaDB](../../../../1-0-migration-checklist/#mysql-and-mariadb) in v1.0. This page outlines environment variables to configure your chosen database for your self-hosted n8n instance.
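As a sketch, a typical PostgreSQL setup combines `DB_TYPE` with the `DB_POSTGRESDB_*` variables. The host, database name, user, and secrets path below are placeholders for illustration, not defaults:

```
export DB_TYPE=postgresdb
export DB_POSTGRESDB_HOST=postgres.internal        # placeholder hostname
export DB_POSTGRESDB_PORT=5432
export DB_POSTGRESDB_DATABASE=n8n
export DB_POSTGRESDB_USER=n8n
# Read the password from a file instead of the environment (_FILE variant)
export DB_POSTGRESDB_PASSWORD_FILE=/run/secrets/pg-password
```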
| Variable | Type | Default | Description | | -------------------------- | ----------------------------------- | -------- | -------------------------------------------------------------------------------------------------- | | `DB_TYPE` /`_FILE` | Enum string: `sqlite`, `postgresdb` | `sqlite` | The database to use. | | `DB_TABLE_PREFIX` | \* | - | Prefix to use for table names. | | `DB_PING_INTERVAL_SECONDS` | Number | `2` | The interval, in seconds, between pings to the database to check if the connection is still alive. | ## PostgreSQL | Variable | Type | Default | Description | | ------------------------------------------------ | ------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `DB_POSTGRESDB_DATABASE` /`_FILE` | String | `n8n` | The name of the PostgreSQL database. | | `DB_POSTGRESDB_HOST` /`_FILE` | String | `localhost` | The PostgreSQL host. | | `DB_POSTGRESDB_PORT` /`_FILE` | Number | `5432` | The PostgreSQL port. | | `DB_POSTGRESDB_USER` /`_FILE` | String | `postgres` | The PostgreSQL user. | | `DB_POSTGRESDB_PASSWORD` /`_FILE` | String | - | The PostgreSQL password. | | `DB_POSTGRESDB_POOL_SIZE` /`_FILE` | Number | `2` | Controls how many parallel open Postgres connections n8n should have. Increasing it may help with resource utilization, but too many connections may degrade performance. | | `DB_POSTGRESDB_CONNECTION_TIMEOUT` /`_FILE` | Number | `20000` | Postgres connection timeout (ms). | | `DB_POSTGRESDB_IDLE_CONNECTION_TIMEOUT` /`_FILE` | Number | `30000` | The amount of time (ms) before an idle connection becomes eligible for eviction. | | `DB_POSTGRESDB_SCHEMA` /`_FILE` | String | `public` | The PostgreSQL schema. | | `DB_POSTGRESDB_SSL_ENABLED` /`_FILE` | Boolean | `false` | Whether to enable SSL.
Automatically enabled if `DB_POSTGRESDB_SSL_CA`, `DB_POSTGRESDB_SSL_CERT` or `DB_POSTGRESDB_SSL_KEY` is defined. | | `DB_POSTGRESDB_SSL_CA` /`_FILE` | String | - | The PostgreSQL SSL certificate authority. | | `DB_POSTGRESDB_SSL_CERT` /`_FILE` | String | - | The PostgreSQL SSL certificate. | | `DB_POSTGRESDB_SSL_KEY` /`_FILE` | String | - | The PostgreSQL SSL key. | | `DB_POSTGRESDB_SSL_REJECT_UNAUTHORIZED` /`_FILE` | Boolean | `true` | If n8n should reject unauthorized SSL connections (true) or not (false). | ## SQLite | Variable | Type | Default | Description | | ----------------------------- | ------- | ------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `DB_SQLITE_POOL_SIZE` | Number | `0` | Controls whether to open the SQLite file in [WAL mode](https://www.sqlite.org/wal.html) or [rollback journal mode](https://www.sqlite.org/lockingv3.html#rollback). Uses rollback journal mode when set to zero. When greater than zero, uses WAL mode with the value determining the number of parallel SQL read connections to configure. WAL mode is much more performant and reliable than the rollback journal mode. | | `DB_SQLITE_VACUUM_ON_STARTUP` | Boolean | `false` | Runs [VACUUM](https://www.sqlite.org/lang_vacuum.html) operation on startup to rebuild the database. Reduces file size and optimizes indexes. This is a long running blocking operation and increases start-up time. | # Deployment environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. 
Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. This page lists the deployment configuration options for your self-hosted n8n instance, including setting up access URLs, enabling templates, customizing encryption, and configuring server details. Proxy variable priorities The [`proxy-from-env`](https://www.npmjs.com/package/proxy-from-env) package that n8n uses to handle proxy environment variables (those ending with `_PROXY`) imposes a certain variable precedence. Notably, for proxy variables, lowercase versions (like `http_proxy`) have precedence over uppercase variants (for example `HTTP_PROXY`) when both are present. To learn more about proxy environment variables, check the [environment variables section of the package details](https://www.npmjs.com/package/proxy-from-env#environment-variables). | Variable | Type | Default | Description | | -------------------------------------- | ---------------------------- | ---------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `HTTP_PROXY` | String | - | A URL to proxy unencrypted HTTP requests through. When set, n8n proxies all unencrypted HTTP traffic from nodes through the proxy URL. | | `HTTPS_PROXY` | String | - | A URL to proxy TLS/SSL encrypted HTTP requests through. When set, n8n proxies all TLS/SSL encrypted HTTP traffic from nodes through the proxy URL. | | `ALL_PROXY` | String | - | A URL to proxy both unencrypted and encrypted HTTP requests through. When set, n8n uses this value when more specific variables (`HTTP_PROXY` or `HTTPS_PROXY`) aren't present. 
| | `NO_PROXY` | String | - | A comma-separated list of hostnames or URLs that should bypass the proxy. When using `HTTP_PROXY`, `HTTPS_PROXY`, or `ALL_PROXY`, n8n will connect directly to the URLs or hostnames defined here instead of using the proxy. | | `N8N_ENFORCE_GLOBAL_USER_AGENT` | Boolean | `false` | When set to `true`, n8n replaces the default bare `n8n` User-Agent string with an RFC-compliant value (`Mozilla/5.0 (compatible; n8n/; +https://n8n.io/)`) on all outbound HTTP requests. Enable this to prevent web application firewalls from blocking n8n requests. | | `N8N_GLOBAL_USER_AGENT_VALUE` | String | - | A custom User-Agent string to use for all outbound HTTP requests. Overrides the RFC-compliant default set by `N8N_ENFORCE_GLOBAL_USER_AGENT`. Useful when you don't want to disclose the n8n version to upstream servers. | | `N8N_EDITOR_BASE_URL` | String | - | Public URL where users can access the editor. Also used for emails sent from n8n and the redirect URL for SAML-based authentication. | | `N8N_DISABLE_UI` | Boolean | `false` | Set to `true` to disable the UI. | | `N8N_PREVIEW_MODE` | Boolean | `false` | Set to `true` to run in preview mode. | | `N8N_TEMPLATES_ENABLED` | Boolean | `false` | Enables (true) or disables (false) [workflow templates](../../../../glossary/#template-n8n). | | `N8N_TEMPLATES_HOST` | String | `https://api.n8n.io` | Change this if creating your own workflow template library. Note that to use your own workflow templates library, your API must provide the same endpoints and response structure as n8n's. Refer to [Workflow templates](../../../../workflows/templates/) for more information. | | `N8N_ENCRYPTION_KEY` | String | Random key generated by n8n | Provide a custom key used to encrypt credentials in the n8n database. By default, n8n generates a random key on first launch.
| | `N8N_ENV_FEAT_ENCRYPTION_KEY_ROTATION` | Boolean | `false` | Set to `true` on all instances (main and workers) to enable [encryption key rotation](../../../securing/encryption-key-rotation/). One-way change: take a full database backup first. | | `N8N_USER_FOLDER` | String | `user-folder` | Provide the path where n8n will create the `.n8n` folder. This directory stores user-specific data, such as the database file and encryption key. | | `N8N_PATH` | String | `/` | The path n8n deploys to. Combining `N8N_PATH` with reverse proxies can cause folder navigation issues. Use a subdomain (for example, `n8n.example.com`) or use `N8N_PATH` without a reverse proxy. | | `N8N_HOST` | String | `localhost` | Host name n8n runs on. | | `N8N_PORT` | Number | `5678` | The HTTP port n8n runs on. | | `N8N_LISTEN_ADDRESS` | String | `::` | The IP address n8n should listen on. | | `N8N_PROTOCOL` | Enum string: `http`, `https` | `http` | The protocol used to reach n8n. | | `N8N_SSL_KEY` | String | - | The SSL key for HTTPS protocol. | | `N8N_SSL_CERT` | String | - | The SSL certificate for HTTPS protocol. | | `N8N_PERSONALIZATION_ENABLED` | Boolean | `true` | Whether to ask users personalization questions and then customize n8n accordingly. | | `N8N_VERSION_NOTIFICATIONS_ENABLED` | Boolean | `true` | When enabled, n8n sends notifications of new versions and security updates. | | `N8N_VERSION_NOTIFICATIONS_ENDPOINT` | String | `https://api.n8n.io/versions/` | The endpoint to retrieve version information from. | | `N8N_VERSION_NOTIFICATIONS_INFO_URL` | String | `https://docs.n8n.io/getting-started/installation/updating.html` | The URL displayed in the New Versions panel for more information. | | `N8N_DIAGNOSTICS_ENABLED` | Boolean | `true` | Whether to share selected, anonymous [telemetry](../../../../privacy-security/privacy/) with n8n. Note that if you set this to `false`, you can't enable Ask AI in the Code node.
| | `N8N_DIAGNOSTICS_CONFIG_FRONTEND` | String | `1zPn9bgWPzlQc0p8Gj1uiK6DOTn;https://telemetry.n8n.io` | Telemetry configuration for the frontend. | | `N8N_DIAGNOSTICS_CONFIG_BACKEND` | String | `1zPn7YoGC3ZXE9zLeTKLuQCB4F6;https://telemetry.n8n.io/v1/batch` | Telemetry configuration for the backend. | | `N8N_PUSH_BACKEND` | String | `websocket` | Choose whether the n8n backend uses server-sent events (`sse`) or WebSockets (`websocket`) to send changes to the UI. | | `VUE_APP_URL_BASE_API` | String | `http://localhost:5678/` | Used when building the `n8n-editor-ui` package manually to set how the frontend can reach the backend API. Refer to [Configure the Base URL](../../configuration-examples/base-url/). | | `N8N_HIRING_BANNER_ENABLED` | Boolean | `true` | Whether to show the n8n hiring banner in the console (true) or not (false). | | `N8N_PUBLIC_API_SWAGGERUI_DISABLED` | Boolean | `false` | Whether the Swagger UI (API playground) is disabled (true) or not (false). | | `N8N_PUBLIC_API_DISABLED` | Boolean | `false` | Whether to disable the public API (true) or not (false). | | `N8N_PUBLIC_API_ENDPOINT` | String | `api` | Path for the public API endpoints. | | `N8N_GRACEFUL_SHUTDOWN_TIMEOUT` | Number | `30` | How long (in seconds) the n8n process should wait for components to shut down before exiting. | | `N8N_DEV_RELOAD` | Boolean | `false` | When working on the n8n source code, set this to `true` to automatically reload or restart the application when changes occur in the source code files. | | `N8N_REINSTALL_MISSING_PACKAGES` | Boolean | `false` | If set to `true`, n8n will automatically attempt to reinstall any missing packages. | | `N8N_TUNNEL_SUBDOMAIN` | String | - | Specifies the subdomain for the n8n tunnel. If not set, n8n generates a random subdomain. | | `N8N_PROXY_HOPS` | Number | `0` | Number of reverse proxies n8n is running behind.
| # Endpoints environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. This page lists environment variables for customizing endpoints in n8n. | Variable | Type | Default | Description | | ----------------------------------------------- | ------- | ----------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_PAYLOAD_SIZE_MAX` | Number | `16` | The maximum payload size in MiB. | | `N8N_FORMDATA_FILE_SIZE_MAX` | Number | `200` | Max payload size for files in form-data webhook payloads in MiB. | | `N8N_METRICS` | Boolean | `false` | Whether to enable the `/metrics` endpoint. | | `N8N_METRICS_PREFIX` | String | `n8n_` | Optional prefix for n8n specific metrics names. | | `N8N_METRICS_INCLUDE_DEFAULT_METRICS` | Boolean | `true` | Whether to expose default system and node.js metrics. | | `N8N_METRICS_INCLUDE_CACHE_METRICS` | Boolean | false | Whether to include metrics (true) for cache hits and misses, or not include them (false). | | `N8N_METRICS_INCLUDE_MESSAGE_EVENT_BUS_METRICS` | Boolean | `false` | Whether to include metrics (true) for events, or not include them (false). | | `N8N_METRICS_INCLUDE_WORKFLOW_ID_LABEL` | Boolean | `false` | Whether to include a label for the workflow ID on workflow metrics. | | `N8N_METRICS_INCLUDE_NODE_TYPE_LABEL` | Boolean | `false` | Whether to include a label for the node type on node metrics. | | `N8N_METRICS_INCLUDE_CREDENTIAL_TYPE_LABEL` | Boolean | `false` | Whether to include a label for the credential type on credential metrics. | | `N8N_METRICS_INCLUDE_API_ENDPOINTS` | Boolean | `false` | Whether to expose metrics for API endpoints. 
| | `N8N_METRICS_INCLUDE_API_PATH_LABEL` | Boolean | `false` | Whether to include a label for the path of API invocations. | | `N8N_METRICS_INCLUDE_API_METHOD_LABEL` | Boolean | `false` | Whether to include a label for the HTTP method (GET, POST, ...) of API invocations. | | `N8N_METRICS_INCLUDE_API_STATUS_CODE_LABEL` | Boolean | `false` | Whether to include a label for the HTTP status code (200, 404, ...) of API invocations. | | `N8N_METRICS_INCLUDE_QUEUE_METRICS` | Boolean | `false` | Whether to include metrics for jobs in scaling mode. | | `N8N_METRICS_QUEUE_METRICS_INTERVAL` | Integer | `20` | How often (in seconds) to update queue metrics. | | `N8N_ENDPOINT_REST` | String | `rest` | The path used for the REST endpoint. | | `N8N_ENDPOINT_WEBHOOK` | String | `webhook` | The path used for the webhook endpoint. | | `N8N_ENDPOINT_WEBHOOK_TEST` | String | `webhook-test` | The path used for the test-webhook endpoint. | | `N8N_ENDPOINT_WEBHOOK_WAIT` | String | `webhook-waiting` | The path used for the waiting-webhook endpoint. | | `N8N_ENDPOINT_HEALTH` | String | `healthz` | The path used for the health check endpoint. | | `WEBHOOK_URL` | String | - | Used to manually provide the Webhook URL when running n8n behind a reverse proxy. See [here](../../configuration-examples/webhook-url/) for more details. | | `N8N_DISABLE_PRODUCTION_MAIN_PROCESS` | Boolean | `false` | Disables production webhooks on the main process. This helps ensure the main process receives no HTTP traffic load when using webhook-specific processes. | # Executions environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. This page lists environment variables to configure workflow execution settings.
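For illustration, a hypothetical timeout-and-pruning setup using these execution variables might look like this; all values are example choices, not recommendations:

```
export EXECUTIONS_MODE=regular
export EXECUTIONS_TIMEOUT=300        # stop workflows after 5 minutes by default
export EXECUTIONS_TIMEOUT_MAX=3600   # users may raise a workflow's timeout up to 1 hour
export EXECUTIONS_DATA_PRUNE=true
export EXECUTIONS_DATA_MAX_AGE=168   # delete execution data older than 7 days
```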
| Variable | Type | Default | Description | | --------------------------------------------------- | ------------------------------- | --------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `EXECUTIONS_MODE` | Enum string: `regular`, `queue` | `regular` | Whether executions should run directly or via a queue. Refer to [Queue mode](../../../scaling/queue-mode/) for more details. | | `EXECUTIONS_TIMEOUT` | Number | `-1` | Sets a default timeout (in seconds) for all workflows after which n8n stops their execution. Users can override this for individual workflows up to the duration set in `EXECUTIONS_TIMEOUT_MAX`. Set `EXECUTIONS_TIMEOUT` to `-1` to disable. | | `EXECUTIONS_TIMEOUT_MAX` | Number | `3600` | The maximum execution time (in seconds) that users can set for an individual workflow. | | `N8N_AI_TIMEOUT_MAX` | Number | `3600000` | Sets the HTTP request timeout in milliseconds for AI and LLM nodes (such as OpenAI, Anthropic, Mistral, and Ollama). This controls how long n8n waits for responses from AI services before timing out. Useful for slower local AI services or complex prompts that require longer processing time. | | `EXECUTIONS_DATA_SAVE_ON_ERROR` | Enum string: `all`, `none` | `all` | Whether n8n saves execution data on error. | | `EXECUTIONS_DATA_SAVE_ON_SUCCESS` | Enum string: `all`, `none` | `all` | Whether n8n saves execution data on success. | | `EXECUTIONS_DATA_SAVE_ON_PROGRESS` | Boolean | `false` | Whether to save progress for each node executed (true) or not (false). | | `EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS` | Boolean | `true` | Whether to save data of executions when started manually. | | `EXECUTIONS_DATA_PRUNE` | Boolean | `true` | Whether to delete data of past executions on a rolling basis.
| | `EXECUTIONS_DATA_MAX_AGE` | Number | `336` | The execution age (in hours) before it's deleted. | | `EXECUTIONS_DATA_PRUNE_MAX_COUNT` | Number | `10000` | Maximum number of executions to keep in the database. 0 = no limit | | `EXECUTIONS_DATA_HARD_DELETE_BUFFER` | Number | `1` | How old (hours) the finished execution data has to be to get hard-deleted. By default, this buffer excludes recent executions as the user may need them while building a workflow. | | `EXECUTIONS_DATA_PRUNE_HARD_DELETE_INTERVAL` | Number | `15` | How often (minutes) execution data should be hard-deleted. | | `EXECUTIONS_DATA_PRUNE_SOFT_DELETE_INTERVAL` | Number | `60` | How often (minutes) execution data should be soft-deleted. | | `N8N_CONCURRENCY_PRODUCTION_LIMIT` | Number | `-1` | Max production executions allowed to run concurrently, in both regular and scaling modes. `-1` to disable in regular mode. | | `N8N_WORKFLOW_AUTODEACTIVATION_ENABLED` | Boolean | `false` | Whether workflows are automatically unpublished after repeated crashed executions. | | `N8N_WORKFLOW_AUTODEACTIVATION_MAX_LAST_EXECUTIONS` | Number | `3` | Number of crashed executions before unpublishing a workflow. | # External data storage environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. Refer to [External storage](../../../scaling/external-storage/) for more information on using external storage for binary data. 
| Variable | Type | Default | Description | | ------------------------------------------ | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_EXTERNAL_STORAGE_S3_HOST` | String | - | Host of the n8n bucket in S3-compatible external storage. For example, `s3.us-east-1.amazonaws.com` | | `N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME` | String | - | Name of the n8n bucket in S3-compatible external storage. | | `N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION` | String | - | Region of the n8n bucket in S3-compatible external storage. For example, `us-east-1` | | `N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY` | String | - | Access key in S3-compatible external storage | | `N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET` | String | - | Access secret in S3-compatible external storage. | | `N8N_EXTERNAL_STORAGE_S3_AUTH_AUTO_DETECT` | Boolean | - | Use automatic credential detection to authenticate S3 calls for external storage. This will ignore the access key and access secret and use the default [credential provider chain](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html#credchain). | # External hooks environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. You can define external hooks that n8n executes whenever a specific operation runs. Refer to [External hooks](../../external-hooks/) for the full reference, including available hooks and file formatting. 
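For example, pointing n8n at two backend hook files with the default `:` separator might look like this (the file paths are placeholders):

```
# Two hook files, separated by the default EXTERNAL_HOOK_FILES_SEPARATOR (:)
export EXTERNAL_HOOK_FILES=/data/hooks/audit.js:/data/hooks/backup.js
```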
| Variable | Type | Default | Description | | ------------------------------- | ------ | ------- | -------------------------------------------------------------------------------------------------------------------------------------- | | `EXTERNAL_HOOK_FILES` | String | - | Files containing backend external hooks. Provide multiple files separated by the character defined in `EXTERNAL_HOOK_FILES_SEPARATOR`. | | `EXTERNAL_HOOK_FILES_SEPARATOR` | String | `:` | Separator character for `EXTERNAL_HOOK_FILES`. Use `;` on Windows to avoid conflicts with drive-letter paths like `C:\`. | | `EXTERNAL_FRONTEND_HOOKS_URLS` | String | - | URLs to files containing frontend external hooks. Provide multiple URLs as a colon-separated list ("`:`"). | # External secrets environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. You can use an external secrets store to manage credentials for n8n. Refer to [External secrets](../../../../external-secrets/) for details. | Variable | Type | Default | Description | | -------------------------------------- | ------ | ----------------- | --------------------------------------------------- | | `N8N_EXTERNAL_SECRETS_UPDATE_INTERVAL` | Number | `300` (5 minutes) | How often (in seconds) to check for secret updates. | # Insights environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. Insights gives instance owners and admins visibility into how workflows perform over time. Refer to [Insights](../../../../insights/) for details. 
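For example, a hypothetical instance that doesn't want Insights can switch off the feature and its metrics collection with the module toggle:

```
# Disable the Insights module entirely on this instance
export N8N_DISABLED_MODULES=insights
```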
Storage and compaction thresholds `N8N_INSIGHTS_COMPACTION_HOURLY_TO_DAILY_THRESHOLD_DAYS` and `N8N_INSIGHTS_COMPACTION_DAILY_TO_WEEKLY_THRESHOLD_DAYS` set how many days n8n keeps high-resolution insights (metrics stored in one-hour buckets) before each compaction step (from the hour bucket to the day bucket, then from the day bucket to the week bucket). You configure those day counts on your instance. Raising the values delays compaction. That adds more rows to `insights_by_period` and increases database usage. For how this relates to retention, see [Insights](../../../../insights/#disable-or-configure-insights-metrics-collection). | Variable | Type | Default | Description | | -------------------------------------------------------- | ------ | ------- | ----------------------------------------------------------------------------------------------------------- | | `N8N_DISABLED_MODULES` | String | - | Set to `insights` to disable the feature and metrics collection for an instance. | | `N8N_INSIGHTS_COMPACTION_BATCH_SIZE` | Number | 500 | The number of raw insights data to compact in a single batch. | | `N8N_INSIGHTS_COMPACTION_DAILY_TO_WEEKLY_THRESHOLD_DAYS` | Number | 180 | Age in days after which n8n compacts daily insights rows to weekly. | | `N8N_INSIGHTS_COMPACTION_HOURLY_TO_DAILY_THRESHOLD_DAYS` | Number | 90 | Age in days after which n8n compacts hourly insights rows to daily. | | `N8N_INSIGHTS_COMPACTION_INTERVAL_MINUTES` | Number | 60 | Interval (in minutes) at which compaction should run. | | `N8N_INSIGHTS_FLUSH_BATCH_SIZE` | Number | 1000 | The maximum number of insights data to keep in the buffer before flushing. | | `N8N_INSIGHTS_FLUSH_INTERVAL_SECONDS` | Number | 30 | The interval (in seconds) at which n8n flushes insights data to the database. | | `N8N_INSIGHTS_MAX_AGE_DAYS` | Number | 365 | Number of days n8n keeps compacted insights data before pruning. The maximum value is 730 (two years). 
| | `N8N_INSIGHTS_PRUNE_CHECK_INTERVAL_HOURS` | Number | 24 | How often (in hours) the instance checks for insights data older than the effective max age and deletes it. | # License environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. To enable certain licensed features, you must first activate your license. You can do this either through the UI or by setting environment variables. For more information, see [license key](../../../../license-key/). | Variable | Type | Default | Description | | ----------------------------------------- | ------- | ------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_HIDE_USAGE_PAGE` | boolean | `false` | Hide the usage and plans page in the app. | | `N8N_LICENSE_ACTIVATION_KEY` | String | `''` | Activation key to initialize license. Not applicable if the n8n instance was already activated. | | `N8N_LICENSE_AUTO_RENEW_ENABLED` | Boolean | `true` | Enables (true) or disables (false) autorenewal for licenses. If disabled, you need to manually renew the license every 10 days by navigating to **Settings** > **Usage and plan**, and pressing `F5`. Failure to renew the license will disable all licensed features. | | `N8N_LICENSE_DETACH_FLOATING_ON_SHUTDOWN` | Boolean | `true` | Controls whether the instance releases [floating entitlements](../../../../glossary/#entitlement-n8n) back to the pool upon shutdown. Set to `true` to allow other instances to reuse the entitlements, or `false` to retain them. 
For production instances that must always keep their licensed features, set this to `false`. | | `N8N_LICENSE_SERVER_URL` | String | `https://license.n8n.io/v1` | Server URL to retrieve license. | | `N8N_LICENSE_TENANT_ID` | Number | `1` | Tenant ID associated with the license. Only set this variable if explicitly instructed by n8n. | | `https_proxy_license_server` | String | `https://user:pass@proxy:port` | Proxy server URL for HTTPS requests to retrieve license. This variable name needs to be lowercase. | # Logs environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. This page lists environment variables to set up logging for debugging. Refer to [Logging in n8n](../../../logging-monitoring/logging/) for details. ## n8n logs | Variable | Type | Default | Description | | ------------------------------- | -------------------------------------------------------------- | ----------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_LOG_LEVEL` | Enum string: `info`, `warn`, `error`, `debug` | `info` | Log output level. Refer to [Log levels](../../../logging-monitoring/logging/#log-levels) for details. | | `N8N_LOG_OUTPUT` | Enum string: `console`, `file` | `console` | Where to output logs. Provide multiple values as a comma-separated list. | | `N8N_LOG_FORMAT` | Enum string: `text`, `json` | `text` | The log format to use. `text` prints human readable messages. `json` prints one JSON object per line containing the message, level, timestamp, and all metadata. This is useful for production monitoring as well as debugging. 
| | `N8N_LOG_CRON_ACTIVE_INTERVAL` | Number | `0` | Interval in minutes to log currently active cron jobs. Set to `0` to disable. | | `N8N_LOG_FILE_COUNT_MAX` | Number | `100` | Max number of log files to keep. | | `N8N_LOG_FILE_SIZE_MAX` | Number | `16` | Max size of each log file in MB. | | `N8N_LOG_FILE_LOCATION` | String | `/logs/n8n.log` | Log file location. Requires `N8N_LOG_OUTPUT` set to `file`. | | `DB_LOGGING_ENABLED` | Boolean | `false` | Whether to enable database-specific logging. | | `DB_LOGGING_OPTIONS` | Enum string: `query`, `error`, `schema`, `warn`, `info`, `log` | `error` | Database log output level. To enable all logging, specify `all`. Refer to [TypeORM logging options](https://orkhan.gitbook.io/typeorm/docs/docs/advanced-topics/5-logging#logging-options) for details. | | `DB_LOGGING_MAX_EXECUTION_TIME` | Number | `1000` | Maximum execution time (in milliseconds) before n8n logs a warning. Set to `0` to disable the long-running query warning. | | `CODE_ENABLE_STDOUT` | Boolean | `false` | Set to `true` to send Code node logs from `console.log` or `print` to the process's stdout. Applies only to production executions. | | `NO_COLOR` | any | `undefined` | Set to any value to output logs without ANSI colors. For more information, see the [no-color.org website](https://no-color.org/). | ## Log streaming Refer to [Log streaming](../../../../log-streaming/) for more information on this feature. 
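As a sketch of the per-process setup covered by `N8N_EVENTBUS_LOGWRITER_LOGFULLPATH` in the table below, each n8n process sharing a writable volume can be given its own event log path (the directory is illustrative; the path must end in `.log`):

```shell
# Sketch: give each n8n process a distinct event log file when several
# processes share one writable volume (directory illustrative).
# Falls back to "main" if HOSTNAME is unset; the path must end in .log.
export N8N_EVENTBUS_LOGWRITER_LOGFULLPATH="/var/log/n8n/event-${HOSTNAME:-main}.log"
```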
| Variable | Type | Default | Description | | -------- | ---- | ------- | ----------- | | `N8N_EVENTBUS_CHECKUNSENTINTERVAL` | Number | `0` | How often (in milliseconds) to check for unsent event messages. May in rare cases send a message twice. Set to `0` to disable it. | | `N8N_EVENTBUS_LOGWRITER_SYNCFILEACCESS` | Boolean | `false` | Whether all file access happens synchronously within the thread (true) or not (false). | | `N8N_EVENTBUS_LOGWRITER_KEEPLOGCOUNT` | Number | `3` | Number of event log files to keep. | | `N8N_EVENTBUS_LOGWRITER_MAXFILESIZEINKB` | Number | `10240` | Maximum size (in kilobytes) of an event log file before a new one starts. | | `N8N_EVENTBUS_LOGWRITER_LOGBASENAME` | String | `n8nEventLog` | Basename of the event log file. Ignored when `N8N_EVENTBUS_LOGWRITER_LOGFULLPATH` is set. | | `N8N_EVENTBUS_LOGWRITER_LOGFULLPATH` | String | `''` | Absolute path to the event log file. Must end in `.log`. When set, this path is used verbatim and overrides `N8N_EVENTBUS_LOGWRITER_LOGBASENAME` and the default per-process suffix. Use this to give each n8n process a unique event log path when multiple processes share a writable filesystem. Refer to [Per-process event log files](../../../../log-streaming/#per-process-event-log-files) for details. | | `N8N_EVENTBUS_LOGWRITER_MAXTOTALMESSAGESPERFILE` | Number | `500000` | Maximum number of lines parsed from a single event log file during recovery. Bounds memory use when an event log file contains many invalid lines. 
| ### Manage log streaming destinations from environment variables Set `N8N_LOG_STREAMING_MANAGED_BY_ENV` to `true` to manage log streaming destinations from environment variables. See [Manage instance settings using environment variables](../../settings-env-vars/) for how the activation pattern works, and [Configure log streaming destinations using environment variables](../../../../log-streaming/#configure-using-environment-variables) for the per-destination JSON shape. | Variable | Type | Default | Description | | ---------------------------------- | ----------- | ------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_LOG_STREAMING_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage log streaming from environment variables. When `true`, n8n applies the log streaming variables on every startup and locks the matching UI controls. | | `N8N_LOG_STREAMING_DESTINATIONS` | JSON string | - | JSON array of log streaming destinations. Each destination is an object with a `type` of `webhook`, `syslog`, or `sentry`, plus the configuration for that type. | # Nodes environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. This page lists the environment variables configuration options for managing [nodes](../../../../glossary/#node-n8n) in n8n, including specifying which nodes to load or exclude, importing built-in or external modules in the Code node, and enabling community nodes. 
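For example, the settings in the next section can be combined to lock down code execution. The sketch below keeps the default node excludes while allowing only one built-in and one external module in the Code node; the module names are illustrative, not recommendations:

```shell
# Sketch: restrict Code node imports to the crypto built-in and the
# moment package (module names illustrative). When these variables are
# unset, n8n disables importing modules by default.
export NODE_FUNCTION_ALLOW_BUILTIN="crypto"
export NODE_FUNCTION_ALLOW_EXTERNAL="moment"
export NODES_EXCLUDE='["n8n-nodes-base.executeCommand", "n8n-nodes-base.localFileTrigger"]'
```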
## Nodes and community node settings | Variable | Type | Default | Description | | ---------------------------------------- | ---------------- | -------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `N8N_COMMUNITY_PACKAGES_AUTH_TOKEN` | String | - | Authentication token for a private npm registry. Use with `N8N_COMMUNITY_PACKAGES_REGISTRY` to authenticate requests when installing community nodes from a private registry. | | `N8N_COMMUNITY_PACKAGES_ENABLED` | Boolean | `true` | Enables (true) or disables (false) the functionality to install and load community nodes. If set to false, neither verified nor unverified community packages will be available, regardless of their individual settings. | | `N8N_COMMUNITY_PACKAGES_PREVENT_LOADING` | Boolean | `false` | Prevents (true) or allows (false) loading installed community nodes on instance startup. Use this if a faulty node prevents the instance from starting. | | `N8N_COMMUNITY_PACKAGES_REGISTRY` | String | `https://registry.npmjs.org` | NPM registry URL to pull community packages from (license required). | | `N8N_CUSTOM_EXTENSIONS` | String | - | Specify the path to directories containing your custom nodes. | | `N8N_PYTHON_ENABLED` | Boolean | `true` | Whether to enable Python execution on the Code node. | | `N8N_UNVERIFIED_PACKAGES_ENABLED` | Boolean | `true` | When `N8N_COMMUNITY_PACKAGES_ENABLED` is true, this variable controls whether to enable the installation and use of unverified community nodes from an NPM registry (true) or not (false). 
| | `N8N_VERIFIED_PACKAGES_ENABLED` | Boolean | `true` | When `N8N_COMMUNITY_PACKAGES_ENABLED` is true, this variable controls whether to show verified community nodes in the nodes panel for installation and use (true) or to hide them (false). | | `NODE_FUNCTION_ALLOW_BUILTIN` | String | - | Permit users to import specific built-in modules in the Code node. Use * to allow all. n8n disables importing modules by default. | | `NODE_FUNCTION_ALLOW_EXTERNAL` | String | - | Permit users to import specific external modules (from `n8n/node_modules`) in the Code node. n8n disables importing modules by default. | | `NODES_ERROR_TRIGGER_TYPE` | String | `n8n-nodes-base.errorTrigger` | Specify which node type to use as Error Trigger. | | `NODES_EXCLUDE` | Array of strings | `[\"n8n-nodes-base.executeCommand\", \"n8n-nodes-base.localFileTrigger\"]` | Specify which nodes not to load. For example, to block nodes that can be a security risk if users aren't trustworthy: `NODES_EXCLUDE: "[\"n8n-nodes-base.executeCommand\", \"@n8n/n8n-nodes-langchain.lmChatDeepSeek\"]"`. To enable all nodes, specify `NODES_EXCLUDE: "[]"`. | | `NODES_INCLUDE` | Array of strings | - | Specify which nodes to load. | ## Manage installed community packages Available from n8n v2.21.0 Pre-provision installed [community packages](../../../../integrations/community-nodes/installation/) from environment variables. See [Manage instance settings using environment variables](../../settings-env-vars/) for the `*_MANAGED_BY_ENV` pattern. 
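A minimal sketch of the pre-provisioning pattern using the variables below; the package name and version are illustrative placeholders, not a real recommendation:

```shell
# Sketch: reconcile installed community packages against this list on
# every startup (package name and version illustrative).
export N8N_COMMUNITY_PACKAGES_MANAGED_BY_ENV=true
export N8N_COMMUNITY_PACKAGES='[{"name": "n8n-nodes-example", "version": "1.2.3"}]'
```

Remember that with this setting enabled, any installed package missing from the list is uninstalled on startup.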
| Variable | Type | Default | Description | | --------------------------------------- | ----------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_COMMUNITY_PACKAGES_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage installed community packages from environment variables. When `true`, n8n reconciles the installed packages against `N8N_COMMUNITY_PACKAGES` on every startup, installing missing packages, correcting versions, and **uninstalling any currently-installed packages that aren't in the list**. The **Community nodes** settings page becomes read-only: you can't install, update, or uninstall packages from the UI while this is enabled. | | `N8N_COMMUNITY_PACKAGES` | JSON string | - | JSON array of community packages to install. Each entry is an object with a `name` (required) and optional `version` and `checksum` fields. You can also embed the version in the name as `@`. See [environment variable installation](../../../../integrations/community-nodes/installation/env-install/) for the full per-field reference. | # OpenTelemetry environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. n8n can export workflow and node execution traces over OTLP to an OpenTelemetry collector. Refer to [OpenTelemetry tracing](../../../logging-monitoring/opentelemetry/) for details. 
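For example, a minimal tracing setup against a collector might look like this sketch (the collector hostname is illustrative; the remaining variables in the table below keep their defaults):

```shell
# Sketch: enable tracing and export a quarter of traces to a collector
# (collector hostname illustrative; other OTel variables use defaults).
export N8N_OTEL_ENABLED=true
export N8N_OTEL_EXPORTER_OTLP_ENDPOINT="http://otel-collector:4318"
export N8N_OTEL_TRACES_SAMPLE_RATE=0.25
```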
| Variable | Type | Default | Description | Available from | | -------- | ---- | ------- | ----------- | -------------- | | `N8N_OTEL_ENABLED` | Boolean | `false` | Whether to enable OpenTelemetry tracing. When set to `false`, n8n doesn't load the OpenTelemetry SDK. | 2.19.0 | | `N8N_OTEL_EXPORTER_OTLP_ENDPOINT` | String | `http://localhost:4318` | Base URL of the OTLP HTTP endpoint to export traces to. n8n appends the value of `N8N_OTEL_EXPORTER_OTLP_TRACING_PATH` to this URL. | 2.19.0 | | `N8N_OTEL_EXPORTER_OTLP_TRACING_PATH` | String | `/v1/traces` | Path appended to `N8N_OTEL_EXPORTER_OTLP_ENDPOINT` for exporting traces. | 2.19.0 | | `N8N_OTEL_EXPORTER_OTLP_HEADERS` | String | - | Comma-separated list of `key=value` pairs to send as HTTP headers with each OTLP request. Use this for authentication tokens or tenant headers. For example: `authorization=Bearer ,x-tenant=acme`. | 2.19.0 | | `N8N_OTEL_EXPORTER_SERVICE_NAME` | String | `n8n` | Value of the `service.name` resource attribute on exported spans. | 2.19.0 | | `N8N_OTEL_TRACES_SAMPLE_RATE` | Number | `1.0` | Fraction of traces to export, between `0` and `1`. n8n uses a trace ID ratio sampler, so all spans in a trace are either sampled or dropped together. | 2.19.0 | | `N8N_OTEL_TRACES_INCLUDE_NODE_SPANS` | Boolean | `true` | Whether to emit a `node.execute` span for each node execution. Set to `false` to export workflow-level spans only. | 2.19.0 | | `N8N_OTEL_TRACES_INJECT_OUTBOUND` | Boolean | `true` | Whether to inject the W3C `traceparent`/`tracestate` headers into outbound HTTP requests made by nodes that use n8n's HTTP helpers. 
| 2.19.0 | | `N8N_OTEL_STARTUP_CONNECTIVITY_TIMEOUT_MS` | Number | `2000` | Timeout (in milliseconds) for the startup connectivity check against the OTLP endpoint. n8n logs a warning if the endpoint isn't reachable within this period. | 2.19.0 | # Queue mode environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. You can run n8n in different modes depending on your needs. Queue mode provides the best scalability. Refer to [Queue mode](../../../scaling/queue-mode/) for more information. | Variable | Type | Default | Description | | --------------------------------------- | ------- | ----------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `OFFLOAD_MANUAL_EXECUTIONS_TO_WORKERS` | Boolean | `false` | Set to `true` if you want manual executions to run on the worker rather than on main. | | `QUEUE_BULL_PREFIX` | String | - | Prefix to use for all queue keys. | | `QUEUE_BULL_REDIS_DB` | Number | `0` | The Redis database used. | | `QUEUE_BULL_REDIS_HOST` | String | `localhost` | The Redis host. | | `QUEUE_BULL_REDIS_PORT` | Number | `6379` | The Redis port used. | | `QUEUE_BULL_REDIS_USERNAME` | String | - | The Redis username (needs Redis version 6 or above). Don't define it for Redis < 6 compatibility | | `QUEUE_BULL_REDIS_PASSWORD` | String | - | The Redis password. | | `QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD` | Number | `10000` | The Redis timeout threshold (in ms). 
| | `QUEUE_BULL_REDIS_CLUSTER_NODES` | String | - | Expects a comma-separated list of Redis Cluster nodes in the format `host:port`, for the Redis client to initially connect to. If running in queue mode (`EXECUTIONS_MODE = queue`), setting this variable will create a Redis Cluster client instead of a Redis client, and n8n will ignore `QUEUE_BULL_REDIS_HOST` and `QUEUE_BULL_REDIS_PORT`. | | `QUEUE_BULL_REDIS_TLS` | Boolean | `false` | Enable TLS on Redis connections. | | `QUEUE_BULL_REDIS_DUALSTACK` | Boolean | `false` | Enable dual-stack support (IPv4 and IPv6) on Redis connections. | | `QUEUE_WORKER_TIMEOUT` (**deprecated**) | Number | `30` | **Deprecated** Use `N8N_GRACEFUL_SHUTDOWN_TIMEOUT` instead. How long (in seconds) n8n should wait for running executions before exiting the worker process on shutdown. | | `QUEUE_HEALTH_CHECK_ACTIVE` | Boolean | `false` | Whether to enable health checks (true) or disable them (false). | | `QUEUE_HEALTH_CHECK_PORT` | Number | 5678 | The port to serve health checks on. If you experience a port conflict error when starting a worker server using its default port, change this. | | `QUEUE_WORKER_LOCK_DURATION` | Number | `60000` | The lease period (in ms) for a worker to work on a message. | | `QUEUE_WORKER_LOCK_RENEW_TIME` | Number | `10000` | How frequently (in ms) a worker should renew the lease. | | `QUEUE_WORKER_STALLED_INTERVAL` | Number | `30000` | How often (in ms) a worker should check for stalled jobs (use 0 for never). | | `QUEUE_WORKER_MAX_STALLED_COUNT` | Number | `1` | Maximum number of times a stalled job will be re-processed. | ## Multi-main setup Refer to [Configuring multi-main setup](../../../scaling/queue-mode/#configuring-multi-main-setup) for details. 
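Combining the queue-mode variables above with the multi-main settings below, a minimal sketch might look like this (the Redis hostname is illustrative; multi-main requires a license):

```shell
# Sketch: queue mode with multi-main enabled on a main instance
# (Redis hostname illustrative; multi-main requires a license).
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.internal
export N8N_MULTI_MAIN_SETUP_ENABLED=true
```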
| Variable | Type | Default | Description | | ------------------------------------- | ------- | ------- | --------------------------------------------------------------------- | | `N8N_MULTI_MAIN_SETUP_ENABLED` | Boolean | `false` | Whether to enable multi-main setup for queue mode (license required). | | `N8N_MULTI_MAIN_SETUP_KEY_TTL` | Number | `10` | Time to live (in seconds) for leader key in multi-main setup. | | `N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL` | Number | `3` | Interval (in seconds) for leader check in multi-main setup. | # Security environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. | Variable | Type | Default | Description | | -------------------------------------------- | ------------------------------------ | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_BLOCK_ENV_ACCESS_IN_NODE` | Boolean | `false` | Whether to allow users to access environment variables in expressions and the Code node (false) or not (true). | | `N8N_BLOCK_FILE_ACCESS_TO_N8N_FILES` | Boolean | `true` | Set to `true` to block access to all files in the `.n8n` directory and user defined configuration files. | | `N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS` | Boolean | `false` | Set to `true` to try to set 0600 permissions for the settings file, giving only the owner read and write access. | | `N8N_RESTRICT_FILE_ACCESS_TO` | String | | Limits access to files in these directories. Provide multiple files as a semicolon-separated list ("`;`"). 
| | `N8N_SECURITY_AUDIT_DAYS_ABANDONED_WORKFLOW` | Number | 90 | Number of days to consider a workflow abandoned if it's not executed. | | `N8N_CONTENT_SECURITY_POLICY` | String | `{}` | Set [Content-Security-Policy](https://developer.mozilla.org/en-US/docs/Web/HTTP/CSP) headers as [helmet.js](https://helmetjs.github.io/#content-security-policy) nested directives object. For example, `{ "frame-ancestors": ["http://localhost:3000"] }` | | `N8N_SECURE_COOKIE` | Boolean | `true` | Ensures that cookies are only sent over HTTPS, enhancing security. | | `N8N_SAMESITE_COOKIE` | Enum string: `strict`, `lax`, `none` | `lax` | Controls cross-site cookie behavior ([learn more](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Set-Cookie/SameSite)): - `strict`: Sent only for first-party requests. - `lax` (default): Sent with top-level navigation requests. - `none`: Sent in all contexts (requires HTTPS). | | `N8N_GIT_NODE_DISABLE_BARE_REPOS` | Boolean | `false` | Set to `true` to prevent the [Git node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.git/) from working with bare repositories, enhancing security. | | `N8N_GIT_NODE_ENABLE_HOOKS` | Boolean | `false` | Set to `true` to allow the [Git node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.git/) to execute Git hooks. | ## Security policy using environment variables Set `N8N_SECURITY_POLICY_MANAGED_BY_ENV` to `true` to manage the security policy from environment variables. See [Manage instance settings using environment variables](../../settings-env-vars/) for how the activation pattern works. 
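A sketch of an environment-managed policy using the variables in the table below; which settings to tighten is your call, these values are only illustrative:

```shell
# Sketch: enforce MFA and disable personal-space publishing, with the
# policy applied from environment variables on every startup
# (the matching UI controls become locked).
export N8N_SECURITY_POLICY_MANAGED_BY_ENV=true
export N8N_MFA_ENFORCED_ENABLED=true
export N8N_PERSONAL_SPACE_PUBLISHING_ENABLED=false
```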
| Variable | Type | Default | Description | | --------------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_SECURITY_POLICY_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage the security policy from environment variables. When `true`, n8n applies the security policy variables on every startup and locks the matching UI controls. | | `N8N_MFA_ENFORCED_ENABLED` | Boolean | `false` | Whether to enforce two-factor authentication for all users (`true`) or not (`false`). | | `N8N_PERSONAL_SPACE_PUBLISHING_ENABLED` | Boolean | `true` | Whether users can publish from their personal space (`true`) or not (`false`). | | `N8N_PERSONAL_SPACE_SHARING_ENABLED` | Boolean | `true` | Whether users can share resources from their personal space (`true`) or not (`false`). | # Source control environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. n8n uses Git-based source control to support environments. Refer to [Source control and environments](../../../../source-control-environments/setup/) for more information on how to link a Git repository to an n8n instance and configure your source control. | Variable | Type | Default | Description | | ---------------------------------------- | ------ | --------- | ----------------------------------------------------------------------------------------------------------------------------- | | `N8N_SOURCECONTROL_DEFAULT_SSH_KEY_TYPE` | String | `ed25519` | Set to `rsa` to make RSA the default SSH key type for [Source control setup](../../../../source-control-environments/setup/). 
| # SSO environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. Feature availability Single sign-on is available on Business and Enterprise plans. Refer to [Set up SSO](../../../securing/set-up-sso/) for in-app setup steps and identity provider guides. See [Manage instance settings using environment variables](../../settings-env-vars/) for how the activation pattern works. ## Activation and shared settings | Variable | Type | Default | Description | | -------------------------------- | ---------------------------------------------------------------------- | ---------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_SSO_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage SSO from environment variables. When `true`, n8n applies the SSO variables on every startup and locks the matching UI controls. | | `N8N_SSO_USER_ROLE_PROVISIONING` | Enum string: `disabled`, `instance_role`, `instance_and_project_roles` | `disabled` | How n8n provisions roles for users who sign in through SSO. `disabled` doesn't provision any roles. `instance_role` provisions the instance-level role only. `instance_and_project_roles` provisions both instance and project roles. | ## OIDC | Variable | Type | Default | Description | | --------------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------------- | | `N8N_SSO_OIDC_LOGIN_ENABLED` | Boolean | `false` | Whether to enable OIDC login. 
| | `N8N_SSO_OIDC_CLIENT_ID` | String | - | OIDC client ID issued by your identity provider. | | `N8N_SSO_OIDC_CLIENT_SECRET` | String | - | OIDC client secret issued by your identity provider. | | `N8N_SSO_OIDC_DISCOVERY_ENDPOINT` | String | - | OIDC discovery endpoint URL (the `.well-known/openid-configuration` URL for your identity provider). | | `N8N_SSO_OIDC_PROMPT` | String | - | Optional OIDC `prompt` parameter to send with the authorization request, for example `login` or `consent`. | | `N8N_SSO_OIDC_ACR_VALUES` | String | - | Optional OIDC `acr_values` parameter. Use this to request a specific authentication context, for example a step-up MFA flow. | ## SAML | Variable | Type | Default | Description | | ---------------------------- | ------- | ------- | ---------------------------------------------------------------------------------------------------------------------- | | `N8N_SSO_SAML_LOGIN_ENABLED` | Boolean | `false` | Whether to enable SAML login. | | `N8N_SSO_SAML_METADATA` | String | - | SAML identity provider metadata as an XML string. Mutually exclusive with `N8N_SSO_SAML_METADATA_URL`; don't set both. | | `N8N_SSO_SAML_METADATA_URL` | String | - | URL to fetch SAML identity provider metadata from. Mutually exclusive with `N8N_SSO_SAML_METADATA`; don't set both. | # SSRF protection environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. These variables control [SSRF protection](../../../securing/ssrf-protection/) for nodes that make HTTP requests to user-controllable targets. 
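For example, a sketch that enables protection, extends the default blocklist, and allowlists an internal domain (the extra range and hostname pattern reuse the table's own examples and are illustrative):

```shell
# Sketch: enable SSRF protection, extend the standard blocked ranges
# with one extra CIDR, and allowlist internal hostnames (values
# illustrative; allowlists take precedence over blocked ranges).
export N8N_SSRF_PROTECTION_ENABLED=true
export N8N_SSRF_BLOCKED_IP_RANGES="default,100.0.0.0/8"
export N8N_SSRF_ALLOWED_HOSTNAMES="*.n8n.internal"
```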
| Variable | Type | Default | Description | | ----------------------------- | ------- | -------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_SSRF_PROTECTION_ENABLED` | Boolean | `false` | Whether to enable SSRF protection for nodes making HTTP requests. | | `N8N_SSRF_BLOCKED_IP_RANGES` | String | Standard private/reserved ranges | Comma-separated CIDR ranges to block. Use `default` to include the [standard blocked ranges](../../../securing/ssrf-protection/#default-blocked-ranges), optionally combined with custom ranges (for example: `default,100.0.0.0/8`). | | `N8N_SSRF_ALLOWED_IP_RANGES` | String | - | Comma-separated CIDR ranges to allow. Takes precedence over the blocked ranges. | | `N8N_SSRF_ALLOWED_HOSTNAMES` | String | - | Comma-separated hostname patterns to allow. Supports wildcards (for example: `*.n8n.internal`). Takes precedence over blocked IP ranges. | | `N8N_SSRF_DNS_CACHE_MAX_SIZE` | Number | `1048576` | Maximum DNS cache size in bytes. Uses LRU eviction when the limit is reached. Default is 1 MB. | # Task runner environment variables File-based configuration Unlike the main n8n image, you CANNOT use file-based configuration for secrets in the task runner image. This means that variables with a `_FILE` suffix added will not be recognized. [Task runners](../../task-runners/) execute code defined by the [Code node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.code/). 
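As a sketch, enabling external task runner mode on the instance side might look like this, using the variables described in the next section (the token is a placeholder you must replace with a long random string):

```shell
# Sketch: external task runner mode. An external orchestrator launches
# the runner, which authenticates to the broker with the shared token
# (token value illustrative -- replace with a long random string).
export N8N_RUNNERS_ENABLED=true
export N8N_RUNNERS_MODE=external
export N8N_RUNNERS_AUTH_TOKEN="replace-with-a-long-random-string"
```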
## n8n instance environment variables | Variable | Type | Default | Description | | -------- | ---- | ------- | ----------- | | `N8N_RUNNERS_ENABLED` | Boolean | `false` | Whether to enable task runners. | | `N8N_RUNNERS_MODE` | Enum string: `internal`, `external` | `internal` | How to launch and run the task runner. `internal` means n8n will launch a task runner as a child process. `external` means an external orchestrator will launch the task runner. | | `N8N_RUNNERS_AUTH_TOKEN` | String | Random string | Shared secret used by a task runner to authenticate to n8n. Required when using `external` mode. | | `N8N_RUNNERS_BROKER_PORT` | Number | `5679` | Port the task broker listens on for task runner connections. | | `N8N_RUNNERS_BROKER_LISTEN_ADDRESS` | String | `127.0.0.1` | Address the task broker listens on. | | `N8N_RUNNERS_MAX_PAYLOAD` | Number | `1 073 741 824` | Maximum payload size in bytes for communication between a task broker and a task runner. | | `N8N_RUNNERS_MAX_OLD_SPACE_SIZE` | String | | The `--max-old-space-size` option to use for a task runner (in MB). By default, Node.js will set this based on available memory. | | `N8N_RUNNERS_MAX_CONCURRENCY` | Number | `5` | The number of concurrent tasks a task runner can execute at a time. | | `N8N_RUNNERS_TASK_TIMEOUT` | Number | `300` | The maximum time, in seconds, a task can run before the runner stops it and restarts. This value must be greater than 0. | | `N8N_RUNNERS_HEARTBEAT_INTERVAL` | Number | `30` | The interval, in seconds, at which the runner must send a heartbeat to the broker. If the runner doesn't send a heartbeat in time, the task stops and the runner restarts. This value must be greater than 0. 
| | `N8N_RUNNERS_INSECURE_MODE` | Boolean | `false` | Whether to disable all security measures in the task runner, for compatibility with modules that rely on insecure JS features. **Discouraged for production use.** | | `N8N_RUNNERS_TASK_REQUEST_TIMEOUT` | Number | `20` | How long (in seconds) a task request can wait for a runner to become available before timing out. This prevents workflows from hanging indefinitely when no runners are available. Must be greater than 0. | ## Task runner launcher environment variables | Variable | Type | Default | Description | | ---------------------------------------- | --------------------------------------------- | ----------------------- | ---------------------------------------------------------------------------------------- | | `N8N_RUNNERS_LAUNCHER_LOG_LEVEL` | Enum string: `debug`, `info`, `warn`, `error` | `info` | Which log messages to show. | | `N8N_RUNNERS_AUTH_TOKEN` | String | - | Shared secret used to authenticate to n8n. | | `N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT` | Number | `15` | The number of seconds to wait before shutting down an idle runner. | | `N8N_RUNNERS_TASK_BROKER_URI` | String | `http://127.0.0.1:5679` | The URI of the task broker server (n8n instance). | | `N8N_RUNNERS_LAUNCHER_HEALTH_CHECK_PORT` | Number | `5680` | Port for the launcher's health check server. | | `N8N_RUNNERS_MAX_PAYLOAD` | Number | `1 073 741 824` | Maximum payload size in bytes for communication between a task broker and a task runner. | | `N8N_RUNNERS_MAX_CONCURRENCY` | Number | `5` | The number of concurrent tasks a task runner can execute at a time. 
| ## Task runner environment variables (all languages) | Variable | Type | Default | Description | | ---------------------------------------- | ------ | ----------------------- | ----------------------------------------------------------------------------------------------------------- | | `N8N_RUNNERS_GRANT_TOKEN` | String | Random string | Token the runner uses to authenticate with the task broker. This is automatically provided by the launcher. | | `N8N_RUNNERS_AUTO_SHUTDOWN_TIMEOUT` | Number | `15` | The number of seconds to wait before shutting down an idle runner. | | `N8N_RUNNERS_TASK_BROKER_URI` | String | `http://127.0.0.1:5679` | The URI of the task broker server (n8n instance). | | `N8N_RUNNERS_LAUNCHER_HEALTH_CHECK_PORT` | Number | `5680` | Port for the launcher's health check server. | | `N8N_RUNNERS_MAX_PAYLOAD` | Number | `1 073 741 824` | Maximum payload size in bytes for communication between a task broker and a task runner. | | `N8N_RUNNERS_MAX_CONCURRENCY` | Number | `5` | The number of concurrent tasks a task runner can execute at a time. | ## Task runner environment variables (JavaScript) | Variable | Type | Default | Description | | -------------------------------------- | ------- | ------------------ | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `NODE_FUNCTION_ALLOW_BUILTIN` | String | - | Permit users to import specific built-in modules in the Code node. Use * to allow all. n8n disables importing modules by default. | | `NODE_FUNCTION_ALLOW_EXTERNAL` | String | - | Permit users to import specific external modules (from `n8n/node_modules`) in the Code node. n8n disables importing modules by default. | | `N8N_RUNNERS_ALLOW_PROTOTYPE_MUTATION` | Boolean | `false` | Whether to allow prototype mutation for external libraries. 
Set to `true` to allow modules that rely on runtime prototype mutation (for example, [`puppeteer`](https://pptr.dev/)) at the cost of relaxing security. | | `GENERIC_TIMEZONE` | \* | `America/New_York` | The [same default timezone as configured for the n8n instance](../timezone-localization/). | | `NODE_OPTIONS` | String | - | [Options](https://nodejs.org/api/cli.html#node_optionsoptions) for Node.js. | | `N8N_RUNNERS_MAX_OLD_SPACE_SIZE` | String | | The `--max-old-space-size` option to use for a task runner (in MB). By default, Node.js will set this based on available memory. | ## Task runner environment variables (Python) | Variable | Type | Default | Description | | ----------------------------- | ------- | ---------------------------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `N8N_RUNNERS_STDLIB_ALLOW` | String | - | Python standard library modules that you can use in the Code node, including their submodules. Use `*` to allow all stdlib modules. n8n disables all Python standard library imports by default. | | `N8N_RUNNERS_EXTERNAL_ALLOW` | String | - | Third-party Python modules that are allowed to be used in the Code node, including their submodules. Use `*` to allow all external modules. n8n disables all third-party Python modules by default. Third-party Python modules must be [included](../../task-runners/#adding-extra-dependencies) in the `n8nio/runners` image. 
| | `N8N_RUNNERS_BUILTINS_DENY` | String | `eval,exec,compile,open,input,breakpoint,getattr,object,type,vars,setattr,delattr,hasattr,dir,memoryview,__build_class__,globals,locals` | Python built-ins that you can't use in the Code node. Set to an empty string to allow all built-ins. | | `N8N_BLOCK_RUNNER_ENV_ACCESS` | Boolean | `true` | Whether to block access to the runner's environment from within Python code tasks. Set to `false` to give all Python Code node users access to the runner's environment via `os.environ`. For security reasons, environment variable access is blocked by default. | # Timezone and localization environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. | Variable | Type | Default | Description | | --- | --- | --- | --- | | `GENERIC_TIMEZONE` | \* | `America/New_York` | The n8n instance timezone. Important for schedule nodes (such as Cron). | | `N8N_DEFAULT_LOCALE` | String | `en` | A locale identifier, compatible with the [Accept-Language header](https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Language). n8n doesn't support regional identifiers, such as `de-AT`. When running in a locale other than the default, n8n displays UI strings in the selected locale, and falls back to `en` for any untranslated strings. 
| # User management, SMTP, and two-factor authentication environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. Refer to [User management](../../user-management-self-hosted/) for more information on setting up user management and emails. | Variable | Type | Default | Description | | --- | --- | --- | --- | | `N8N_EMAIL_MODE` | String | `smtp` | Enable emails. | | `N8N_SMTP_HOST` | String | - | *your_SMTP_server_name* | | `N8N_SMTP_PORT` | Number | - | *your_SMTP_server_port* | | `N8N_SMTP_USER` | String | - | *your_SMTP_username* | | `N8N_SMTP_PASS` | String | - | *your_SMTP_password* | | `N8N_SMTP_OAUTH_SERVICE_CLIENT` | String | - | If using 2LO with a service account, this is your client ID. | | `N8N_SMTP_OAUTH_PRIVATE_KEY` | String | - | If using 2LO with a service account, this is your private key. | | `N8N_SMTP_SENDER` | String | - | Sender email address. You can optionally include the sender name. Example with name: *n8n ``* | | `N8N_SMTP_SSL` | Boolean | `true` | Whether to use SSL for SMTP (true) or not (false). | | `N8N_SMTP_STARTTLS` | Boolean | `true` | Whether to use STARTTLS for SMTP (true) or not (false). | | `N8N_UM_EMAIL_TEMPLATES_INVITE` | String | - | Full path to your HTML email template. This overrides the default template for invite emails. | | `N8N_UM_EMAIL_TEMPLATES_PWRESET` | String | - | Full path to your HTML email template. This overrides the default template for password reset emails. 
| | `N8N_UM_EMAIL_TEMPLATES_WORKFLOW_SHARED` | String | - | Overrides the default HTML template for notifying users that a workflow was shared. Provide the full path to the template. | | `N8N_UM_EMAIL_TEMPLATES_CREDENTIALS_SHARED` | String | - | Overrides the default HTML template for notifying users that a credential was shared. Provide the full path to the template. | | `N8N_UM_EMAIL_TEMPLATES_PROJECT_SHARED` | String | - | Overrides the default HTML template for notifying users that a project was shared. Provide the full path to the template. | | `N8N_USER_MANAGEMENT_JWT_SECRET` | String | - | Set a specific JWT secret. By default, n8n generates one on start. | | `N8N_USER_MANAGEMENT_JWT_DURATION_HOURS` | Number | 168 | Set an expiration date for the JWTs in hours. | | `N8N_USER_MANAGEMENT_JWT_REFRESH_TIMEOUT_HOURS` | Number | 0 | How many hours before the JWT expires to automatically refresh it. 0 means 25% of `N8N_USER_MANAGEMENT_JWT_DURATION_HOURS`. -1 means it will never refresh, which forces users to log in again after the period defined in `N8N_USER_MANAGEMENT_JWT_DURATION_HOURS`. | | `N8N_MFA_ENABLED` | Boolean | `true` | Whether to enable two-factor authentication (true) or disable (false). n8n ignores this if existing users have 2FA enabled. | | `N8N_INVITE_LINKS_EMAIL_ONLY` | Boolean | `false` | When set to true, n8n will only deliver invite links via email and will not expose them through the API. This option enhances security by preventing invite URLs from being accessible programmatically or to highly privileged users. | ## Instance owner using environment variables Set `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` to `true` to pre-provision the instance owner from environment variables. See [Manage instance settings using environment variables](../../settings-env-vars/) for how the activation pattern works. 
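As a sketch, pre-provisioning the owner might look like this. All values are placeholders, and the password hash must be a real bcrypt hash (a plaintext value breaks login):

```shell
# Pre-provision the instance owner from the environment (placeholder values).
export N8N_INSTANCE_OWNER_MANAGED_BY_ENV=true
export N8N_INSTANCE_OWNER_EMAIL="owner@example.com"
export N8N_INSTANCE_OWNER_FIRST_NAME="Jane"
export N8N_INSTANCE_OWNER_LAST_NAME="Doe"
# Must be a bcrypt hash of the password, never the plaintext password itself.
export N8N_INSTANCE_OWNER_PASSWORD_HASH='$2a$10$...'
```

When `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` is `true`, n8n reapplies these details on every startup, so changing the owner means changing the environment, not the UI.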
| Variable | Type | Default | Description | | ----------------------------------- | ------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_INSTANCE_OWNER_MANAGED_BY_ENV` | Boolean | `false` | Set to `true` to manage the instance owner from environment variables. When `true`, n8n overwrites the instance owner details below on every startup, locks the UI control for that user, and rejects API writes. | | `N8N_INSTANCE_OWNER_EMAIL` | String | - | Email address for the instance owner. | | `N8N_INSTANCE_OWNER_FIRST_NAME` | String | - | First name for the instance owner. | | `N8N_INSTANCE_OWNER_LAST_NAME` | String | - | Last name for the instance owner. | | `N8N_INSTANCE_OWNER_PASSWORD_HASH` | String | - | Bcrypt hash of the instance owner's password. Setting a plaintext password breaks login. | # Workflow history environment variables | Variable | Type | Default | Description | | --------------------------------- | ------ | ------- | ---------------------------------------------------------------------------------------------------------------------------------------- | | `N8N_WORKFLOW_HISTORY_PRUNE_TIME` | Number | `-1` | How long to keep workflow history versions before automatically deleting them (in hours). Set to `-1` to keep all versions indefinitely. | # Workflows environment variables File-based configuration You can add `_FILE` to individual variables to provide their configuration in a separate file. Refer to [Keeping sensitive data in separate files](../../configuration-methods/#keeping-sensitive-data-in-separate-files) for more details. 
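As an example of the `_FILE` pattern described in the note above, a secret can be supplied from a file instead of a plain environment variable. The variable chosen here, `N8N_SMTP_PASS`, is just an illustration, and the temporary file path is an assumption:

```shell
# Write a secret to a file and point n8n at it via the _FILE suffix.
secret_file="$(mktemp)"
printf 's3cret-password' > "$secret_file"
export N8N_SMTP_PASS_FILE="$secret_file"
```

n8n reads the file's contents as the value of `N8N_SMTP_PASS`, which keeps the secret itself out of the process environment.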
| Variable | Type | Default | Description | | ------------------------------------------- | ------- | ------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------ | | `N8N_ONBOARDING_FLOW_DISABLED` | Boolean | `false` | Whether to disable onboarding tips when creating a new workflow (true) or not (false). | | `N8N_WORKFLOW_ACTIVATION_BATCH_SIZE` | Number | `1` | How many workflows to publish simultaneously during startup. | | `N8N_WORKFLOW_CALLER_POLICY_DEFAULT_OPTION` | String | `workflowsFromSameOwner` | Which workflows can call a workflow. Options are: `any`, `none`, `workflowsFromAList`, `workflowsFromSameOwner`. This feature requires [Workflow sharing](../../../../workflows/sharing/). | | `N8N_WORKFLOW_TAGS_DISABLED` | Boolean | `false` | Whether to disable workflow tags (true) or enable tags (false). | | `WORKFLOWS_DEFAULT_NAME` | String | `My workflow` | The default name used for new workflows. | # Docker Installation n8n recommends using [Docker](https://www.docker.com/) for most self-hosting needs. It provides a clean, isolated environment, avoids operating system and tooling incompatibilities, and makes database and environment management simpler. You can also use n8n in Docker with [Docker Compose](../server-setups/docker-compose/). You can find Docker Compose configurations for various architectures in the [n8n-hosting repository](https://github.com/n8n-io/n8n-hosting). Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. 
If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). You can also follow along with our video guide. ## Prerequisites Before proceeding, install Docker: - [Docker Desktop](https://docs.docker.com/get-docker/) is available for Mac, Windows, and Linux. Docker Desktop includes the Docker Engine and Docker Compose. - [Docker Engine](https://docs.docker.com/engine/install/) and [Docker Compose](https://docs.docker.com/compose/install/linux/) are also available as separate packages for Linux. Use this for Linux machines without a graphical environment or when you don't want the Docker Desktop UI. Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Starting n8n From your terminal, run the following commands, replacing the `<YOUR_TIMEZONE>` placeholders with [your timezone](https://en.wikipedia.org/wiki/List_of_tz_database_time_zones#List): ``` docker volume create n8n_data docker run -it --rm \ --name n8n \ -p 5678:5678 \ -e GENERIC_TIMEZONE="<YOUR_TIMEZONE>" \ -e TZ="<YOUR_TIMEZONE>" \ -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true \ -e N8N_RUNNERS_ENABLED=true \ -v n8n_data:/home/node/.n8n \ docker.n8n.io/n8nio/n8n ``` This command creates a volume to store persistent data, downloads the required n8n image, and starts the container with the following settings: - Maps and exposes port `5678` on the host. - Sets the timezone for the container: - the `TZ` environment variable sets the system timezone to control what scripts and commands like `date` return. 
- the [`GENERIC_TIMEZONE` environment variable](../../configuration/environment-variables/timezone-localization/) sets the correct timezone for schedule-oriented nodes like the [Schedule Trigger node](../../../integrations/builtin/core-nodes/n8n-nodes-base.scheduletrigger/). - Enforces secure file permissions for the n8n configuration file. - Enables [task runners](../../configuration/task-runners/), the recommended way of executing tasks in n8n. - Mounts the `n8n_data` volume to the `/home/node/.n8n` directory to persist your data across container restarts. Once running, you can access n8n by opening http://localhost:5678 in your browser. ## Using with PostgreSQL By default, n8n uses SQLite to save [credentials](../../../glossary/#credential-n8n), past executions, and workflows. n8n also supports PostgreSQL, configurable using environment variables as detailed below. Persisting the `.n8n` directory still recommended When using PostgreSQL, n8n doesn't need to use the `.n8n` directory for the SQLite database file. However, the directory still contains other important data like encryption keys, instance logs, and source control feature assets. While you can work around some of these requirements (for example, by setting the [`N8N_ENCRYPTION_KEY` environment variable](../../configuration/environment-variables/deployment/)), it's best to continue mapping a persistent volume for the directory to avoid potential issues. 
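If you do choose to set `N8N_ENCRYPTION_KEY` yourself, one way to generate a suitable value is sketched below (this assumes `openssl` is available on your machine; any sufficiently long random string works):

```shell
# Generate a random 64-character hex string to use as the n8n encryption key.
N8N_ENCRYPTION_KEY="$(openssl rand -hex 32)"
export N8N_ENCRYPTION_KEY
```

You would then pass it to the container with an extra `-e N8N_ENCRYPTION_KEY` flag. Store the value safely: credentials encrypted under one key can't be decrypted without it.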
To use n8n with PostgreSQL, execute the following commands, replacing the placeholders (depicted within angled brackets, for example `<POSTGRES_USER>`) with your actual values: ``` docker volume create n8n_data docker run -it --rm \ --name n8n \ -p 5678:5678 \ -e GENERIC_TIMEZONE="<YOUR_TIMEZONE>" \ -e TZ="<YOUR_TIMEZONE>" \ -e N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true \ -e N8N_RUNNERS_ENABLED=true \ -e DB_TYPE=postgresdb \ -e DB_POSTGRESDB_DATABASE=<POSTGRES_DATABASE> \ -e DB_POSTGRESDB_HOST=<POSTGRES_HOST> \ -e DB_POSTGRESDB_PORT=<POSTGRES_PORT> \ -e DB_POSTGRESDB_USER=<POSTGRES_USER> \ -e DB_POSTGRESDB_SCHEMA=<POSTGRES_SCHEMA> \ -e DB_POSTGRESDB_PASSWORD=<POSTGRES_PASSWORD> \ -v n8n_data:/home/node/.n8n \ docker.n8n.io/n8nio/n8n ``` You can find a complete `docker-compose` file for PostgreSQL in the [n8n hosting repository](https://github.com/n8n-io/n8n-hosting/tree/main/docker-compose/withPostgres). ## Updating To update n8n, in Docker Desktop, navigate to the **Images** tab and select **Pull** from the context menu to download the latest n8n image. You can also use the command line to pull the latest, or a specific version: ``` # Pull latest (stable) version docker pull docker.n8n.io/n8nio/n8n # Pull specific version docker pull docker.n8n.io/n8nio/n8n:1.81.0 # Pull next (unstable) version docker pull docker.n8n.io/n8nio/n8n:next ``` After pulling the updated image, stop your n8n container and start it again. You can also use the command line. 
Replace `<container_id>` in the commands below with the container ID you find in the first command: ``` # Find your container ID docker ps -a # Stop the container with the `<container_id>` docker stop <container_id> # Remove the container with the `<container_id>` docker rm <container_id> # Start the container docker run --name=<container_name> [options] -d docker.n8n.io/n8nio/n8n ``` ### Updating Docker Compose If you run n8n using a Docker Compose file, follow these steps to update n8n: ``` # Navigate to the directory containing your docker compose file cd <path-to-docker-compose-directory> # Pull latest version docker compose pull # Stop and remove older version docker compose down # Start the container docker compose up -d ``` ## n8n with tunnel Danger Use this for local development and testing. It isn't safe to use it in production. Development tooling The tunnel feature is a convenience tool for local development. The underlying implementation may change between n8n versions. To use webhooks for trigger nodes of external services like GitHub, n8n has to be reachable from the web. n8n provides a tunnel service using [cloudflared](https://github.com/cloudflare/cloudflared) that redirects requests from the web to your local n8n instance. Docker must be installed for the tunnel to work. There are two ways to use the tunnel, depending on how you run n8n: ### Full stack This runs n8n and cloudflared together in containers. The tunnel URL prints on startup and everything is wired automatically: ``` pnpm stack --tunnel ``` ### Services only If you prefer to run n8n locally with `pnpm dev` or `pnpm start`, you can start cloudflared as a standalone service: ``` # Terminal 1: Start the cloudflared tunnel service pnpm --filter n8n-containers services --services cloudflared # Terminal 2: Start n8n locally pnpm dev ``` The `services` command: 1. Starts cloudflared pointing at `host.docker.internal:5678` (your local n8n). 1. Fetches the public tunnel URL from cloudflared's metrics endpoint. 1. Writes a `.env` file to `packages/cli/bin/.env` with `WEBHOOK_URL` and `N8N_PROXY_HOPS=1`. 1. 
`pnpm dev` and `pnpm start` pick up that `.env` automatically via dotenv. Clean up when done: ``` pnpm --filter n8n-containers services:clean ``` ## Next steps - Find more information about Docker setup in the README file for the [Docker image](https://github.com/n8n-io/n8n/tree/master/docker/images/n8n). - Learn more about [configuring](../../configuration/environment-variables/) and [scaling](../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../try-it-out/). # npm npm is a quick way to get started with n8n on your local machine. You must have [Node.js](https://nodejs.org/en/) installed. n8n requires a Node.js version between 20.19 and 24.x, inclusive. Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Try n8n with npx You can try n8n without installing it using npx. From the terminal, run: ``` npx n8n ``` This command will download everything that's needed to start n8n. You can then access n8n and start building workflows by opening http://localhost:5678 in your browser. ## Install globally with npm To install n8n globally, use npm: ``` npm install n8n -g ``` To install or update to a specific version of n8n, use the `@` syntax to specify the version. For example: ``` npm install -g n8n@0.126.1 ``` To install `next`: ``` npm install -g n8n@next ``` After the installation, start n8n by running: ``` n8n # or n8n start ``` ### Next steps Try out n8n using the [Quickstarts](../../../try-it-out/). ## Updating To update your n8n instance to the `latest` version, run: ``` npm update -g n8n ``` To install the `next` version: ``` npm install -g n8n@next ``` ## n8n with tunnel Danger Use this for local development and testing. It isn't safe to use it in production. 
Development tooling The tunnel feature is a convenience tool for local development. The underlying implementation may change between n8n versions. To use webhooks for trigger nodes of external services like GitHub, n8n has to be reachable from the web. n8n provides a tunnel service using [cloudflared](https://github.com/cloudflare/cloudflared) that redirects requests from the web to your local n8n instance. Docker required The tunnel uses cloudflared, which runs as a Docker container. Make sure [Docker](https://docs.docker.com/get-docker/) is installed on your machine, even when running n8n via npm. For npm installations, use the **services only** approach. Start cloudflared as a standalone service, then run n8n locally: ``` # Terminal 1: Start the cloudflared tunnel service pnpm --filter n8n-containers services --services cloudflared # Terminal 2: Start n8n locally pnpm dev ``` The `services` command starts cloudflared, fetches the public tunnel URL, and writes a `.env` file to `packages/cli/bin/.env` with `WEBHOOK_URL` and `N8N_PROXY_HOPS=1`. n8n picks up this `.env` automatically on startup. Clean up when done: ``` pnpm --filter n8n-containers services:clean ``` For the full stack approach (n8n and cloudflared both in containers), refer to the [Docker tunnel setup](../docker/#n8n-with-tunnel). ## Reverting an upgrade Install the older version that you want to go back to. If the upgrade involved a database migration: 1. Check the feature documentation and release notes to see if there are any manual changes you need to make. 1. Run `n8n db:revert` on your current version to roll back the database. If you want to revert more than one database migration, you need to repeat this process. ## Windows troubleshooting If you are experiencing issues running n8n on Windows, make sure your Node.js environment is correctly set up. 
Follow Microsoft's guide to [Install NodeJS on Windows](https://learn.microsoft.com/en-us/windows/dev-environment/javascript/nodejs-on-windows). # Update self-hosted n8n It's important to keep your n8n version up to date. This ensures you get the latest features and fixes. Some tips when updating: - Update frequently: this avoids having to jump multiple versions at once, reducing the risk of a disruptive update. Try to update at least once a month. - Check the [Release notes](../../../release-notes/) for breaking changes. - Use [Environments](../../../source-control-environments/) to create a test version of your instance. Test the update there first. For instructions on how to update, refer to the documentation for your installation method: - [Installed with npm](../npm/#updating) - [Installed with Docker](../docker/#updating) # Server setups Self-host with Docker Compose: - [Digital Ocean](digital-ocean/) - [Heroku](heroku/) - [Hetzner Cloud](hetzner/) Self-host with Google Cloud Run (with access to n8n workflow tools for Google Workspace, e.g. Gmail, Drive): - [Google Cloud Run](google-cloud-run/) Starting points for a Kubernetes setup: - [AWS](aws/) - [Azure](azure/) - [Google Kubernetes Engine (GKE)](google-kubernetes-engine/) Configuration guides to help you get started on other platforms: - [Docker Compose](docker-compose/) # Hosting n8n on Amazon Web Services This hosting guide shows you how to self-host n8n with Amazon Web Services (AWS). It uses n8n with Postgres as a database backend using Kubernetes to manage the necessary resources and reverse proxy. ## Hosting options AWS offers several services suitable for hosting n8n, including EC2 (virtual machines) and EKS (containers running with Kubernetes). This guide uses [EKS](https://aws.amazon.com/eks/) as the hosting option. Using Kubernetes adds some complexity and configuration, but is the best method for scaling n8n as demand changes. 
## Prerequisites The steps in this guide use a mix of the AWS UI and [the eksctl CLI tool for EKS](https://eksctl.io). While not mentioned in the documentation for eksctl, you also need to [install the AWS CLI tool](https://docs.aws.amazon.com/cli/latest/userguide/getting-started-install.html), and [configure authentication of the tool](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-quickstart.html). Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Create a cluster Use the eksctl tool to create a cluster, specifying a name and a region with the following command: ``` eksctl create cluster --name n8n --region <your-region> ``` Creating the cluster can take a while. Once the cluster is created, eksctl automatically sets the kubectl context to the cluster. ## Clone configuration repository Kubernetes and n8n require a series of configuration files. You can clone these from [this repository](https://github.com/n8n-io/n8n-hosting). The following steps tell you what each file does, and what settings you need to change. 
Clone the repository with the following command: ``` git clone https://github.com/n8n-io/n8n-hosting.git ``` And change directory: ``` cd n8n-hosting/kubernetes ``` ## Configure Postgres For larger scale n8n deployments, Postgres provides a more robust database backend than SQLite. ### Configure volume for persistent storage To maintain data between pod restarts, the Postgres deployment needs a persistent volume. The default AWS storage class, [gp3](https://docs.aws.amazon.com/ebs/latest/userguide/general-purpose.html#gp3-ebs-volume-type), is suitable for this purpose. This is defined in the `postgres-claim0-persistentvolumeclaim.yaml` manifest. ``` … spec: storageClassName: gp3 accessModes: - ReadWriteOnce … ``` ### Postgres environment variables Postgres needs some environment variables set to pass to the application running in the containers. The example `postgres-secret.yaml` file contains placeholders you need to replace with values of your own for user details and the database to use. PostgreSQL uses a root user (`POSTGRES_USER`) for setup and administration, but it's best practice to create a separate non-root user (`POSTGRES_NON_ROOT_USER`) for n8n. The root user has full control, while n8n only needs the non-root user permissions to run. Configuring both improves security and helps prevent accidental changes to the database system. The `postgres-deployment.yaml` manifest then uses the values from this manifest file to send to the application pods. ## Configure n8n ### Create a volume for file storage While not essential for running n8n, using persistent volumes helps retain files uploaded while using n8n, and lets you persist [manual n8n encryption keys](../../../configuration/environment-variables/deployment/) between restarts, as n8n saves a file containing the key into file storage during startup. 
The `n8n-claim0-persistentvolumeclaim.yaml` manifest creates this, and the n8n Deployment mounts that claim in the `volumes` section of the `n8n-deployment.yaml` manifest. ``` … volumes: - name: n8n-claim0 persistentVolumeClaim: claimName: n8n-claim0 … ``` ### Pod resources [Kubernetes](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) lets you specify the minimum resources application containers need and the limits they can consume. The example YAML files cloned above contain the following in the `resources` section of the `n8n-deployment.yaml` file: ``` … resources: requests: memory: "250Mi" limits: memory: "500Mi" … ``` This defines a minimum of 250mb per container, a maximum of 500mb, and lets Kubernetes handle CPU. You can change these values to match your own needs. As a guide, here are the resources values for the n8n cloud offerings: - **Start**: 320mb RAM, 10 millicore CPU burstable - **Pro (10k executions)**: 640mb RAM, 20 millicore CPU burstable - **Pro (50k executions)**: 1280mb RAM, 80 millicore CPU burstable ### Optional: Environment variables You can configure n8n settings and behaviors using environment variables. Create an `n8n-secret.yaml` file. Refer to [Environment variables](../../../configuration/environment-variables/) for n8n environment variables details. ## Deployments The two deployment manifests (`n8n-deployment.yaml` and `postgres-deployment.yaml`) define the n8n and Postgres applications to Kubernetes. The manifests define the following: - Send the environment variables defined to each application pod - Define the container image to use - Set resource consumption limits - Mount the `volumes` defined earlier using `volumeMounts` to set the path in the container where volumes mount - Set scaling and restart policies The example manifests define one instance of each pod. You should change this to meet your needs. 
## Services The two service manifests (`postgres-service.yaml` and `n8n-service.yaml`) expose the services to the outside world through the Kubernetes load balancer, on ports 5432 and 5678 respectively by default. ## Send to Kubernetes cluster Send all the manifests to the cluster by running the following command in the `n8n-hosting/kubernetes` directory: ``` kubectl apply -f . ``` Namespace error You may see an error message about not finding an "n8n" namespace as that resource isn't ready yet. You can run the same command again, or apply the namespace manifest first with the following command: ``` kubectl apply -f namespace.yaml ``` ## Set up DNS n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the static address of the instance. To find the address of the n8n service running on the instance: 1. Open the **Clusters** section of the **Amazon Elastic Kubernetes Service** page in the AWS console. 1. Select the name of the cluster to open its configuration page. 1. Select the **Resources** tab, then **Service and networking** > **Services**. 1. Select the **n8n** service and copy the **Load balancer URLs** value. Use this value suffixed with the n8n service port (5678) for DNS. Use HTTP This guide uses HTTP connections for the services it defines, for example in `n8n-deployment.yaml`. However, if you click the **Load balancer URLs** value, EKS takes you to an "HTTPS" URL, which results in an error. To solve this, when you open the n8n subdomain, make sure to use HTTP. ## Delete resources If you need to delete the setup, you can remove the resources created by the manifests with the following command: ``` kubectl delete -f . ``` ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/).
# Hosting n8n on Azure This hosting guide shows you how to self-host n8n on Azure. It uses n8n with Postgres as a database backend, using Kubernetes to manage the necessary resources and reverse proxy. ## Prerequisites You need the [Azure command line tool](https://learn.microsoft.com/en-us/cli/azure/install-azure-cli). Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Hosting options Azure offers several services suitable for hosting n8n, including Azure Container Instances (optimized for running containers), Linux Virtual Machines, and Azure Kubernetes Service (containers running with Kubernetes). This guide uses Azure Kubernetes Service (AKS) as the hosting option. Using Kubernetes adds some complexity and configuration, but is the best method for scaling n8n as demand changes. The steps in this guide use a mix of the Azure UI and command line tool, but you can use either to accomplish most tasks. ## Open the Azure Kubernetes Service From [the Azure portal](https://portal.azure.com/), select **Kubernetes services**. ## Create a cluster From the Kubernetes services page, select **Create** > **Create a Kubernetes cluster**. You can select any of the configuration options that suit your needs, then select **Create** when done.
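If you prefer the command line, cluster creation can be sketched with the Azure CLI as follows. The resource group name, cluster name, location, and node count are placeholders, not values from this guide; check `az aks create --help` for the full set of options:

```
# Create a resource group to hold the cluster (name and location are examples)
az group create --name n8n-group --location eastus

# Create a basic AKS cluster, generating SSH keys if none exist
az aks create --resource-group n8n-group --name n8n-cluster --node-count 2 --generate-ssh-keys
```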
## Set Kubectl context The remainder of the steps in this guide require you to set the Azure instance as the Kubectl context. You can find the connection details for a cluster instance by opening its details page and selecting the **Connect** button. The resulting code snippets show the commands to paste into a terminal and run to change your local Kubernetes settings to use the new cluster. ## Clone configuration repository Kubernetes and n8n require a series of configuration files. You can clone these from [this repository](https://github.com/n8n-io/n8n-hosting). The following steps tell you which file configures what, and what you need to change. Clone the repository with the following command: ``` git clone https://github.com/n8n-io/n8n-hosting.git ``` And change directory: ``` cd n8n-hosting/kubernetes ``` ## Configure Postgres For larger scale n8n deployments, Postgres provides a more robust database backend than SQLite. ### Configure volume for persistent storage To maintain data between pod restarts, the Postgres deployment needs a persistent volume. The default storage class is suitable for this purpose and is defined in the `postgres-claim0-persistentvolumeclaim.yaml` manifest. Specialized storage classes If you have specialized or higher requirements for storage classes, [read more on the options Azure offers in the documentation](https://learn.microsoft.com/en-us/azure/aks/concepts-storage#storage-classes). ### Postgres environment variables Postgres needs some environment variables set to pass to the application running in the containers. The example `postgres-secret.yaml` file contains placeholders you need to replace with your own values. Postgres will use these details when creating the database. The `postgres-deployment.yaml` manifest then reads the values from this file and passes them to the application pods.
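As a sketch, a filled-in `postgres-secret.yaml` might look like the following, assuming the example file is a standard Kubernetes Secret using `stringData`. Key names beyond `POSTGRES_USER` and `POSTGRES_NON_ROOT_USER` are illustrative; mirror whatever placeholders the example file actually contains:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: postgres-secret
type: Opaque
stringData:
  POSTGRES_USER: postgres              # root user for setup and administration
  POSTGRES_PASSWORD: change-me         # example value, replace with your own
  POSTGRES_DB: n8n                     # the database n8n connects to
  POSTGRES_NON_ROOT_USER: n8n-user     # non-root user that n8n runs as
  POSTGRES_NON_ROOT_PASSWORD: change-me-too
```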
## Configure n8n ### Create a volume for file storage While not essential for running n8n, persistent volumes are required for the following: - Using nodes that interact with files, such as the binary data node. - Persisting [manual n8n encryption keys](../../../configuration/environment-variables/deployment/) between restarts: during startup, n8n saves a file containing the key into file storage. The `n8n-claim0-persistentvolumeclaim.yaml` manifest creates this, and the n8n Deployment mounts that claim in the `volumes` section of the `n8n-deployment.yaml` manifest. ``` … volumes: - name: n8n-claim0 persistentVolumeClaim: claimName: n8n-claim0 … ``` ### Pod resources [Kubernetes lets you](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) optionally specify the minimum resources application containers need and the limits they can consume. The example YAML files cloned above contain the following in the `resources` section of the `n8n-deployment.yaml` file: ``` … resources: requests: memory: "250Mi" limits: memory: "500Mi" … ``` This defines a minimum of 250MiB of memory per container and a maximum of 500MiB, and lets Kubernetes handle CPU. You can change these values to match your own needs. As a guide, here are the resource values for the n8n cloud offerings: - **Start**: 320MB RAM, 10 millicore CPU burstable - **Pro (10k executions)**: 640MB RAM, 20 millicore CPU burstable - **Pro (50k executions)**: 1280MB RAM, 80 millicore CPU burstable ### Optional: Environment variables You can configure n8n settings and behaviors using environment variables. Create an `n8n-secret.yaml` file. Refer to [Environment variables](../../../configuration/environment-variables/) for details of n8n's environment variables. ## Deployments The two deployment manifests (`n8n-deployment.yaml` and `postgres-deployment.yaml`) define the n8n and Postgres applications to Kubernetes.
The manifests define the following: - The environment variables to send to each application pod - The container image to use - Resource consumption limits, set with the `resources` object - The `volumes` defined earlier, and the `volumeMounts` that set the paths where the containers mount them - Scaling and restart policies The example manifests define one instance of each pod. You should change this to meet your needs. ## Services The two service manifests (`postgres-service.yaml` and `n8n-service.yaml`) expose the services to the outside world through the Kubernetes load balancer, on ports 5432 and 5678 respectively. ## Send to Kubernetes cluster Send all the manifests to the cluster with the following command: ``` kubectl apply -f . ``` Namespace error You may see an error message about not finding an "n8n" namespace as that resource isn't ready yet. You can run the same command again, or apply the namespace manifest first with the following command: ``` kubectl apply -f namespace.yaml ``` ## Set up DNS n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the n8n service. Find the IP address of the n8n service from the **Services & ingresses** menu item of the cluster you want to use, under the **External IP** column. You need to add the n8n port, "5678", to the URL. Static IP addresses with AKS [Read this tutorial](https://learn.microsoft.com/en-us/azure/aks/static-ip) for more details on how to use a static IP address with AKS. ## Delete resources Remove the resources created by the manifests with the following command: ``` kubectl delete -f . ``` ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). # Hosting n8n on DigitalOcean This hosting guide shows you how to self-host n8n on a DigitalOcean droplet.
It uses: - [Caddy](https://caddyserver.com) (a reverse proxy) to allow access to the Droplet from the internet. Caddy will also automatically create and manage SSL / TLS certificates for your n8n instance. - [Docker Compose](https://docs.docker.com/compose/) to create and define the application components and how they work together. Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Create a Droplet 1. [Log in](https://cloud.digitalocean.com/login) to DigitalOcean. 1. Select the project to host the Droplet, or [create a new project](https://docs.digitalocean.com/products/projects/how-to/create/). 1. In your project, select **Droplets** from the **Manage** menu. 1. [Create a new Droplet](https://docs.digitalocean.com/products/droplets/how-to/create/) using the [Docker image](https://marketplace.digitalocean.com/apps/docker) available on the **Marketplace** tab. Droplet resources When creating the Droplet, DigitalOcean asks you to choose a plan. For most usage levels, a basic shared CPU plan is enough. SSH key or Password DigitalOcean lets you choose between SSH key and password-based authentication. SSH keys are considered more secure. 
## Log in to your Droplet and create new user The rest of this guide requires you to log in to the Droplet using a terminal with SSH. Refer to [How to Connect to Droplets with SSH](https://docs.digitalocean.com/products/droplets/how-to/connect-with-ssh/) for more information. You should create a new user to avoid working as the root user: 1. Log in as root. 1. Create a new user, substituting `<username>` with a username of your choice: ``` adduser <username> ``` 1. Follow the prompts in the CLI to finish creating the user. 1. Grant the new user administrative privileges: ``` usermod -aG sudo <username> ``` You can now run commands with superuser privileges by using `sudo` before the command. 1. Follow the steps to set up SSH for the new user: [Add Public Key Authentication](https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-14-04#step-four-add-public-key-authentication-recommended). 1. Log out of the droplet. 1. Log in using SSH as the new user. ## Clone configuration repository Docker Compose, n8n, and Caddy require a series of folders and configuration files. You can clone these from [this repository](https://github.com/n8n-io/n8n-docker-caddy) into the home folder of the logged-in user on your Droplet. The following steps will tell you which file to change and what changes to make. Clone the repository with the following command: ``` git clone https://github.com/n8n-io/n8n-docker-caddy.git ``` And change directory to the root of the repository you cloned: ``` cd n8n-docker-caddy ``` ## Default folders and files The repository contains two folders that the host operating system (the DigitalOcean Droplet) makes available to the Docker containers. The two folders are: - `caddy_config`: Holds the Caddy configuration files. - `local_files`: A folder for files you upload or add using n8n.
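The two folders come with the cloned repository, but you can confirm they exist (and create them with your user's ownership if they're missing) before starting the stack. A small optional check, run from the repository root:

```shell
# The cloned repository should already include these folders; create them if
# missing so Docker Compose can bind-mount them into the containers
mkdir -p caddy_config local_files

# Confirm both folders exist
ls -d caddy_config local_files
```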
### Create Docker volumes To persist the Caddy cache between restarts and speed up start times, create [a Docker volume](https://docs.docker.com/storage/volumes/) that Docker reuses between restarts: ``` sudo docker volume create caddy_data ``` Create a Docker volume for the n8n data: ``` sudo docker volume create n8n_data ``` ## Set up DNS n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the Droplet. The exact steps for this depend on your DNS provider, but typically you need to create a new "A" record for the n8n subdomain. DigitalOcean provides [An Introduction to DNS Terminology, Components, and Concepts](https://www.digitalocean.com/community/tutorials/an-introduction-to-dns-terminology-components-and-concepts). ## Open ports n8n runs as a web application, so the Droplet needs to allow incoming traffic on port 80 for non-secure traffic, and port 443 for secure traffic. Open these ports in the Droplet's firewall by running the following two commands: ``` sudo ufw allow 80 sudo ufw allow 443 ``` ## Configure n8n n8n needs some environment variables set to pass to the application running in the Docker container. The example `.env` file contains placeholders you need to replace with values of your own. Open the file with the following command: ``` nano .env ``` The file contains inline comments to help you know what to change. Refer to [Environment variables](../../../configuration/environment-variables/) for n8n environment variables details. ## The Docker Compose file The Docker Compose file (`docker-compose.yml`) defines the services the application needs, in this case Caddy and n8n. - The Caddy service definition defines the ports it uses and the local volumes to copy to the containers.
- The n8n service definition defines the ports it uses, the environment variables n8n needs to run (some defined in the `.env` file), and the volumes it needs to copy to the containers. The Docker Compose file uses the environment variables set in the `.env` file, so you shouldn't need to change its contents, but to take a look, run the following command: ``` nano docker-compose.yml ``` ## Configure Caddy Caddy needs to know which domains it should serve, and which port to expose to the outside world. Edit the `Caddyfile` file in the `caddy_config` folder. ``` nano caddy_config/Caddyfile ``` Change the placeholder domain to yours. If you followed the steps to name the subdomain n8n, your full domain is similar to `n8n.example.com`. The `n8n` in the `reverse_proxy` setting tells Caddy to use the service definition defined in the `docker-compose.yml` file: ``` n8n.<domain>.<suffix> { reverse_proxy n8n:5678 { flush_interval -1 } } ``` If you were to use `automate.example.com`, your `Caddyfile` may look something like: ``` automate.example.com { reverse_proxy n8n:5678 { flush_interval -1 } } ``` ## Start Docker Compose Start n8n and Caddy with the following command: ``` sudo docker compose up -d ``` This may take a few minutes. ## Test your setup In your browser, open the URL formed of the subdomain and domain name defined earlier. You should see the n8n screen prompting you to set up an owner account, after which you can access n8n. ## Stop n8n and Caddy You can stop n8n and Caddy with the following command: ``` sudo docker compose stop ``` ## Updating If you run n8n using a Docker Compose file, follow these steps to update n8n: ``` # Navigate to the directory containing your docker compose file cd <path-to-docker-compose-directory> # Pull latest version docker compose pull # Stop and remove older version docker compose down # Start the container docker compose up -d ``` ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n.
- Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). # Docker-Compose These instructions cover how to run n8n on a Linux server using Docker Compose. If you have already installed Docker and Docker-Compose, then you can start with [step 3](#3-dns-setup). You can find Docker Compose configurations for various architectures in the [n8n-hosting repository](https://github.com/n8n-io/n8n-hosting). Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## 1. Install Docker and Docker Compose The way that you install Docker and Docker Compose depends on your Linux distribution. You can find specific instructions for each component in the links below: - [Docker Engine](https://docs.docker.com/engine/install/) - [Docker Compose](https://docs.docker.com/compose/install/linux/) After following the installation instructions, verify that Docker and Docker Compose are available by typing: ``` docker --version docker compose version ``` ## 2. Optional: Non-root user access You can optionally grant access to run Docker without the `sudo` command. 
To grant access to the user that you're currently logged in with (assuming they have `sudo` access), run: ``` sudo usermod -aG docker ${USER} # Register the `docker` group membership with current session without changing your primary group exec sg docker newgrp ``` To grant access to a different user, type the following, substituting `<username>` with the appropriate username: ``` sudo usermod -aG docker <username> ``` You will need to run `exec sg docker newgrp` from any of that user's existing sessions for it to access the new group permissions. You can verify that your current session recognizes the `docker` group by typing: ``` groups ``` ## 3. DNS setup To host n8n online or on a network, create a dedicated subdomain pointed at your server. Add an A record to route the subdomain accordingly: | Record type | Name | Destination | | ----------- | --------------------------------- | -------------------------- | | A | `n8n` (or your desired subdomain) | `<your-server-ip>` | ## 4. Create an `.env` file Create a project directory to store your n8n environment configuration and Docker Compose files and navigate inside: ``` mkdir n8n-compose cd n8n-compose ``` Inside the `n8n-compose` directory, create an `.env` file to customize your n8n instance's details. Change it to match your own information: ``` # DOMAIN_NAME and SUBDOMAIN together determine where n8n will be reachable from # The top level domain to serve from DOMAIN_NAME=example.com # The subdomain to serve from SUBDOMAIN=n8n # The above example serves n8n at: https://n8n.example.com # Optional timezone to set which gets used by Cron and other scheduling nodes # New York is the default value if not set GENERIC_TIMEZONE=Europe/Berlin # The email address to use for the TLS/SSL certificate creation SSL_EMAIL=user@example.com ``` ## 5.
Create local files directory Inside your project directory, create a directory called `local-files` for sharing files between the n8n instance and the host system (for example, using the [Read/Write Files from Disk node](../../../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/)): ``` mkdir local-files ``` The Docker Compose file below can automatically create this directory, but doing it manually ensures that it's created with the right ownership and permissions. ## 6. Create Docker Compose file Create a `compose.yaml` file. Paste the following in the file: ``` services: traefik: image: "traefik" restart: always command: - "--api.insecure=true" - "--providers.docker=true" - "--providers.docker.exposedbydefault=false" - "--entrypoints.web.address=:80" - "--entrypoints.web.http.redirections.entryPoint.to=websecure" - "--entrypoints.web.http.redirections.entrypoint.scheme=https" - "--entrypoints.websecure.address=:443" - "--certificatesresolvers.mytlschallenge.acme.tlschallenge=true" - "--certificatesresolvers.mytlschallenge.acme.email=${SSL_EMAIL}" - "--certificatesresolvers.mytlschallenge.acme.storage=/letsencrypt/acme.json" ports: - "80:80" - "443:443" volumes: - traefik_data:/letsencrypt - /var/run/docker.sock:/var/run/docker.sock:ro n8n: image: docker.n8n.io/n8nio/n8n restart: always ports: - "127.0.0.1:5678:5678" labels: - traefik.enable=true - traefik.http.routers.n8n.rule=Host(`${SUBDOMAIN}.${DOMAIN_NAME}`) - traefik.http.routers.n8n.tls=true - traefik.http.routers.n8n.entrypoints=web,websecure - traefik.http.routers.n8n.tls.certresolver=mytlschallenge - traefik.http.middlewares.n8n.headers.SSLRedirect=true - traefik.http.middlewares.n8n.headers.STSSeconds=315360000 - traefik.http.middlewares.n8n.headers.browserXSSFilter=true - traefik.http.middlewares.n8n.headers.contentTypeNosniff=true - traefik.http.middlewares.n8n.headers.forceSTSHeader=true - traefik.http.middlewares.n8n.headers.SSLHost=${DOMAIN_NAME} - 
traefik.http.middlewares.n8n.headers.STSIncludeSubdomains=true - traefik.http.middlewares.n8n.headers.STSPreload=true - traefik.http.routers.n8n.middlewares=n8n@docker environment: - N8N_ENFORCE_SETTINGS_FILE_PERMISSIONS=true - N8N_HOST=${SUBDOMAIN}.${DOMAIN_NAME} - N8N_PORT=5678 - N8N_PROTOCOL=https - NODE_ENV=production - WEBHOOK_URL=https://${SUBDOMAIN}.${DOMAIN_NAME}/ - GENERIC_TIMEZONE=${GENERIC_TIMEZONE} - TZ=${GENERIC_TIMEZONE} volumes: - n8n_data:/home/node/.n8n - ./local-files:/files volumes: n8n_data: traefik_data: ``` The Docker Compose file above configures two containers: one for n8n, and one to run [traefik](https://github.com/traefik/traefik), an application proxy to manage TLS/SSL certificates and handle routing. It also creates and mounts two [Docker Volumes](https://docs.docker.com/engine/storage/volumes/) and mounts the `local-files` directory you created earlier: | Name | Type | Container mount | Description | | --------------- | ----------------------------------------------------------- | ----------------- | ----------------------------------------------------------------------------------------------------------------------------------- | | `n8n_data` | [Volume](https://docs.docker.com/engine/storage/volumes/) | `/home/node/.n8n` | Where n8n saves its SQLite database file and encryption key. | | `traefik_data` | [Volume](https://docs.docker.com/engine/storage/volumes/) | `/letsencrypt` | Where traefik saves TLS/SSL certificate data. | | `./local-files` | [Bind](https://docs.docker.com/engine/storage/bind-mounts/) | `/files` | A local directory shared between the n8n instance and host. In n8n, use the `/files` path to read from and write to this directory. | ## 7. Start Docker Compose Start n8n by typing: ``` sudo docker compose up -d ``` To stop the containers, type: ``` sudo docker compose stop ``` ## 8. Done You can now reach n8n using the subdomain + domain combination you defined in your `.env` file configuration. 
The above example would result in `https://n8n.example.com`. n8n is only accessible using secure HTTPS, not over plain HTTP. If you have trouble reaching your instance, check your server's firewall settings and your DNS configuration. ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). # Hosting n8n on Google Cloud Run This hosting guide shows you how to self-host n8n on Google Cloud Run, a serverless container runtime. If you're just getting started with n8n and don't need a production-grade deployment, you can go with the "easy mode" option below for deployment. Otherwise, if you intend to use this n8n deployment at scale, refer to the "durable mode" instructions further down. You can also enable access via OAuth to Google Workspace services, such as Gmail and Drive, to use these services as n8n workflow tools. Instructions for granting n8n access to these services are at the end of this guide. If you want to deploy to Google Kubernetes Engine (GKE) instead, you can refer to [these instructions](../google-kubernetes-engine/). Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12).
Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Before you begin: get a Google Cloud project If you have not yet created a Google Cloud project, [do this first](https://developers.google.com/workspace/guides/create-project) (and ensure you have billing enabled on the project; even if your Cloud Run service runs for free, you must have billing activated to deploy). Otherwise, navigate to the project where you want to deploy n8n. ## Easy mode This is the fastest way to deploy n8n on Cloud Run. For this deployment, n8n's data is in-memory, so this is only recommended for demo purposes. **Anytime this Cloud Run service scales to zero or is redeployed, the n8n data will be lost.** Refer to the durable mode instructions below if you need a production-grade deployment. Open the Cloud Shell Terminal (on the Google Cloud console, either type "G" then "S" or click the terminal icon on the upper right). Once your session is open, you may need to run this command first to log in (and follow the steps it asks you to complete): ``` gcloud auth login ``` You can also explicitly enable the Cloud Run API (even if you don't do this, it will ask if you want this enabled when you deploy): ``` gcloud services enable run.googleapis.com ``` Required: Custom health check endpoint Google Cloud Run reserves `/healthz` for its own health checks. Since n8n uses this path by default, it can conflict and cause connection issues in the workflow canvas. To fix this, set the `N8N_ENDPOINT_HEALTH` environment variable to a custom path (included in the deployment commands below).
To deploy n8n: ``` gcloud run deploy n8n \ --image=n8nio/n8n \ --region=us-west1 \ --allow-unauthenticated \ --port=5678 \ --no-cpu-throttling \ --memory=2Gi \ --set-env-vars="N8N_ENDPOINT_HEALTH=health" ``` (you can specify whichever region you prefer, instead of "us-west1") Once the deployment finishes, open another tab to navigate to the Service URL. n8n may still be loading and you will see a "n8n is starting up. Please wait" message, but shortly thereafter you should see the n8n login screen. Optional: If you want to keep this n8n service running for as long as possible to avoid data loss, you can also set manual scaling to 1 to prevent it from autoscaling to 0. ``` gcloud run deploy n8n \ --image=n8nio/n8n \ --region=us-west1 \ --allow-unauthenticated \ --port=5678 \ --no-cpu-throttling \ --memory=2Gi \ --scaling=1 \ --set-env-vars="N8N_ENDPOINT_HEALTH=health" ``` This does not prevent data loss completely, such as whenever the Cloud Run service is re-deployed or updated. If you want truly persistent data, refer to the instructions below for how to attach a database. ## Durable mode The following instructions are intended for a more durable, production-grade deployment of n8n on Cloud Run. It includes resources such as a database for persistence and Secret Manager for sensitive data. If you want to deploy the following setup via Terraform, refer to this [example](https://github.com/ryanpei/n8n-hosting/tree/main/google-cloud-run), which deploys the same setup as the following (without the OAuth setup for Google Workspace tools).
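In either mode, you can read the deployed service's URL back from the command line instead of the console. This sketch assumes the service name `n8n` and the `us-west1` region from the easy mode commands above (use `$REGION` for the durable setup):

```
# Print the public URL of the deployed Cloud Run service
gcloud run services describe n8n --region=us-west1 --format='value(status.url)'
```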
## Enable APIs and set env vars Open the Cloud Shell Terminal (on the Google Cloud console, either type "G" then "S" or click the terminal icon on the upper right) and run these commands in the terminal session: ``` ## You may need to login first gcloud auth login gcloud services enable run.googleapis.com gcloud services enable sqladmin.googleapis.com gcloud services enable secretmanager.googleapis.com ``` You'll also want to set some environment variables for the remainder of these instructions: ``` export PROJECT_ID=your-project export REGION=region-where-you-want-this-deployed ``` ## Set up your Postgres database Run this command to create the Postgres DB instance (it will take a few minutes to complete; also ensure you update the root-password field with your own desired password): ``` gcloud sql instances create n8n-db \ --database-version=POSTGRES_13 \ --tier=db-f1-micro \ --region=$REGION \ --root-password="change-this-password" \ --storage-size=10GB \ --availability-type=ZONAL \ --no-backup \ --storage-type=HDD ``` Once complete, you can add the database that n8n will use: ``` gcloud sql databases create n8n --instance=n8n-db ``` Create the DB user for n8n (change the password value, of course): ``` gcloud sql users create n8n-user \ --instance=n8n-db \ --password="change-this-password" ``` You can save the password you set for this n8n-user to a file for the next step of saving the password in Secret Manager. Be sure to delete this file later. ## Store sensitive data in Secret Manager While not required, it's strongly recommended to store your sensitive data in Secret Manager.
Create a secret for the database password (replace "/your/password/file" with the file you created above for the n8n-user password): ``` gcloud secrets create n8n-db-password \ --data-file=/your/password/file \ --replication-policy="automatic" ``` Create an encryption key (you can use your own; this example generates a random one): ``` openssl rand -base64 -out my-encryption-key 42 ``` Create a secret for this encryption key (replace "my-encryption-key" if you are supplying your own): ``` gcloud secrets create n8n-encryption-key \ --data-file=my-encryption-key \ --replication-policy="automatic" ``` Now you can delete my-encryption-key and the database password files you created. These values are now securely stored in Secret Manager. ## Create a service account for Cloud Run You want this Cloud Run service to be restricted to access only the resources it needs. The following commands create the service account and add the permissions necessary to access secrets and the database: ``` gcloud iam service-accounts create n8n-service-account \ --display-name="n8n Service Account" gcloud secrets add-iam-policy-binding n8n-db-password \ --member="serviceAccount:n8n-service-account@$PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/secretmanager.secretAccessor" gcloud secrets add-iam-policy-binding n8n-encryption-key \ --member="serviceAccount:n8n-service-account@$PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/secretmanager.secretAccessor" gcloud projects add-iam-policy-binding $PROJECT_ID \ --member="serviceAccount:n8n-service-account@$PROJECT_ID.iam.gserviceaccount.com" \ --role="roles/cloudsql.client" ``` ## Deploy the Cloud Run service Now you can deploy your n8n service: ``` gcloud run deploy n8n \ --image=n8nio/n8n:latest \ --command="/bin/sh" \ --args="-c,sleep 5;n8n start" \ --region=$REGION \ --allow-unauthenticated \ --port=5678 \ --memory=2Gi \ --no-cpu-throttling \ 
--set-env-vars="N8N_PORT=5678,N8N_PROTOCOL=https,N8N_ENDPOINT_HEALTH=health,DB_TYPE=postgresdb,DB_POSTGRESDB_DATABASE=n8n,DB_POSTGRESDB_USER=n8n-user,DB_POSTGRESDB_HOST=/cloudsql/$PROJECT_ID:$REGION:n8n-db,DB_POSTGRESDB_PORT=5432,DB_POSTGRESDB_SCHEMA=public,GENERIC_TIMEZONE=UTC,QUEUE_HEALTH_CHECK_ACTIVE=true" \ --set-secrets="DB_POSTGRESDB_PASSWORD=n8n-db-password:latest,N8N_ENCRYPTION_KEY=n8n-encryption-key:latest" \ --add-cloudsql-instances=$PROJECT_ID:$REGION:n8n-db \ --service-account=n8n-service-account@$PROJECT_ID.iam.gserviceaccount.com ``` Once the deployment finishes, open another tab to navigate to the Service URL. You should see the n8n login screen. ## Troubleshooting If you see a "Cannot GET /" screen, this usually indicates that n8n is still starting up. You can refresh the page and it should eventually load. ## (Optional) Enabling Google Workspace services as n8n tools If you want to use Google Workspace services (Gmail, Calendar, Drive, etc.) as tools in n8n, it's recommended to set up OAuth to access these services. First ensure the respective APIs you want are enabled: ``` ## Enable whichever APIs you need ## Note: If you want Sheets/Docs, it's not enough to just enable Drive; these services each have their own API gcloud services enable gmail.googleapis.com gcloud services enable drive.googleapis.com gcloud services enable sheets.googleapis.com gcloud services enable docs.googleapis.com gcloud services enable calendar-json.googleapis.com ``` Re-deploy n8n on Cloud Run with the necessary OAuth callback URLs as environment variables: ``` export SERVICE_URL="your-n8n-service-URL" ## e.g. https://n8n-12345678.us-west1.run.app gcloud run services update n8n \ --region=$REGION \ --update-env-vars="N8N_HOST=$(echo $SERVICE_URL | sed 's/https:\/\///'),WEBHOOK_URL=$SERVICE_URL,N8N_EDITOR_BASE_URL=$SERVICE_URL" ``` Lastly, you must set up OAuth for these services. Visit `https://console.cloud.google.com/auth` and follow these steps: 1. 
Click "Get Started" if this button shows (when you have not yet set up OAuth in this Cloud project). 1. For "App Information", enter whichever "App Name" and "User Support Email" you prefer. 1. For "Audience", select "Internal" if you intend to only enable access to your user(s) within this same Google Workspace. Otherwise, you can select "External". 1. Enter "Contact Information". 1. If you selected "External", then click "Audience" and add any test users you need to grant access. 1. Click "Clients" > "Create client", select "Web application" for "Application type", enter your n8n service URL into "Authorized JavaScript origins", and "YOUR-N8N-URL/rest/oauth2-credential/callback" into "Authorized redirect URIs", where YOUR-N8N-URL is your n8n service URL (e.g. `https://n8n-12345678.us-west1.run.app/rest/oauth2-credential/callback`). Make sure you download the created client's JSON file since it contains the client secret, which you will not be able to see later in the Console. 1. Click "Data Access" and add the scopes you want n8n to have access to (e.g. to access Google Sheets, you need `https://www.googleapis.com/auth/drive.file` and `https://www.googleapis.com/auth/spreadsheets`). 1. Now you should be able to use these workspace services. You can test if it works by logging into n8n, adding a Tool for the respective service, and adding its credentials using the information in the OAuth client JSON file from step 6. # Hosting n8n on Google Kubernetes Engine Google Cloud offers several options suitable for hosting n8n, including Cloud Run (optimized for running containers), Compute Engine (VMs), and Kubernetes Engine (containers running with Kubernetes). This guide uses the Google Kubernetes Engine (GKE) as the hosting option. If you want to use Cloud Run, refer to [these instructions](../google-cloud-run/). 
Most of the steps in this guide use the Google Cloud UI, but you can also use the [gcloud command line tool](https://cloud.google.com/sdk/gcloud/) to perform all the steps instead. ## Prerequisites - The [gcloud command line tool](https://cloud.google.com/sdk/gcloud/) - The [gke-gcloud-auth-plugin](https://cloud.google.com/blog/products/containers-kubernetes/kubectl-auth-changes-in-gke) (install the gcloud CLI first) Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Create project GCP encourages you to create projects to logically organize resources and configuration. Create a new project for your n8n deployment from your Google Cloud Console: select the project dropdown menu and then the **NEW PROJECT** button. Then select the newly created project. As you follow the other steps in this guide, make sure you have the correct project selected. ## Enable the Kubernetes Engine API GKE isn't enabled by default. Search for "Kubernetes" in the top search bar and select "Kubernetes Engine" from the results. Select **ENABLE** to enable the Kubernetes Engine API for this project. ## Create a cluster From the [GKE service page](https://console.cloud.google.com/kubernetes/list/overview), select **Clusters** > **CREATE**. 
Make sure you select the "Standard" cluster option; n8n doesn't work with an "Autopilot" cluster. You can leave the cluster configuration on defaults unless there's anything specific you need to change, such as location. ## Set Kubectl context The rest of the steps in this guide require you to set the GCP instance as the Kubectl context. You can find the connection details for a cluster instance by opening its details page and selecting **CONNECT**. The displayed code snippet shows a connection string for the gcloud CLI tool. Paste and run the code snippet in the gcloud CLI to change your local Kubernetes settings to use the new gcloud cluster. ## Clone configuration repository Kubernetes and n8n require a series of configuration files. You can clone these from [this repository](https://github.com/n8n-io/n8n-hosting) locally. The following steps explain the file configuration and how to add your information. Clone the repository with the following command: ``` git clone https://github.com/n8n-io/n8n-hosting.git ``` And change directory: ``` cd n8n-hosting/kubernetes ``` ## Configure Postgres For larger scale n8n deployments, Postgres provides a more robust database backend than SQLite. ### Create a volume for persistent storage To maintain data between pod restarts, the Postgres deployment needs a persistent volume. Running Postgres on GCP requires a specific Kubernetes Storage Class. You can read [this guide](https://cloud.google.com/architecture/deploying-highly-available-postgresql-with-gke) for specifics, but the `storage.yaml` manifest creates it for you. You may want to change the zones the storage is created in under the `allowedTopologies` > `matchLabelExpressions` > `values` key. By default, they're set to zones in `us-central1`. 
``` … allowedTopologies: - matchLabelExpressions: - key: failure-domain.beta.kubernetes.io/zone values: - us-central1-b - us-central1-c ``` ### Postgres environment variables Postgres needs some environment variables set to pass to the application running in the containers. The example `postgres-secret.yaml` file contains placeholders you need to replace with your own values. Postgres will use these details when creating the database. The `postgres-deployment.yaml` manifest then uses the values from this manifest file to send to the application pods. ## Configure n8n ### Create a volume for file storage While not essential for running n8n, using persistent volumes is required for: - Using nodes that interact with files, such as the binary data node. - Persisting [manual n8n encryption keys](../../../configuration/environment-variables/deployment/) between restarts. This saves a file containing the key into file storage during startup. The `n8n-claim0-persistentvolumeclaim.yaml` manifest creates this, and the n8n Deployment mounts that claim in the `volumes` section of the `n8n-deployment.yaml` manifest. ``` … volumes: - name: n8n-claim0 persistentVolumeClaim: claimName: n8n-claim0 … ``` ### Pod resources [Kubernetes lets you](https://kubernetes.io/docs/concepts/configuration/manage-resources-containers/) optionally specify the minimum resources application containers need and the limits they can consume. The example YAML files cloned above contain the following in the `resources` section of the `n8n-deployment.yaml` and `postgres-deployment.yaml` files: ``` … resources: requests: memory: "250Mi" limits: memory: "500Mi" … ``` This defines a minimum of 250MB of memory per container and a maximum of 500MB, and lets Kubernetes handle CPU. You can change these values to match your own needs. 
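If you also want to bound CPU rather than leaving it to Kubernetes, the `resources` section can specify `cpu` requests and limits in the same way. A sketch (the CPU figures here are illustrative, not n8n recommendations):

```yaml
resources:
  requests:
    memory: "250Mi"
    cpu: "100m"    # guarantee a tenth of a core
  limits:
    memory: "500Mi"
    cpu: "500m"    # cap the container at half a core
```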
As a guide, here are the resource values for the n8n cloud offerings: - **Start**: 320MB RAM, 10 millicore CPU burstable - **Pro (10k executions)**: 640MB RAM, 20 millicore CPU burstable - **Pro (50k executions)**: 1280MB RAM, 80 millicore CPU burstable ### Optional: Environment variables You can configure n8n settings and behaviors using environment variables. Create an `n8n-secret.yaml` file. Refer to [Environment variables](../../../configuration/environment-variables/) for n8n environment variables details. ## Deployments The two deployment manifests (`n8n-deployment.yaml` and `postgres-deployment.yaml`) define the n8n and Postgres applications to Kubernetes. The manifests define the following: - The environment variables to send to each application pod - The container image to use - Resource consumption limits, set with the `resources` object - The `volumes` defined earlier, and `volumeMounts` to define the path in the container to mount volumes - Scaling and restart policies The example manifests define one instance of each pod. You should change this to meet your needs. ## Services The two service manifests (`postgres-service.yaml` and `n8n-service.yaml`) expose the services to the outside world through the Kubernetes load balancer, on ports 5432 and 5678 respectively. ## Send to Kubernetes cluster Send all the manifests to the cluster with the following command: ``` kubectl apply -f . ``` Namespace error You may see an error message about not finding an "n8n" namespace as that resource isn't ready yet. You can run the same command again, or apply the namespace manifest first with the following command: ``` kubectl apply -f namespace.yaml ``` ## Set up DNS n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the n8n service. Find the IP address of the n8n service from the **Services & Ingress** menu item of the cluster you want to use under the **Endpoints** column. 
GKE and IP addresses [Read this GKE tutorial](https://cloud.google.com/kubernetes-engine/docs/tutorials/configuring-domain-name-static-ip#configuring_your_domain_name_records) for more details on how reserved IP addresses work with GKE and Kubernetes resources. ## Delete resources Remove the resources created by the manifests with the following command: ``` kubectl delete -f . ``` ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). # Hosting n8n on Heroku This hosting guide shows you how to self-host n8n on Heroku. It uses: - [Docker Compose](https://docs.docker.com/compose/) to create and define the application components and how they work together. - [Heroku's PostgreSQL service](https://devcenter.heroku.com/categories/heroku-postgres) to host n8n's data storage. - A **Deploy to Heroku** button offering one-click deployment with minor configuration. Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). 
Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Use the deployment template to create a Heroku project The quickest way to get started with deploying n8n to Heroku is using the **Deploy to Heroku** button: This opens the **Create New App** page on Heroku. Set a name for the project, and choose the region to deploy the project to. ### Configure environment variables Heroku pre-fills the configuration options defined in the `env` section of the `app.json` file, which also sets default values for the environment variables n8n uses. You can change any of these values to suit your needs. You must change the following values: - **N8N_ENCRYPTION_KEY**, which n8n uses to [encrypt user account details](../../../configuration/environment-variables/deployment/) before saving to the database. - **WEBHOOK_URL** should match the application name you create to ensure that webhooks have the correct URL. ### Deploy n8n Select **Deploy app**. After Heroku builds and deploys the app it provides links to **Manage App** or **View** the application. Heroku and DNS Refer to the [Heroku documentation](https://devcenter.heroku.com/categories/networking-dns) to find out how to connect your domain to a Heroku application. ## Changing the deployment template You can make changes to the deployment template by forking the [repository](https://github.com/n8n-io/n8n-heroku) and deploying from your fork. ### The Dockerfile By default, the Dockerfile pulls the latest n8n image. If you want to use a different or pinned version, update the image tag on the top line of the `Dockerfile`. ### Heroku and exposing ports Heroku doesn't allow Docker-based applications to define an exposed port with the `EXPOSE` command. Instead, Heroku provides a `PORT` environment variable that it dynamically populates at application runtime. The `entrypoint.sh` file overrides the default Docker image command to instead set the port variable that Heroku provides. You can then access n8n on port 80 in a web browser. 
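The mechanism can be sketched as a minimal entrypoint script. This is an illustration of the idea only, assuming the `PORT`-to-`N8N_PORT` mapping described above; the actual `entrypoint.sh` in the repository may differ:

```shell
#!/bin/sh
# Heroku injects PORT at runtime; fall back to n8n's default when it's unset.
N8N_PORT="${PORT:-5678}"
export N8N_PORT
echo "n8n will listen on port $N8N_PORT"
# A real entrypoint would now hand control to the process, e.g.: exec n8n start
```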
Docker limitations with Heroku [Read this guide](https://devcenter.heroku.com/articles/container-registry-and-runtime#unsupported-dockerfile-commands) for more details on the limitations of using Docker with Heroku. ### Configuring Heroku The `heroku.yml` file defines the application you want to create on Heroku. It consists of two sections: - `setup` > `addons` defines the Heroku addons to use. In this case, the PostgreSQL database addon. - The `build` section defines how Heroku builds the application. In this case it uses the Docker buildpack to build a `web` service based on the supplied `Dockerfile`. ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). # Hosting n8n on Hetzner cloud This hosting guide shows you how to self-host n8n on a Hetzner cloud server. It uses: - [Caddy](https://caddyserver.com) (a reverse proxy) to allow access to the server from the internet. - [Docker Compose](https://docs.docker.com/compose/) to create and define the application components and how they work together. Self-hosting knowledge prerequisites Self-hosting n8n requires technical knowledge, including: - Setting up and configuring servers and containers - Managing application resources and scaling - Securing servers and applications - Configuring n8n n8n recommends self-hosting for expert users. Mistakes can lead to data loss, security issues, and downtime. If you aren't experienced at managing servers, n8n recommends [n8n Cloud](https://n8n.io/cloud/). Stable and Beta versions n8n releases a new minor version most weeks. The `stable` version is for production use. `beta` is the most recent release. The `beta` version may be unstable. To report issues, use the [forum](https://community.n8n.io/c/questions/12). Current `stable`: 2.20.6 Current `beta`: 2.21.0 ## Create a server 1. 
[Log in](https://console.hetzner.cloud/) to the Hetzner Cloud Console. 1. Select the project to host the server, or create a new project by selecting **+ NEW PROJECT**. 1. Select **+ CREATE SERVER** on the project tile you want to add it to. You can change most of the settings to suit your needs, but as this guide uses Docker to run the application, under the **Image** section, select "Docker CE" from the **APPS** tab. Type When creating the server, Hetzner asks you to choose a plan. For most usage levels, the CPX11 type is enough. SSH keys Hetzner lets you choose between SSH and password-based authentication. SSH is more secure. The rest of this guide assumes you are using SSH. ## Log in to your server The rest of this guide requires you to log in to the server using a terminal with SSH. Refer to [Access with SSH/rsync/BorgBackup](https://docs.hetzner.com/robot/storage-box/access/access-ssh-rsync-borg) for more information. You can find the public IP in the listing of the servers in your project. ## Install Docker Compose The Hetzner Docker app image doesn't have Docker Compose installed. Install it with the following commands: ``` apt update && apt -y upgrade apt install docker-compose-plugin ``` ## Clone configuration repository Docker Compose, n8n, and Caddy require a series of folders and configuration files. You can clone these from [this repository](https://github.com/n8n-io/n8n-docker-caddy) into the root user folder of the server. The following steps will tell you which file to change and what changes to make. Clone the repository with the following command: ``` git clone https://github.com/n8n-io/n8n-docker-caddy.git ``` And change directory to the root of the repository you cloned: ``` cd n8n-docker-caddy ``` ## Default folders and files The host operating system (the server) copies the two folders from the cloned repository to the Docker containers to make them available to Docker. The two folders are: - `caddy_config`: Holds the Caddy configuration files. 
- `local_files`: A folder for files you upload or add using n8n. ### Create Docker volume To persist the Caddy cache between restarts and speed up start times, create [a Docker volume](https://docs.docker.com/storage/volumes/) that Docker reuses between restarts: ``` docker volume create caddy_data ``` Create a Docker volume for the n8n data: ``` sudo docker volume create n8n_data ``` ## Set up DNS n8n typically operates on a subdomain. Create a DNS record with your provider for the subdomain and point it to the IP address of the server. The exact steps for this depend on your DNS provider, but typically you need to create a new "A" record for the n8n subdomain. DigitalOcean provides [An Introduction to DNS Terminology, Components, and Concepts](https://www.digitalocean.com/community/tutorials/an-introduction-to-dns-terminology-components-and-concepts). ## Open ports n8n runs as a web application, so the server needs to allow incoming access to traffic on port 80 for non-secure traffic, and port 443 for secure traffic. Open the following ports in the server's firewall by running the following two commands: ``` sudo ufw allow 80 sudo ufw allow 443 ``` ## Configure n8n n8n needs some environment variables set to pass to the application running in the Docker container. The example `.env` file contains placeholders you need to replace with values of your own. Open the file with the following command: ``` nano .env ``` The file contains inline comments to help you know what to change. Refer to [Environment variables](../../../configuration/environment-variables/) for n8n environment variables details. ## The Docker Compose file The Docker Compose file (`docker-compose.yml`) defines the services the application needs, in this case Caddy and n8n. - The Caddy service definition defines the ports it uses and the local volumes to copy to the containers. 
- The n8n service definition defines the ports it uses, the environment variables n8n needs to run (some defined in the `.env` file), and the volumes it needs to copy to the containers. The Docker Compose file uses the environment variables set in the `.env` file, so you shouldn't need to change its contents, but to take a look, run the following command: ``` nano docker-compose.yml ``` ## Configure Caddy Caddy needs to know which domains it should serve, and which port to expose to the outside world. Edit the `Caddyfile` file in the `caddy_config` folder. ``` nano caddy_config/Caddyfile ``` Change the placeholder subdomain to yours. If you followed the steps to name the subdomain n8n, your full domain is similar to `n8n.example.com`. The `n8n` in the `reverse_proxy` setting tells Caddy to use the service definition defined in the `docker-compose.yml` file: ``` n8n.<domain>.<suffix> { reverse_proxy n8n:5678 { flush_interval -1 } } ``` ## Start Docker Compose Start n8n and Caddy with the following command: ``` docker compose up -d ``` This may take a few minutes. ## Test your setup In your browser, open the URL formed of the subdomain and domain name defined earlier. Enter the user name and password defined earlier, and you should be able to access n8n. ## Stop n8n and Caddy You can stop n8n and Caddy with the following command: ``` sudo docker compose stop ``` ## Updating If you run n8n using a Docker Compose file, follow these steps to update n8n: ``` # Navigate to the directory containing your docker compose file cd # Pull latest version docker compose pull # Stop and remove older version docker compose down # Start the container docker compose up -d ``` ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). 
# Hosting n8n on OpenShift Local (CRC) This guide walks you through deploying n8n on OpenShift Local (CRC), Red Hat's tool for running a local OpenShift cluster. It mirrors AWS/EKS deployment, but runs entirely on your local machine. It's designed for testing n8n in an OpenShift environment locally, without cloud costs. You will need a machine with significant resources available, given how many resources OpenShift itself consumes. ## OpenShift concepts vs standard Kubernetes OpenShift is built on Kubernetes but uses different terminology and has stricter security defaults. If you are familiar with standard Kubernetes, or with a guide that targets a managed Kubernetes service such as EKS, the table below maps the equivalent concepts so you know what to expect. | Standard Kubernetes / EKS | OpenShift Local (CRC) | | ---------------------------- | --------------------------------------------------------- | | `kubectl` | `oc` (OpenShift CLI; also understands `kubectl` commands) | | Namespace | Project (same concept, different command) | | Ingress / LoadBalancer | Route (built into OpenShift, no controller needed) | | EBS StorageClass (gp3) | CRC built-in storage provisioner (no setup needed) | | RDS PostgreSQL | In-cluster PostgreSQL via Helm (Bitnami) | | ElastiCache Redis | In-cluster Redis via Helm (Bitnami) | | AWS S3 | MinIO in-cluster (S3-compatible) | | Pod Identity / IRSA | Access keys via Kubernetes Secret | | AWS Load Balancer Controller | Not needed (Routes are built-in) | | OIDC / IAM | Not needed | | ~$135–400/month | Free (runs on your machine) | ## Prerequisites Before starting, confirm your machine has: - **CPU**: 4 or more physical cores (not just threads) with virtualization support - **RAM**: 32+ GB free minimum (CRC reserves 9 GB for its VM) - **Disk**: 100 GB free - **OS**: Ubuntu (22.04 LTS or newer) ## Prepare Ubuntu ### Open a terminal Press `Ctrl+Alt+T` or search for **Terminal** in the Applications menu. 
Every command in this guide is typed into the terminal and run by pressing **Enter**. ### Update your system Start with a system update to avoid dependency issues: ``` sudo apt update && sudo apt upgrade -y ``` sudo `sudo` means “run as administrator”. You will be prompted for your password. Characters you type won't appear on screen; this is normal. ### Check CPU virtualization support CRC runs a virtual machine. Your CPU must support hardware virtualization: ``` egrep -c '(vmx|svm)' /proc/cpuinfo ``` - **Output `0`**: Virtualization is disabled. Enter your BIOS/UEFI settings and enable VT-x (Intel) or AMD-V (AMD), then reboot and try again. - **Output `1` or higher**: You are good to continue. ### Install KVM and libvirt KVM is Linux’s built-in hypervisor. CRC uses it to run the OpenShift cluster VM: ``` sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients bridge-utils ``` Install `virtiofsd`, which CRC requires to share the filesystem with the cluster VM: ``` sudo apt install -y virtiofsd ``` Start the libvirt service and configure it to start automatically on boot: ``` sudo systemctl start libvirtd sudo systemctl enable libvirtd ``` Verify it's running: ``` sudo systemctl status libvirtd ``` Look for `Active: active (running)` in green. Press `q` to exit. ### Add user to required groups This allows you to use KVM and libvirt without typing `sudo` for every command: ``` sudo usermod -aG libvirt $USER sudo usermod -aG kvm $USER ``` Warning **You must log out and log back in (or reboot) for this to take effect.** If you skip this step, CRC will fail with a “permission denied” error. Reboot now: ``` sudo reboot ``` After logging back in, open a terminal and verify group membership: ``` groups ``` You should see `libvirt` and `kvm` listed. 
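If you prefer a scripted check instead of reading the `groups` output by eye, a small sketch (purely illustrative; the `groups` command above is sufficient):

```shell
# id -nG lists the current user's groups; print a status line per required group
for g in libvirt kvm; do
  if id -nG | tr ' ' '\n' | grep -qx "$g"; then
    echo "$g: ok"
  else
    echo "$g: missing (log out and back in, or reboot)"
  fi
done
```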
### Install NetworkManager CRC requires NetworkManager to manage DNS entries for the cluster’s internal domains (`*.apps-crc.testing`, `api.crc.testing`): ``` sudo apt install -y network-manager sudo systemctl start NetworkManager sudo systemctl enable NetworkManager ``` Verify it's connected: ``` nmcli general status ``` The `STATE` column should show `connected`. ## Install tools ### Get a Red Hat account and pull secret CRC requires a free Red Hat account to pull container images. 1. [Create a free Red Hat account](https://console.redhat.com/), if you don't already have one. 1. In [console.redhat.com/openshift/create/local](https://console.redhat.com/openshift/create/local), click **Download OpenShift Local**. 1. Select **Linux**, and download the `.tar.xz` file to `~/Downloads`. 1. On the same page of the Red Hat console, click **Copy pull secret**. Paste it into a text file and save it for later. ### Install CRC Open a terminal in your Downloads folder. ``` cd ~/Downloads ``` Extract the archive. ``` tar xf crc-linux-amd64.tar.xz ``` Move the `crc` binary to a system-wide location, so it's available in any terminal: ``` sudo mv crc-*-linux-amd64/crc /usr/local/bin/ ``` Verify the installation: ``` crc version ``` A version number should print to the terminal. ### Install Helm Helm installs n8n and supporting services into the cluster: ``` curl https://raw.githubusercontent.com/helm/helm/main/scripts/get-helm-3 | bash ``` Verify: ``` helm version ``` ### Set environment variables ``` export NAMESPACE=n8n-$(date +%Y%m%d) echo "Namespace:$NAMESPACE" ``` Variable persistence These variables only last for the current terminal session. Re-run this line whenever you open a new terminal before continuing. ## Start OpenShift Local ### Run CRC setup You only need to run this once. It configures KVM networking, checks system requirements, and downloads the CRC bundle (~2.5 GB): ``` crc setup ``` This takes several minutes. 
If it reports any missing packages, install them with `sudo apt install -y ` and re-run. ### Configure CRC memory and start the cluster CRC defaults to 9 GB of RAM for its VM. n8n and its supporting services need more headroom. Set the memory to 14 GB before starting: ``` crc config set memory 14336 ``` You only need to run this once. The setting persists across `crc stop` / `crc start` cycles. **Recommended:** Save your pull secret to a file first so you don’t have to paste it every time: ``` # Open the file, paste your pull secret (from earlier), then Ctrl+O to save, Ctrl+X to exit nano ~/pull-secret.txt # Restrict permissions so only you can read it chmod 600 ~/pull-secret.txt ``` Start CRC using the file: ``` crc start --pull-secret-file ~/pull-secret.txt ``` Alternatively, run `crc start` without the flag and paste the secret when prompted. **This takes 10–15 minutes.** When complete you will see something like: ``` Started the OpenShift cluster. The server is accessible via web console at: https://console-openshift-console.apps-crc.testing Log in as administrator: Username: kubeadmin Password: Log in as user: Username: developer Password: developer ``` **Save the `kubeadmin` password now.** You will need it in the next step. You can retrieve it later using `crc console --credentials`. ### Verify DNS resolution On Ubuntu, CRC configures the system resolver automatically with NetworkManager and systemd-resolved. No manual `/etc/hosts` entries are needed. Verify the API is reachable: ``` sudo ss -tlnp | grep 6443 ``` You should see a process bound to `127.0.0.1:6443`. If nothing appears, re-run `crc start`. If DNS doesn't resolve `*.apps-crc.testing`, see the troubleshooting section. ### Configure your shell CRC bundles the `oc` CLI inside the VM. 
This command makes it available in your terminal: ``` eval $(crc oc-env) ``` To make this permanent so you don't have to run it every time you open a terminal: ``` echo 'eval $(crc oc-env)' >> ~/.bashrc source ~/.bashrc ``` Verify `oc` works: ``` oc version ``` ### Log in to the cluster ``` oc login -u kubeadmin -p https://api.crc.testing:6443 ``` Replace `` with the password printed when you [configured CRC memory and started the cluster](#configure-crc-memory-and-start-the-cluster). Verify you are logged in: ``` oc whoami ``` `kubeadmin` should print to the screen. ## Standalone deployment Standalone mode runs n8n as a single pod with SQLite. No external database or Redis is required. This is ideal for exploring n8n and testing workflows locally. ### Create the project In OpenShift, a **project** is the same as a Kubernetes namespace: an isolated space for your resources: ``` oc new-project $NAMESPACE ``` ### Grant the required security permission OpenShift enforces strict security policies called **Security Context Constraints (SCCs)**. By default, pods can't run with a specific user ID. The n8n chart runs as user ID `1000`, so you must explicitly allow this. Use the full explicit form. The shorthand `-z` flag can silently fail in some OpenShift versions: ``` oc adm policy add-scc-to-user anyuid \ system:serviceaccount:$NAMESPACE:n8n ``` Verify the binding was created: ``` oc get rolebindings -n $NAMESPACE ``` You should see a binding referencing `system:openshift:scc:anyuid`. ### Create the required secret ``` oc create secret generic n8n-secrets \ --namespace $NAMESPACE \ --from-literal=N8N_ENCRYPTION_KEY="$(openssl rand -hex 32)" \ --from-literal=N8N_HOST="localhost" \ --from-literal=N8N_PORT="5678" \ --from-literal=N8N_PROTOCOL="http" ``` **Back up the encryption key immediately:** ``` oc get secret n8n-secrets -n $NAMESPACE \ -o jsonpath='{.data.N8N_ENCRYPTION_KEY}' | base64 --decode ``` Copy that output and store it somewhere safe. 
Losing it means all stored credentials in your workflows become permanently unreadable. ### Create your values file Create a file called `n8n-standalone-values.yaml`. You can use `nano` (a simple text editor): ``` nano n8n-standalone-values.yaml ``` Paste the following, then press `Ctrl+O` to save and `Ctrl+X` to exit: ``` # n8n-standalone-values.yaml # Single pod, SQLite database, no external dependencies. queueMode: enabled: false database: type: sqlite useExternal: false redis: enabled: false # PVC stores the SQLite database file. persistence: enabled: true size: 5Gi # No storageClassName needed — CRC provides a default storage provisioner. secretRefs: existingSecret: "n8n-secrets" service: type: ClusterIP port: 5678 # OpenShift: securityContext must be enabled so the pod runs as UID 1000 (node user) # with fsGroup 1000 (so the PVC is writable). The anyuid SCC granted above # allows this. The seccompProfile line is removed from the chart template in # "Deploy n8n" because OpenShift 4.14+ rejects it even with anyuid. securityContext: enabled: true resources: main: requests: cpu: 100m memory: 256Mi limits: cpu: "1" memory: 1Gi config: timezone: UTC ``` ### Deploy n8n The n8n Helm chart hard codes `seccompProfile: RuntimeDefault` in the pod spec. OpenShift 4.14+ converts this to a deprecated alpha annotation that's rejected at admission, even when `anyuid` SCC is granted. The fix is to pull the chart locally, remove those two lines, and install from the patched copy. 
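If you want to see what that removal does before touching the real chart, you can rehearse the `sed` expression on a stub file. The YAML below is a hypothetical fragment for illustration, not the actual chart template:

```shell
# Rehearse the seccompProfile patch on a stub file.
# This YAML fragment is illustrative only, not the real deployment-main.yaml.
cat > /tmp/demo-deployment.yaml <<'EOF'
spec:
  securityContext:
    runAsUser: 1000
    fsGroup: 1000
    seccompProfile:
      type: RuntimeDefault
EOF

# Same expression as the real patch: delete any line matching either pattern.
sed -i '/seccompProfile:/d; /type: RuntimeDefault/d' /tmp/demo-deployment.yaml

# The remaining securityContext fields are untouched.
cat /tmp/demo-deployment.yaml
```

Because `sed` deletes matching lines file-wide, the `grep` check in the next step matters: confirm both that the two seccomp lines are gone and that nothing unrelated was removed.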
**Pull and patch the chart:** ``` helm pull oci://ghcr.io/n8n-io/n8n-helm-chart/n8n --version 1.0.3 --untar sed -i '/seccompProfile:/d; /type: RuntimeDefault/d' ~/n8n/templates/deployment-main.yaml # Confirm the lines are gone (should return no output) grep -n "seccomp\|RuntimeDefault" ~/n8n/templates/deployment-main.yaml ``` **Install from the patched chart:** ``` helm install n8n ~/n8n/ \ --namespace $NAMESPACE \ --values n8n-standalone-values.yaml \ --wait \ --timeout 10m ``` ### Access n8n using port forward OpenShift Routes require a hostname, which adds complexity for standalone local access. Port-forward is simpler: ``` oc port-forward service/n8n-main --namespace $NAMESPACE 5678:5678 ``` Leave this running, then open your browser to: ``` http://localhost:5678 ``` n8n will prompt you to create an owner account. Stop tunnel Press `Ctrl+C` to stop the tunnel. Re-run the `port-forward` command to access n8n again later. ### Check deployment status ``` oc get pods -n $NAMESPACE ``` Expected: ``` NAME READY STATUS RESTARTS AGE n8n-main-7d9f8b-xxxx 1/1 Running 0 3m ``` **Standalone deployment complete.** ## Multi-instance queue mode Multi-instance queue mode runs multiple n8n pods with a shared database, message queue, and object storage. It requires an [n8n Enterprise license](https://n8n.io/pricing/). 
Instead of AWS managed services, this guide uses in-cluster equivalents that mirror what you would find in an on-premises or customer OpenShift environment: | AWS Service | Local Equivalent | | ----------------- | ----------------------------------------- | | RDS PostgreSQL | PostgreSQL (Bitnami Helm chart) | | ElastiCache Redis | Redis (Bitnami Helm chart) | | S3 | MinIO (S3-compatible, Bitnami Helm chart) | ### Install in-cluster services #### Create the Project and add Bitnami Helm repo ``` oc new-project $NAMESPACE ``` Add the Bitnami chart repository (only needed once): ``` helm repo add bitnami https://charts.bitnami.com/bitnami helm repo update ``` #### Install PostgreSQL In the command below, replace `YourStrongPassword123` with a suitable complex password. ``` helm install postgresql bitnami/postgresql \ --namespace $NAMESPACE \ --set auth.username=n8n \ --set auth.password='YourStrongPassword123' \ --set auth.database=n8n_enterprise \ --set global.compatibility.openshift.adaptSecurityContext=auto \ --wait ``` Flag The `global.compatibility.openshift.adaptSecurityContext=auto` flag tells Bitnami to let OpenShift assign the correct user ID automatically (avoids SCC errors). Save the endpoint, as it's fixed for in-cluster services: ``` postgresql.YOUR_NAMESPACE.svc.cluster.local ``` Replace `YOUR_NAMESPACE` with your actual `$NAMESPACE` value (e.g. `n8n-20260306`). #### Install Redis ``` helm install redis bitnami/redis \ --namespace $NAMESPACE \ --set auth.enabled=false \ --set architecture=standalone \ --set global.compatibility.openshift.adaptSecurityContext=auto \ --wait ``` Redis endpoint: `redis-master.$NAMESPACE.svc.cluster.local` #### Install MinIO (S3-compatible storage) In the command below, replace `MinioStrongPassword123` with a suitable complex password. 
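Rather than inventing these passwords by hand, you can generate them. This is a generic sketch, not a step from the guide; store whatever you generate somewhere safe, because you need the same values again when creating the n8n secrets later:

```shell
# Generate a random 24-byte password suitable for the helm --set auth flags.
openssl rand -base64 24
```

Note that the output can contain `/` and `+` characters, so quote it in single quotes when passing it to `--set`, as the commands in this guide do.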
``` helm install minio bitnami/minio \ --namespace $NAMESPACE \ --set auth.rootUser=minioadmin \ --set auth.rootPassword='MinioStrongPassword123' \ --set global.compatibility.openshift.adaptSecurityContext=auto \ --wait ``` MinIO endpoint: `http://minio:9000` (within the same namespace, just the service name works) #### Create the n8n storage bucket in MinIO MinIO needs a bucket created before n8n can use it. Use the MinIO web console: **Open the MinIO console:** ``` oc port-forward svc/minio 9001:9001 -n $NAMESPACE ``` Leave this running, then open your browser to `http://localhost:9001`. Log in with: - **Username:** `minioadmin` - **Password:** `MinioStrongPassword123` In the console: 1. Click **Buckets** in the left sidebar → **Create Bucket** 1. **Bucket Name:** `n8n-data` 1. Click **Create Bucket** Go back to the terminal and press `Ctrl+C` to stop the port-forward. ### Deploy n8n #### Grant SCC for n8n ``` oc adm policy add-scc-to-user anyuid \ system:serviceaccount:$NAMESPACE:n8n-enterprise ``` Verify that `oc get rolebindings -n $NAMESPACE` shows a binding for `system:openshift:scc:anyuid`. #### Create required secrets ``` # Core n8n secrets oc create secret generic n8n-enterprise-secrets \ --namespace $NAMESPACE \ --from-literal=N8N_ENCRYPTION_KEY="$(openssl rand -hex 32)" \ --from-literal=N8N_HOST="localhost" \ --from-literal=N8N_PORT="5678" \ --from-literal=N8N_PROTOCOL="http" ``` **Back up the encryption key immediately:** ``` oc get secret n8n-enterprise-secrets -n $NAMESPACE \ -o jsonpath='{.data.N8N_ENCRYPTION_KEY}' | base64 --decode ``` Store that value somewhere safe. In the commands below, replace `YourStrongPassword123` and `MinioStrongPassword123` with the passwords from the earlier steps. 
``` # Database password (must match what you set when installing PostgreSQL) oc create secret generic n8n-enterprise-db-secret \ --namespace $NAMESPACE \ --from-literal=password='YourStrongPassword123' # MinIO credentials oc create secret generic n8n-minio-secret \ --namespace $NAMESPACE \ --from-literal=root-password='MinioStrongPassword123' ``` #### Create values file Create `n8n-multimain-ocp-values.yaml`. Replace the **3 placeholder values** marked `# <-- REPLACE`: ``` nano n8n-multimain-ocp-values.yaml ``` ``` # n8n-multimain-ocp-values.yaml # Multi-instance queue mode for OpenShift Local (CRC). # Uses in-cluster PostgreSQL, Redis, and MinIO instead of AWS services. # Requires Enterprise license. # --- Enterprise license --- license: enabled: true activationKey: "your-enterprise-license-key-here" # <-- REPLACE # --- Multi-main: 2 replicas (reduced for local resources) --- multiMain: enabled: true replicas: 2 # --- Queue mode: 2 worker pods --- queueMode: enabled: true workerReplicaCount: 2 workerConcurrency: 5 # --- Webhook processors --- webhookProcessor: enabled: true replicaCount: 1 disableProductionWebhooksOnMainProcess: true # --- PostgreSQL (in-cluster) --- database: type: postgresdb useExternal: true host: "postgresql.YOUR_NAMESPACE.svc.cluster.local" # <-- REPLACE YOUR_NAMESPACE port: 5432 database: n8n_enterprise schema: "public" user: n8n passwordSecret: name: "n8n-enterprise-db-secret" key: "password" # --- Redis (in-cluster, no TLS) --- redis: enabled: true useExternal: true host: "redis-master.YOUR_NAMESPACE.svc.cluster.local" # <-- REPLACE YOUR_NAMESPACE port: 6379 tls: false # --- MinIO (S3-compatible, in-cluster) --- s3: enabled: true bucket: name: "n8n-data" region: "us-east-1" host: "http://minio:9000" auth: autoDetect: false accessKeyId: "minioadmin" secretAccessKeySecret: name: "n8n-minio-secret" key: "root-password" storage: mode: "s3" availableModes: "filesystem,s3" forcePathStyle: true # --- Service account --- serviceAccount: create: 
true name: n8n ``` Save and exit nano (`Ctrl+O`, `Ctrl+X`). **Before deploying**, replace the two `YOUR_NAMESPACE` placeholders with your actual namespace value: ``` # Check your namespace value echo $NAMESPACE # Replace in the file (this edits it automatically) sed -i "s/YOUR_NAMESPACE/$NAMESPACE/g" n8n-multimain-ocp-values.yaml ``` Verify the replacements: ``` grep "svc.cluster.local" n8n-multimain-ocp-values.yaml ``` Both lines should show your actual namespace name, not `YOUR_NAMESPACE`. #### Deploy n8n If you didn't patch the chart previously, pull and patch it now: ``` helm pull oci://ghcr.io/n8n-io/n8n-helm-chart/n8n --version 1.0.3 --untar sed -i '/seccompProfile:/d; /type: RuntimeDefault/d' ~/n8n/templates/deployment-main.yaml grep -n "seccomp\|RuntimeDefault" ~/n8n/templates/deployment-main.yaml # should return nothing ``` Install from the patched chart: ``` helm install n8n ~/n8n/ \ --namespace $NAMESPACE \ --values n8n-multimain-ocp-values.yaml \ --wait \ --timeout 15m ``` #### Create a route for external access In OpenShift, a **Route** exposes a service to the outside world. It's the equivalent of a Kubernetes Ingress or LoadBalancer, and requires no extra controller: ``` oc expose svc/n8n-main -n $NAMESPACE ``` Get the URL: ``` export ROUTE=$(oc get route n8n-main -n $NAMESPACE -o jsonpath='{.spec.host}') echo "n8n URL: http://$ROUTE" ``` The URL will look like: `http://n8n-main-n8n-20260306.apps-crc.testing` #### Update the host secret n8n needs to know its public URL. 
Update the secret with the Route hostname, then restart the pods: ``` ENCRYPTION_KEY=$(oc get secret n8n-enterprise-secrets -n $NAMESPACE \ -o jsonpath='{.data.N8N_ENCRYPTION_KEY}' | base64 --decode) oc create secret generic n8n-enterprise-secrets \ --namespace $NAMESPACE \ --from-literal=N8N_ENCRYPTION_KEY="$ENCRYPTION_KEY" \ --from-literal=N8N_HOST="$ROUTE" \ --from-literal=N8N_PORT="5678" \ --from-literal=N8N_PROTOCOL="http" \ --dry-run=client -o yaml | oc apply -f - oc rollout restart deployment -n $NAMESPACE ``` Wait for the rollout to complete: ``` oc rollout status deployment/n8n-main -n $NAMESPACE ``` #### Verify all pods are running ``` oc get pods -n $NAMESPACE ``` Expected (all `Running`): ``` NAME READY STATUS RESTARTS AGE n8n-main-xxxx-aaaa 1/1 Running 0 5m n8n-main-xxxx-bbbb 1/1 Running 0 5m n8n-worker-xxxx-aaaa 1/1 Running 0 5m n8n-worker-xxxx-bbbb 1/1 Running 0 5m n8n-webhook-processor-xxxx-aaaa 1/1 Running 0 5m postgresql-0 1/1 Running 0 15m redis-master-0 1/1 Running 0 15m minio-xxxx-xxxx 1/1 Running 0 15m ``` Open your browser to the URL printed above. **Multi-instance deployment complete.** ## Updating n8n To change configuration or move to a new chart version, pull and re-patch the new chart, then upgrade: ``` # Remove the old local chart copy rm -rf ~/n8n/ # Pull and patch the new version helm pull oci://ghcr.io/n8n-io/n8n-helm-chart/n8n --version <version> --untar sed -i '/seccompProfile:/d; /type: RuntimeDefault/d' ~/n8n/templates/deployment-main.yaml # Standalone helm upgrade n8n ~/n8n/ \ --namespace $NAMESPACE \ --values n8n-standalone-values.yaml # Multi-instance helm upgrade n8n ~/n8n/ \ --namespace $NAMESPACE \ --values n8n-multimain-ocp-values.yaml ``` ## Stopping and resuming CRC CRC doesn't need to be deleted between sessions.
You can stop and restart it: ``` # Stop the cluster (saves state) crc stop # Start it again later crc start ``` After restarting, re-run: ``` eval $(crc oc-env) export NAMESPACE=n8n-YYYYMMDD # use your original date oc login -u kubeadmin -p https://api.crc.testing:6443 ``` ## Troubleshooting ### `crc setup` fails with “libvirt not found” ``` sudo apt install -y qemu-kvm libvirt-daemon-system libvirt-clients sudo systemctl start libvirtd ``` Then re-run `crc setup`. ### `crc start` fails with “insufficient memory” CRC requires at least 9 GB of free RAM. Close other applications and try again. If you [followed instructions for configuring CRC memory](#configure-crc-memory-and-start-the-cluster), CRC is configured to use 14 GB. ### n8n pod stuck in `Pending` or never created SCC error Check events for the error: ``` oc get events -n $NAMESPACE --sort-by='.lastTimestamp' | tail -20 ``` If you see `unable to validate against any security context constraint` or `seccomp may not be set`, the chart’s hard coded `seccompProfile: RuntimeDefault` is being rejected. OpenShift 4.14+ converts this to a deprecated alpha annotation that admission rejects even when `anyuid` SCC is granted. **1. Grant anyuid using the explicit form** (the `-z` shorthand can silently fail): ``` # For standalone oc adm policy add-scc-to-user anyuid \ system:serviceaccount:$NAMESPACE:n8n # For multi-instance oc adm policy add-scc-to-user anyuid \ system:serviceaccount:$NAMESPACE:n8n-enterprise ``` Verify: run `oc get rolebindings -n $NAMESPACE`. You should see a binding for `system:openshift:scc:anyuid`. **2. Pull the chart locally and remove the `seccompProfile` lines:** ``` helm pull oci://ghcr.io/n8n-io/n8n-helm-chart/n8n --version 1.0.3 --untar sed -i '/seccompProfile:/d; /type: RuntimeDefault/d' ~/n8n/templates/deployment-main.yaml # Confirm they're gone (should return no output) grep -n "seccomp\|RuntimeDefault" ~/n8n/templates/deployment-main.yaml ``` **3. 
Uninstall and reinstall from the patched chart:** ``` helm uninstall n8n -n $NAMESPACE helm install n8n ~/n8n/ \ --namespace $NAMESPACE \ --values n8n-standalone-values.yaml \ --wait \ --timeout 10m ``` ### Route URL returns “Application not available” The pods may still be starting. Check: ``` oc get pods -n $NAMESPACE oc rollout status deployment/n8n-main -n $NAMESPACE ``` Also confirm the Route exists: ``` oc get route -n $NAMESPACE ``` ### n8n pod stuck in `Pending` with `Insufficient memory` The CRC node doesn’t have enough free memory to schedule the pod. **Fix:** Increase CRC’s VM memory and restart: ``` crc stop crc config set memory 14336 crc start ``` After CRC restarts, the pod should schedule automatically. If the pod is still pending after a few minutes, delete it to force a reschedule: ``` oc delete pod -n $NAMESPACE -l app.kubernetes.io/component=main ``` If your machine can’t spare 14 GB, you can also lower the pod’s memory request in `n8n-standalone-values.yaml`: ``` resources: main: requests: memory: 256Mi ``` Then upgrade: `helm upgrade n8n ~/n8n/ -n $NAMESPACE -f n8n-standalone-values.yaml` ### DNS not resolving `.apps-crc.testing` or `api.crc.testing` On Ubuntu, CRC configures DNS automatically. If it fails, restart NetworkManager: ``` sudo systemctl restart NetworkManager ``` If still broken, add entries manually so the CRC hostnames resolve to `127.0.0.1` (CRC routes traffic through the local host): ``` sudo tee -a /etc/hosts <<'EOF' 127.0.0.1 api.crc.testing 127.0.0.1 console-openshift-console.apps-crc.testing EOF ``` Afterwards, the API should be reachable at `https://api.crc.testing:6443`. ### Check cluster status ``` crc status ``` ### Open the OpenShift web console ``` crc console ``` Log in with `kubeadmin` / your password to see a graphical view of everything running.
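Throughout this guide, the encryption key is backed up by base64-decoding a secret field. The round-trip can be sanity-checked locally without a cluster; this is a self-contained sketch, not an n8n or OpenShift command, and the key it generates is purely illustrative:

```shell
# Kubernetes stores secret values base64-encoded, and
# `oc get secret -o jsonpath` returns that encoding.
# Generate a stand-in key locally for illustration.
KEY=$(openssl rand -hex 32)

# What the cluster would store:
ENCODED=$(printf '%s' "$KEY" | base64 -w0)

# What the backup command recovers:
DECODED=$(printf '%s' "$ENCODED" | base64 --decode)

[ "$KEY" = "$DECODED" ] && echo "round-trip OK"
```

If the decoded value ever differs from what you stored, check for stray newlines: `printf '%s'` (rather than `echo`) avoids appending one before encoding.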
### Things to save | Item | Why it matters | | ------------------------------- | --------------------------------------------- | | `kubeadmin` password | Log in to the cluster | | n8n encryption key | Lose this = all stored credentials unreadable | | `n8n-standalone-values.yaml` | Required for `helm upgrade` | | `n8n-multimain-ocp-values.yaml` | Required for `helm upgrade` | | MinIO root password | Access the MinIO console | | PostgreSQL password | Database access | ## Next steps - Learn more about [configuring](../../../configuration/environment-variables/) and [scaling](../../../scaling/overview/) n8n. - Or explore using n8n: try the [Quickstarts](../../../../try-it-out/). # Logging in n8n Logging is an important feature for debugging. n8n uses the [winston](https://www.npmjs.com/package/winston) logging library. Log streaming n8n Self-hosted Enterprise tier includes [Log streaming](../../../log-streaming/), in addition to the logging options described in this document. ## Setup To set up logging in n8n, you need to set the following environment variables (you can also set the values in the [configuration file](../../configuration/environment-variables/)): | Setting in the configuration file | Using environment variables | Description | | --------------------------------- | --------------------------- | ----------- | | n8n.log.level | N8N_LOG_LEVEL | The log output level. The available options (from lowest to highest level) are `error`, `warn`, `info`, and `debug`. The default value is `info`. You can learn more about these options [here](#log-levels). | | n8n.log.output | N8N_LOG_OUTPUT | Where to output logs. The available options are `console` and `file`. Multiple values can be used separated by a comma (`,`). `console` is used by default.
| | n8n.log.file.location | N8N_LOG_FILE_LOCATION | The log file location, used only if log output is set to `file`. By default, `/logs/n8n.log` is used. | | n8n.log.file.fileSizeMax | N8N_LOG_FILE_SIZE_MAX | The maximum size (in MB) for each log file. By default, n8n uses 16 MB. | | n8n.log.file.fileCountMax | N8N_LOG_FILE_COUNT_MAX | The maximum number of log files to keep. The default value is 100. This value should be set when using workers. | ``` # Set the logging level to 'debug' export N8N_LOG_LEVEL=debug # Set log output to both console and a log file export N8N_LOG_OUTPUT=console,file # Set a save location for the log file export N8N_LOG_FILE_LOCATION=/home/jim/n8n/logs/n8n.log # Set a 50 MB maximum size for each log file export N8N_LOG_FILE_SIZE_MAX=50 # Set 60 as the maximum number of log files to be kept export N8N_LOG_FILE_COUNT_MAX=60 ``` ### Log levels n8n uses standard log levels to report: - `silent`: outputs nothing at all - `error`: outputs only errors and nothing else - `warn`: outputs errors and warning messages - `info`: contains useful information about progress - `debug`: the most verbose output. n8n outputs a lot of information to help you debug issues. ## Development During development, adding log messages is a good practice. It assists in debugging errors. To configure logging for development, follow the guide below. ### Implementation details n8n uses the `LoggerProxy` class, located in the `workflow` package. Calling `LoggerProxy.init()` with an instance of `Logger` initializes the class before use. Initialization happens only once. The [`start.ts`](https://github.com/n8n-io/n8n/blob/master/packages/cli/src/commands/start.ts) file already does this for you. If you are creating a new command from scratch, you need to initialize the `LoggerProxy` class.
Once the `Logger` implementation gets created in the `cli` package, it can be obtained by calling the `getInstance` convenience method from the exported module. Check the [start.ts](https://github.com/n8n-io/n8n/blob/master/packages/cli/src/commands/start.ts) file to learn more about how this process works. ### Adding logs Once the `LoggerProxy` class gets initialized in the project, you can import it to any other file and add logs. Convenience methods are provided for all logging levels, so new logs can be added whenever needed using the format `Logger.<level>('<message>', ...meta)`, where `meta` represents any additional properties desired beyond `message`. In the example below, we use the standard log levels described [above](#log-levels). The `message` argument is a string, and `meta` is a data object. ``` // Import the LoggerProxy, renamed to Logger for convenience import { LoggerProxy as Logger } from 'n8n-workflow'; // Info-level logging of a trigger function, with workflow name and workflow ID as additional metadata properties Logger.info(`Polling trigger initiated for workflow "${workflow.name}"`, {workflowName: workflow.name, workflowId: workflow.id}); ``` When creating new loggers, some useful standards to keep in mind are: - Craft log messages to be as human-readable as possible. For example, always wrap names in quotes. - Duplicating information in the log message and metadata, like workflow name in the above example, can be useful as messages are easier to search and metadata enables easier filtering. - Include multiple IDs (for example, `executionId`, `workflowId`, and `sessionId`) throughout all logs. - Use node types instead of node names (or both) as this is more consistent, and so easier to search. ## Front-end logs As of now, front-end logs aren't available. Using `Logger` or `LoggerProxy` would yield errors in the `editor-ui` package. This functionality will get implemented in future versions.
# Monitoring There are three API endpoints you can call to check the status of your instance: `/healthz`, `/healthz/readiness`, and `/metrics`. ## healthz and healthz/readiness The `/healthz` endpoint returns a standard HTTP status code. 200 indicates the instance is reachable. It doesn't indicate DB status. It's available for both self-hosted and Cloud users. Access the endpoint: ``` /healthz ``` The `/healthz/readiness` endpoint is similar to the `/healthz` endpoint, but it returns an HTTP status code of 200 if the DB is connected and migrated, and therefore the instance is ready to accept traffic. Access the endpoint: ``` /healthz/readiness ``` Customizing health check endpoints You can customize the health check endpoint path using the [`N8N_ENDPOINT_HEALTH`](../../configuration/environment-variables/endpoints/) environment variable. ## metrics The `/metrics` endpoint provides more detailed information about the current status of the instance. Access the endpoint: ``` /metrics ``` Feature availability The `/metrics` endpoint isn't available on n8n Cloud. ## Enable metrics and health checks for self-hosted n8n The `/metrics` endpoint is disabled by default. The health endpoint is always enabled on the main n8n server. For worker servers in [queue mode](../../scaling/queue-mode/), the health endpoint is disabled by default. To enable them, configure your n8n instance: ``` # metrics N8N_METRICS=true # healthz QUEUE_HEALTH_CHECK_ACTIVE=true ``` Refer to [Configuration methods](../../configuration/configuration-methods/) for more information on how to configure your instance using environment variables. # OpenTelemetry tracing This feature is still under development. - Initially available from version 2.19.0 - OpenTelemetry-formatted metrics are coming soon n8n can emit [OpenTelemetry](https://opentelemetry.io/) traces for workflow and node executions.
Use these traces to monitor execution latency, debug failures, and track requests across services in your observability stack. Feature availability OpenTelemetry workflow tracing is only available on self-hosted n8n. ## What you get When you turn on tracing, n8n exports two kinds of spans for each execution: - **`workflow.execute`**: One span per workflow execution. It records the workflow ID, name, version, node count, execution mode, status, and any error type. - **`node.execute`**: One span per node execution, nested inside its workflow span. It records the node ID, name, type, version, and the number of input and output items. Each span includes resource attributes that identify the n8n instance: - `service.name` (default `n8n`) - `service.version` (the n8n version) - `n8n.instance.id` - `n8n.instance.role` (for example, `main`, `worker`, or `webhook`) n8n also handles trace context propagation: - **Inbound**: If a webhook request includes a [W3C `traceparent` header](https://www.w3.org/TR/trace-context/), n8n uses it as the parent for the workflow span. This links the n8n workflow trace to the upstream caller. - **Outbound**: HTTP Request nodes (and other nodes that use the n8n HTTP helpers) can inject a `traceparent` header into outbound requests. Downstream services that support W3C trace context can therefore continue the trace. - **Sub-workflows**: A sub-workflow's span uses the parent workflow's span as its parent. - **Resumed workflows**: When a workflow resumes after a wait, the new span links back to the previous span using a span link. ## Enable tracing Set the following environment variables on each n8n instance where you want workflow tracing enabled (main, workers, and webhook processors): ``` export N8N_OTEL_ENABLED=true export N8N_OTEL_EXPORTER_OTLP_ENDPOINT=http://<collector-host>:4318 ``` Restart n8n. The instance starts exporting spans over OTLP HTTP using the Protobuf encoding. n8n appends `/v1/traces` to the endpoint by default.
Point `N8N_OTEL_EXPORTER_OTLP_ENDPOINT` at the base URL of your collector, not the traces path. If your collector needs authentication, set `N8N_OTEL_EXPORTER_OTLP_HEADERS` to a comma-separated list of `key=value` pairs: ``` export N8N_OTEL_EXPORTER_OTLP_HEADERS="authorization=Bearer <token>,x-tenant=acme" # For added protection, use the _FILE suffix to load the value from a file when it contains a token: export N8N_OTEL_EXPORTER_OTLP_HEADERS_FILE=/mnt/otel-headers ``` For the full list of supported variables, refer to [OpenTelemetry environment variables](../../configuration/environment-variables/opentelemetry/). Queue mode In [queue mode](../../scaling/queue-mode/), the OpenTelemetry variables must be set on all instances. Trace context is propagated between instances. ## Sampling By default, n8n exports every trace. To reduce volume in busy instances, set `N8N_OTEL_TRACES_SAMPLE_RATE` to a value between `0` and `1`: ``` # Export 10% of traces export N8N_OTEL_TRACES_SAMPLE_RATE=0.1 ``` n8n uses a trace ID ratio sampler, so the same trace ID is either fully sampled or fully dropped across all spans in the trace. n8n outputs a trace for every workflow execution, including published workflows, unpublished workflows, and test executions. A toggle to track only published workflows will be available in a future release. ## Reduce span volume Each node in a workflow produces its own span. For workflows with lots of nodes, this can produce more data than you need. To export only workflow-level spans, set: ``` export N8N_OTEL_TRACES_INCLUDE_NODE_SPANS=false ``` To stop n8n from injecting `traceparent` headers into outbound HTTP requests, set: ``` export N8N_OTEL_TRACES_INJECT_OUTBOUND=false ``` ## Add custom attributes to node spans If you're [building a custom node](../../../integrations/creating-nodes/overview/), you can attach custom key-value pairs to the node's span.
Call `setMetadata` from the node's `execute` method: ``` async execute(this: IExecuteFunctions): Promise<INodeExecutionData[][]> { this.setMetadata({ tracing: { 'llm.model': 'gpt-4o', 'llm.token.input': 1500, 'llm.token.output': 340, }, }); return [this.getInputData()]; } ``` n8n prefixes each key with `n8n.node.custom.` on the exported span. Values must be strings, numbers, or booleans. This API isn't available from the Code node. It's intended for node authors who want to enrich spans with domain-specific data. ## Try it out with Jaeger You can send traces to a local [Jaeger](https://www.jaegertracing.io/) instance to see them in action. 1. Save the following as `docker-compose.yml`: ``` services: jaeger: image: jaegertracing/jaeger:latest ports: - "16686:16686" # UI - "4317:4317" # OTLP gRPC - "4318:4318" # OTLP HTTP ``` 2. Start Jaeger: ``` docker compose up -d ``` 3. Start n8n with tracing turned on and pointed at Jaeger. For information about starting n8n, refer to the [n8n contributing guide](https://github.com/n8n-io/n8n/blob/master/CONTRIBUTING.md): ``` N8N_OTEL_ENABLED=true N8N_OTEL_EXPORTER_OTLP_ENDPOINT=http://127.0.0.1:4318 n8n start ``` 4. Run a workflow, then open the Jaeger UI at `http://localhost:16686`. Select "n8n" as the service and click "Find traces" to see the OpenTelemetry traces emitted by n8n. ## Span attributes Workflow and node spans include the following n8n-specific attributes. ### Workflow span (`workflow.execute`) | Attribute | Description | | -------------------------- | ---------------------------------------------------------------------- | | `n8n.workflow.id` | Workflow ID. | | `n8n.workflow.name` | Workflow name. | | `n8n.workflow.version_id` | Workflow version ID. | | `n8n.workflow.node_count` | Number of nodes in the workflow. | | `n8n.execution.id` | Execution ID. | | `n8n.execution.mode` | Execution mode (for example, `manual`, `webhook`, `trigger`, `retry`). | | `n8n.execution.status` | Final execution status.
| | `n8n.execution.is_retry` | `true` if the execution is a retry. | | `n8n.execution.retry_of` | The original execution ID, when the execution is a retry. | | `n8n.execution.error_type` | Error class name, set when the execution fails. | | `n8n.continuation.reason` | Set on a span link when the workflow resumes after a wait. | ### Node span (`node.execute`) | Attribute | Description | | ----------------------------- | -------------------------------------------------------------------------------------- | | `n8n.node.id` | Node ID. | | `n8n.node.name` | Node name. | | `n8n.node.type` | Node type (for example, `n8n-nodes-base.httpRequest`). | | `n8n.node.type_version` | Node type version. | | `n8n.node.items.input` | Number of input items the node received. | | `n8n.node.items.output` | Number of output items the node produced. | | `n8n.node.termination_reason` | Why a node span ended without a normal completion (for example, `workflow_cancelled`). | | `n8n.node.custom.` | Custom attributes set through `metadata.tracing` in the node output. | When a node fails, n8n records an `exception` event on the span with the standard OpenTelemetry exception attributes (`exception.type`, `exception.message`, `exception.stacktrace`). ## Troubleshooting ### No traces appear in your backend If n8n can't reach the OTLP endpoint at startup, it logs an error: ``` Failed to connect to OpenTelemetry OTLP endpoint during startup ``` Check that: - `N8N_OTEL_ENABLED` is set to `true`. - `N8N_OTEL_EXPORTER_OTLP_ENDPOINT` points at the base URL of the collector (not the `/v1/traces` path). - The collector is reachable from the n8n container or host. - Any required `N8N_OTEL_EXPORTER_OTLP_HEADERS` (such as authentication tokens) are set. n8n logs OpenTelemetry diagnostics at `warn` level by default. Set `N8N_LOG_LEVEL=debug` to see more detail. ### Worker traces are missing parent context In queue mode, workers read the parent trace context from the database. 
If you only set the OpenTelemetry environment variables on the main instance, worker spans won't link to the parent workflow trace. Set the same variables on every instance type. ## Related resources - [OpenTelemetry environment variables](../../configuration/environment-variables/opentelemetry/) - [W3C Trace Context specification](https://www.w3.org/TR/trace-context/) - [OpenTelemetry Collector documentation](https://opentelemetry.io/docs/collector/) - [Logging in n8n](../logging/) - [Monitoring](../monitoring/) # OEM deployment OEM agreement required OEM deployment of n8n requires a separate commercial agreement with n8n. [Contact n8n](mailto:license@n8n.io) for more information. n8n's OEM deployment option lets you embed and surface n8n's interface inside your own product's UI. This allows your users to build workflows, configure connections, and run workflow automation without leaving your product. n8n branding is required as part of an OEM integration. This is distinct from [using n8n as a backend](../), where workflows execute behind the scenes and end users never see n8n. In that model, your product calls n8n using a webhook or the [API](../../api/) to trigger workflows, and n8n behaves like any other self-hosted service in your infrastructure - your users never see any n8n UI. This is available on all paid plans under the standard license, with no separate agreement needed. OEM deployment is only necessary when you want your users to interact with the n8n editor directly. ## What's covered - [Prerequisites](prerequisites/): Guidance on CPU, memory, and database requirements for planning your deployment. - [Managing workflows](managing-workflows/): Patterns for managing workflows across multiple users or organizations within an embedded deployment. - [Workflow templates](../configuration/configuration-examples/custom-templates/): Configure a custom workflow template library for your users. 
- [Credential overwrites](../configuration/credential-overwrites/): Set OAuth credentials globally so your users can authenticate without seeing or entering client secrets. ## Support Contact [n8n support](mailto:support@n8n.io) using the email provided when you signed your OEM agreement. The [community forum](https://community.n8n.io/) is also available for general questions. # Managing workflows OEM agreement required OEM deployment of n8n requires a separate commercial agreement with n8n. [Contact n8n](mailto:license@n8n.io) for more information. When managing an n8n OEM deployment spanning teams or organizations, you will likely need to run the same (or similar) workflows for multiple users. There are two options for doing so: | Solution | Pros | Cons | | --------------------------------------------------------------------- | ------------------------------------------------------------------ | ---------------------------------------------- | | Create a workflow for each user | No limitation on how the workflow starts (can use any trigger). | Requires managing multiple workflows. | | Create a single workflow, and pass it user credentials when executing | Simplified workflow management (only need to change one workflow). | To run the workflow, your product must call it. | Warning The APIs referenced in this document are subject to change at any time. Be sure to check for continued functionality with each version upgrade. ## Workflow per user There are three general steps to follow: - Obtain the credentials for each user, and any additional parameters that may be required based on the workflow. - Create the [n8n credentials](../../../glossary/#credential-n8n) for this user. - Create the workflow. ### 1. Obtain user credentials Here you need to capture all credentials for any node/service this user must authenticate with, along with any additional parameters required for the particular workflow.
The credentials and any parameters needed will depend on your workflow and what you are trying to do. ### 2. Create user credentials After all relevant credential details have been obtained, you can proceed to create the relevant service credentials in n8n. This can be done using the Editor UI or API call. #### Using the Editor UI 1. From the menu select **Credentials** > **New**. 1. Use the drop-down to select the **Credential type** to create, for example *Airtable*. 1. In the **Create New Credentials** modal, enter the corresponding credentials details for the user, and select the nodes that will have access to these credentials. 1. Click **Create** to finish and save. #### Using the API The frontend API used by the Editor UI can also be called to achieve the same result. The API endpoint is in the format: `https:///rest/credentials`. For example, to create the credentials in the Editor UI example above, the request would be: ``` POST https:///rest/credentials ``` With the request body: ``` { "name":"MyAirtable", "type":"airtableApi", "nodesAccess":[ { "nodeType":"n8n-nodes-base.airtable" } ], "data":{ "apiKey":"q12we34r5t67yu" } } ``` The response will contain the ID of the new credentials, which you will use when creating the workflow for this user: ``` { "data":{ "name":"MyAirtable", "type":"airtableApi", "data":{ "apiKey":"q12we34r5t67yu" }, "nodesAccess":[ { "nodeType":"n8n-nodes-base.airtable", "date":"2021-09-10T07:41:27.770Z" } ], "id":"29", "createdAt":"2021-09-10T07:41:27.777Z", "updatedAt":"2021-09-10T07:41:27.777Z" } } ``` ### 3. Create the workflow Best practice is to have a “base” workflow that you then duplicate and customize for each new user with their credentials (and any other details). You can duplicate and customize your template workflow using either the Editor UI or API call. #### Using the Editor UI 1. From the menu select **Workflows** > **Open** to open the template workflow to be duplicated. 1. 
Select **Workflows** > **Duplicate**, then enter a name for this new workflow and click **Save**. 1. Update all relevant nodes to use the credentials for this user (created above). 1. **Save** this workflow and set it to **Active** using the toggle in the top-right corner. #### Using the API 1. Fetch the JSON of the template workflow using the endpoint: `https:///rest/workflows/` ``` GET https:///rest/workflows/1012 ``` The response will contain the JSON data of the selected workflow: ``` { "data": { "id": "1012", "name": "Nathan's Workflow", "active": false, "nodes": [ { "parameters": {}, "name": "Start", "type": "n8n-nodes-base.start", "typeVersion": 1, "position": [ 130, 640 ] }, { "parameters": { "authentication": "headerAuth", "url": "https://internal.users.n8n.cloud/webhook/custom-erp", "options": { "splitIntoItems": true }, "headerParametersUi": { "parameter": [ { "name": "unique_id", "value": "recLhLYQbzNSFtHNq" } ] } }, "name": "HTTP Request", "type": "n8n-nodes-base.httpRequest", "typeVersion": 1, "position": [ 430, 300 ], "credentials": { "httpHeaderAuth": "beginner_course" } }, { "parameters": { "operation": "append", "application": "appKBGQfbm6NfW6bv", "table": "processingOrders", "options": {} }, "name": "Airtable", "type": "n8n-nodes-base.airtable", "typeVersion": 1, "position": [ 990, 210 ], "credentials": { "airtableApi": "Airtable" } }, { "parameters": { "conditions": { "string": [ { "value1": "={{$json[\"orderStatus\"]}}", "value2": "processing" } ] } }, "name": "IF", "type": "n8n-nodes-base.if", "typeVersion": 1, "position": [ 630, 300 ] }, { "parameters": { "keepOnlySet": true, "values": { "number": [ { "name": "=orderId", "value": "={{$json[\"orderID\"]}}" } ], "string": [ { "name": "employeeName", "value": "={{$json[\"employeeName\"]}}" } ] }, "options": {} }, "name": "Set", "type": "n8n-nodes-base.set", "typeVersion": 1, "position": [ 800, 210 ] }, { "parameters": { "functionCode": "let totalBooked = items.length;\nlet bookedSum = 
0;\n\nfor(let i=0; i < items.length; i++) {\n bookedSum = bookedSum + items[i].json.orderPrice;\n}\nreturn [{json:{totalBooked, bookedSum}}]\n" }, "name": "Function", "type": "n8n-nodes-base.function", "typeVersion": 1, "position": [ 800, 400 ] }, { "parameters": { "webhookUri": "https://discord.com/api/webhooks/865213348202151968/oD5_WPDQwtr22Vjd_82QP3-_4b_lGhAeM7RynQ8Js5DzyXrQEnj0zeAQIA6fki1JLtXE", "text": "=This week we have {{$json[\"totalBooked\"]}} booked orders with a total value of {{$json[\"bookedSum\"]}}. My Unique ID: {{ $(\"HTTP Request\").params.headerParameters.parameters[0].value }}" }, "name": "Discord", "type": "n8n-nodes-base.discord", "typeVersion": 1, "position": [ 1000, 400 ] }, { "parameters": { "triggerTimes": { "item": [ { "mode": "everyWeek", "hour": 9 } ] } }, "name": "Cron", "type": "n8n-nodes-base.cron", "typeVersion": 1, "position": [ 220, 300 ] } ], "connections": { "HTTP Request": { "main": [ [ { "node": "IF", "type": "main", "index": 0 } ] ] }, "Start": { "main": [ [] ] }, "IF": { "main": [ [ { "node": "Set", "type": "main", "index": 0 } ], [ { "node": "Function", "type": "main", "index": 0 } ] ] }, "Set": { "main": [ [ { "node": "Airtable", "type": "main", "index": 0 } ] ] }, "Function": { "main": [ [ { "node": "Discord", "type": "main", "index": 0 } ] ] }, "Cron": { "main": [ [ { "node": "HTTP Request", "type": "main", "index": 0 } ] ] } }, "createdAt": "2021-07-16T11:15:46.066Z", "updatedAt": "2021-07-16T12:05:44.045Z", "settings": {}, "staticData": null, "tags": [] } } ``` 1. Save the returned JSON data and update any relevant credentials and fields for the new user. 1. Create a new workflow using the updated JSON as the request body at endpoint: `https:///rest/workflows` ``` POST https:///rest/workflows/ ``` The response will contain the ID of the new workflow, which you will use in the next step. 1. 
Lastly, publish the new workflow: ``` PATCH https:///rest/workflows/1012 ``` Passing the additional value `active` in your JSON payload: ``` // ... "active":true, "settings": {}, "staticData": null, "tags": [] ``` ## Single workflow There are four steps to follow to implement this method: - Obtain the credentials for each user, and any additional parameters that may be required based on the workflow. See [Obtain user credentials](#1-obtain-user-credentials) above. - Create the n8n credentials for this user. See [Create user credentials](#2-create-user-credentials) above. - Create the workflow. - Call the workflow as needed. ### Create the workflow The details and scope of this workflow will vary greatly according to the individual use case; however, there are a few design considerations to keep in mind: - This workflow must be triggered by a [Webhook](../../../integrations/builtin/core-nodes/n8n-nodes-base.webhook/) node. - The incoming webhook call must contain the user's credentials and any other workflow parameters required. - Each node where the user's credentials are needed should use an [expression](../../../data/expressions/) so that the node's credential field reads the credential provided in the webhook call. - Save and publish the workflow, ensuring the production URL is selected for the Webhook node. Refer to the [webhook node](../../../integrations/builtin/core-nodes/n8n-nodes-base.webhook/) for more information. ### Call the workflow For each new user, or for any existing user as needed, call the webhook defined as the workflow trigger and provide the necessary credentials (and any other workflow parameters). # Prerequisites The requirements provided here are an example based on n8n Cloud and are for illustrative purposes only. Your requirements may vary depending on the number of users, workflows, and executions. Contact n8n for more information.
| Component | Sizing | Supported | | --------- | ---------------------------------------- | --------------------------- | | CPU/vCPU | Minimum 10 millicores, scaling as needed | Any public or private cloud | | Database | 512 MB - 4 GB SSD | SQLite or PostgreSQL | | Memory | 320 MB - 2 GB | | ## CPU considerations n8n isn't CPU-intensive, so even small instances (from providers such as AWS and GCP) should be enough for most use cases. Usually, memory requirements outweigh CPU requirements, so focus resources there when planning your infrastructure. ## Database considerations n8n uses its database to store [credentials](../../../glossary/#credential-n8n), past executions, and workflows. A core feature of n8n is the flexibility to choose a database. Each supported database has different advantages and disadvantages; weigh them and pick the one that best suits your needs. By default, n8n creates a SQLite database if no database exists at the given location. n8n recommends that every n8n instance have a dedicated database. This helps to prevent dependencies and potential performance degradation. If it isn't possible to provide a dedicated database for every n8n instance, n8n recommends making use of Postgres's schema feature. For Postgres, the database must already exist on the DB instance. The database user for the n8n process needs full permissions on all tables that they're using or creating. n8n creates and maintains the database schema. ### Best practices - SSD storage. - In containerized cloud environments, ensure that the volume is persisted and mounted when stopping/starting a container. If not, all data is lost. - If using Postgres, don't use the `tablePrefix` configuration option. It will be deprecated in the near future. - Pay attention to the changelog of new versions and consider reverting migrations before downgrading.
- Set up at least the basic database security and stability mechanisms such as IP allow lists and backups. ## Memory considerations An n8n instance doesn't typically require large amounts of available memory. For example, an idle n8n Cloud instance requires ~100 MB. It's the nature of your workflows and the data being processed that determines your memory requirements. For example, while most nodes just pass data to the next node in the workflow, the [Code node](../../../code/code-node/) creates a pre-processing and post-processing copy of the data. When dealing with large binary files, this can consume all available resources. ## Deployment recommendations See the [hosting documentation](../../installation/server-setups/) for detailed setup options. ### User data n8n recommends that you follow the same or similar practices used internally for n8n Cloud: Save user data using [Rook](https://rook.io/) and, if an n8n server goes down, a new instance starts on another machine using the same data. Due to this, you don't need to use backups except in case of a catastrophic failure, or when a user wants to reactivate their account within your prescribed retention period (two weeks for n8n Cloud). ### Backups n8n recommends creating nightly backups by attaching another container and copying all data to this second container. In this manner, RAM usage is negligible, and so doesn't impact the number of users you can place on the server. ### Restarting If your instance is down or restarting, missed executions (for example, from Cron or Webhook nodes) during this time aren't recoverable. If it's important for you to maintain 100% uptime, you need to put a proxy in front of n8n that caches the data. # Binary data Binary data is any file-type data, such as image files or documents, generated or processed during the execution of a workflow. ## Enable filesystem mode When handling binary data, n8n keeps the data in memory by default.
This can cause crashes when working with large files. To avoid this, change the `N8N_DEFAULT_BINARY_DATA_MODE` [environment variable](../../configuration/environment-variables/binary-data/) to `filesystem`. This causes n8n to save data to disk, instead of using memory. If you're using queue mode, switch this to `database`. n8n doesn't support `filesystem` mode with queue mode. ## Binary data pruning n8n executes binary data pruning as part of execution data pruning. Refer to [Execution data | Enable executions pruning](../execution-data/#enable-executions-pruning) for details. If you configure multiple binary data modes, binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3, and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. Refer to [External storage](../external-storage/#usage) for details. # Self-hosted concurrency control Only for self-hosted n8n This document is for self-hosted concurrency control. Read [Cloud concurrency](../../../manage-cloud/concurrency/) to learn how concurrency works with n8n Cloud accounts. In regular mode, n8n doesn't limit how many production executions may run at the same time. This can lead to a scenario where too many concurrent executions thrash the event loop, causing performance degradation and unresponsiveness. To prevent this, you can set a concurrency limit for production executions in regular mode. Use this to control how many production executions run concurrently, and queue up any concurrent production executions over the limit. These executions remain in the queue until concurrency capacity frees up, and are then processed in FIFO order. Concurrency control is disabled by default. To enable it: ``` export N8N_CONCURRENCY_PRODUCTION_LIMIT=20 ``` Keep in mind: - Concurrency control applies only to production executions: those started from a webhook or [trigger](../../../glossary/#trigger-node-n8n) node. 
It doesn't apply to any other kinds, such as manual executions, sub-workflow executions, error executions, or executions started from the CLI. - You can't retry queued executions. Cancelling or deleting a queued execution also removes it from the queue. - On instance startup, n8n resumes queued executions up to the concurrency limit and re-enqueues the rest. - To monitor concurrency control, watch logs for executions being added to the queue and released. When you enable concurrency control, you can also view the number of active executions and the configured limit at the top of a project's or workflow's executions tab. ## Comparison to queue mode In queue mode, you can control how many jobs a worker may run concurrently using the [`--concurrency` flag](../queue-mode/#configure-worker-concurrency). Concurrency control in queue mode is a separate mechanism from concurrency control in regular mode, but the environment variable `N8N_CONCURRENCY_PRODUCTION_LIMIT` controls both of them. In queue mode, n8n takes the limit from this variable if set to a value other than `-1`, falling back to the `--concurrency` flag or its default. # Execution data Depending on your execution settings and volume, your n8n database can grow in size and run out of storage. To avoid this, n8n recommends that you don't save unnecessary data, and that you enable pruning of old execution data. To do this, configure the corresponding [environment variables](../../configuration/environment-variables/executions/). ## Reduce saved data Configuration at workflow level You can also configure these settings on an individual workflow basis using the [workflow settings](../../../workflows/settings/). You can select which executions data n8n saves. For example, you can save only executions that result in an `Error`.
``` # npm # Save executions ending in errors export EXECUTIONS_DATA_SAVE_ON_ERROR=all # Don't save successful executions export EXECUTIONS_DATA_SAVE_ON_SUCCESS=none # Don't save node progress for each execution export EXECUTIONS_DATA_SAVE_ON_PROGRESS=false # Don't save manually launched executions export EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false ``` ``` # Docker docker run -it --rm \ --name n8n \ -p 5678:5678 \ -e EXECUTIONS_DATA_SAVE_ON_ERROR=all \ -e EXECUTIONS_DATA_SAVE_ON_SUCCESS=none \ -e EXECUTIONS_DATA_SAVE_ON_PROGRESS=false \ -e EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false \ docker.n8n.io/n8nio/n8n ``` ``` # Docker Compose n8n: environment: - EXECUTIONS_DATA_SAVE_ON_ERROR=all - EXECUTIONS_DATA_SAVE_ON_SUCCESS=none - EXECUTIONS_DATA_SAVE_ON_PROGRESS=false - EXECUTIONS_DATA_SAVE_MANUAL_EXECUTIONS=false ``` ## Enable executions pruning Executions pruning deletes finished executions along with their execution data and binary data on a regular schedule. n8n enables pruning by default. For performance reasons, pruning first marks targets for deletion, and then later permanently removes them. n8n prunes executions when **either** of the following conditions occurs: - **Age**: The execution finished more than `EXECUTIONS_DATA_MAX_AGE` hours ago (default: 336 hours -> 14 days). - **Count**: The total number of executions exceeds `EXECUTIONS_DATA_PRUNE_MAX_COUNT` (default: 10,000). When this occurs, n8n deletes executions from oldest to newest. Keep in mind: - Executions with the `new`, `running`, or `waiting` status aren't eligible for pruning. - Annotated executions (for example, executions with tags or ratings) are never pruned. - Pruning honors a safety buffer period of `EXECUTIONS_DATA_HARD_DELETE_BUFFER` hours (default: 1h), to ensure recent data remains available while the user is building or debugging a workflow.
``` # Enable executions pruning export EXECUTIONS_DATA_PRUNE=true # How old (hours) a finished execution must be to qualify for soft-deletion export EXECUTIONS_DATA_MAX_AGE=168 # Max number of finished executions to keep. May not strictly prune back down to the exact max count. Set to `0` for unlimited. export EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000 ``` ``` # Docker docker run -it --rm \ --name n8n \ -p 5678:5678 \ -e EXECUTIONS_DATA_PRUNE=true \ -e EXECUTIONS_DATA_MAX_AGE=168 \ -e EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000 \ docker.n8n.io/n8nio/n8n ``` ``` # Docker Compose n8n: environment: - EXECUTIONS_DATA_PRUNE=true - EXECUTIONS_DATA_MAX_AGE=168 - EXECUTIONS_DATA_PRUNE_MAX_COUNT=50000 ``` SQLite If you run n8n using the default SQLite database, the disk space of any pruned data isn't automatically freed up but rather reused for future executions data. To free up this space, configure the `DB_SQLITE_VACUUM_ON_STARTUP` [environment variable](../../configuration/environment-variables/database/#sqlite) or manually run the [VACUUM](https://www.sqlite.org/lang_vacuum.html) operation. Binary data pruning Binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3, and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. This may change in the future. # External storage Feature availability - Available on Self-hosted Enterprise plans - If you want access to this feature on Cloud Enterprise, [contact n8n](https://n8n-community.typeform.com/to/y9X2YuGa). n8n can store binary data produced by workflow executions externally. This feature is useful to avoid relying on the filesystem for storing large amounts of binary data. n8n will introduce external storage for other data types in the future. ## Storing n8n's binary data in S3 n8n supports [AWS S3](https://docs.aws.amazon.com/AmazonS3/latest/userguide/Welcome.html) as an external store for binary data produced by workflow executions.
You can use other S3-compatible services like Cloudflare R2 and Backblaze B2, but n8n doesn't officially support these. Enterprise-tier feature You will need an [Enterprise license key](../../../license-key/) for external storage. If your license key expires and you remain on S3 mode, the instance will be able to read from, but not write to, the S3 bucket. ### Setup Create and configure a bucket following the [AWS documentation](https://docs.aws.amazon.com/AmazonS3/latest/userguide/creating-bucket.html). You can use the following policy, replacing `<bucket-name>` with the name of the bucket you created: ``` { "Version": "2012-10-17", "Statement": [ { "Sid": "VisualEditor0", "Effect": "Allow", "Action": ["s3:*"], "Resource": ["arn:aws:s3:::<bucket-name>", "arn:aws:s3:::<bucket-name>/*"] } ] } ``` Set a bucket-level lifecycle configuration so that S3 automatically deletes old binary data. n8n delegates pruning of binary data to S3, so setting a lifecycle configuration is required unless you want to preserve binary data indefinitely. Once you finish creating the bucket, you will have a host, bucket name, and region, and an access key ID and secret access key. You need to set them in n8n's environment: ``` export N8N_EXTERNAL_STORAGE_S3_HOST=... # example: s3.us-east-1.amazonaws.com export N8N_EXTERNAL_STORAGE_S3_BUCKET_NAME=... export N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION=... export N8N_EXTERNAL_STORAGE_S3_ACCESS_KEY=... export N8N_EXTERNAL_STORAGE_S3_ACCESS_SECRET=... ``` No region If your provider doesn't require a region, you can set `N8N_EXTERNAL_STORAGE_S3_BUCKET_REGION` to `'auto'`. Tell n8n to store binary data in S3: ``` export N8N_AVAILABLE_BINARY_DATA_MODES=filesystem,s3 export N8N_DEFAULT_BINARY_DATA_MODE=s3 ``` Auth autodetection To automatically detect credentials to authenticate your S3 calls, set `N8N_EXTERNAL_STORAGE_S3_AUTH_AUTO_DETECT` to `true`.
This will use the default [credential provider chain](https://docs.aws.amazon.com/sdk-for-javascript/v3/developer-guide/setting-credentials-node.html#credchain). Restart the server to load the new configuration. ### Usage After you enable S3, n8n writes and reads any new binary data to and from the S3 bucket. n8n writes binary data to your S3 bucket in this format: ``` workflows/{workflowId}/executions/{executionId}/binary_data/{binaryFileId} ``` n8n continues to read older binary data from the filesystem, as long as `filesystem` remains listed as an option in `N8N_AVAILABLE_BINARY_DATA_MODES`. If you store binary data in S3 and later switch to filesystem mode, the instance continues to read any data stored in S3, as long as `s3` remains listed in `N8N_AVAILABLE_BINARY_DATA_MODES` and your S3 credentials remain valid. Binary data pruning Binary data pruning operates on the active binary data mode. For example, if your instance stored data in S3, and you later switched to filesystem mode, n8n only prunes binary data in the filesystem. This may change in the future. # Memory-related errors n8n doesn't restrict the amount of data each node can fetch and process. While this gives you freedom, it can lead to errors when workflow executions require more memory than available. This page explains how to identify and avoid these errors. Only for self-hosted n8n This page describes memory-related errors when [self-hosting n8n](../../). Visit [Cloud data management](../../../manage-cloud/cloud-data-management/) to learn about memory limits for [n8n Cloud](../../../manage-cloud/overview/). ## Identifying out of memory situations n8n provides error messages that warn you in some out of memory situations, for example: **Execution stopped at this node (n8n may have run out of memory while executing it)**.
Error messages including **Problem running workflow**, **Connection Lost**, or **503 Service Temporarily Unavailable** suggest that an n8n instance has become unavailable. When self-hosting n8n, you may also see error messages such as **Allocation failed - JavaScript heap out of memory** in your server logs. On n8n Cloud, or when using n8n's Docker image, n8n restarts automatically when encountering such an issue. However, when running n8n with npm, you might need to restart it manually. ## Typical causes Such problems occur when a workflow execution requires more memory than is available to an n8n instance. Factors increasing the memory usage for a workflow execution include: - Amount of [JSON data](../../../data/data-structure/). - Size of binary data. - Number of nodes in a workflow. - Some nodes are memory-heavy: the [Code](../../../integrations/builtin/core-nodes/n8n-nodes-base.code/) node and the older Function node can increase memory consumption significantly. - Manual or automatic workflow executions: manual executions increase memory consumption as n8n makes a copy of the data for the frontend. - Additional workflows running at the same time. ## Avoiding out of memory situations When encountering an out of memory situation, there are two options: either increase the amount of memory available to n8n or reduce the memory consumption. ### Increase available memory When self-hosting n8n, increasing the amount of memory available to n8n means provisioning your n8n instance with more memory. This may incur additional costs with your hosting provider. On n8n Cloud, you need to upgrade to a larger plan. ### Reduce memory consumption This approach is more complex and means re-building the workflows causing the issue. This section provides some guidelines on how to reduce memory consumption. Not all suggestions are applicable to all workflows. - Split the data processed into smaller chunks.
For example, instead of fetching 10,000 rows with each execution, process 200 rows with each execution. - Avoid using the Code node where possible. - Avoid manual executions when processing larger amounts of data. - Split the workflow up into sub-workflows and ensure each sub-workflow returns a limited amount of data to its parent workflow. Splitting the workflow might seem counter-intuitive at first, as it usually requires adding at least two more nodes: the [Loop Over Items](../../../integrations/builtin/core-nodes/n8n-nodes-base.splitinbatches/) node to split up the items into smaller batches and the [Execute Workflow](../../../integrations/builtin/core-nodes/n8n-nodes-base.executeworkflow/) node to start the sub-workflow. However, as long as your sub-workflow does the heavy lifting for each batch and then returns only a small result set to the main workflow, this reduces memory consumption. This is because the sub-workflow only holds the data for the current batch in memory, after which the memory is free again. ### Increase old memory This applies when self-hosting n8n. When encountering **JavaScript heap out of memory** errors, it's often useful to allocate additional memory to the old memory section of the V8 JavaScript engine. To do this, set the appropriate [V8 option](https://nodejs.org/api/cli.html#--max-old-space-sizesize-in-megabytes) `--max-old-space-size=SIZE` either through the CLI or through the `NODE_OPTIONS` [environment variable](https://nodejs.org/api/cli.html#node_optionsoptions). For example, setting `NODE_OPTIONS="--max-old-space-size=4096"` before starting n8n allocates 4 GB to the V8 old memory space. # Scaling n8n When running n8n at scale, with a large number of users, workflows, or executions, you need to change your n8n configuration to ensure good performance. n8n can run in different [modes](../queue-mode/) depending on your needs. The `queue` mode provides the best scalability. Refer to [Queue mode](../queue-mode/) for configuration details. You can configure data saving and pruning to improve database performance.
Refer to [Execution data](../execution-data/) for details. # Performance and benchmarking n8n can handle up to 220 workflow executions per second on a single instance, with the ability to scale up further by adding more instances. This document outlines n8n's performance benchmarking. It describes the factors that affect performance, and includes two example benchmarks. ## Performance factors The performance of n8n depends on factors including: - The workflow type - The resources available to n8n - How you configure n8n's scaling options ## Run your own benchmarking To get an accurate estimate for your use case, run n8n's [benchmarking framework](https://github.com/n8n-io/n8n/tree/master/packages/%40n8n/benchmark). The repository contains more information about the benchmarking. ## Example: Single instance performance This test measures how response time increases as requests per second increase. It looks at the response time when calling the Webhook Trigger node. Setup: - Hardware: ECS c5a.large instance (4GB RAM) - n8n setup: Single n8n instance (running in main mode, with Postgres database) - Workflow: Webhook Trigger node, Edit Fields node This graph shows the percentage of requests to the Webhook Trigger node getting a response within 100 seconds, and how that varies with load. Under higher loads n8n usually still processes the data, but takes over 100s to respond. ## Example: Multi-instance performance This test measures how response time increases as requests per second increase. It looks at the response time when calling the Webhook Trigger node. 
Setup: - Hardware: seven ECS c5a.4xlarge instances (8GB RAM each) - n8n setup: two webhook instances, four worker instances, one database instance (MySQL), one main instance running n8n and Redis - Workflow: Webhook Trigger node, Edit Fields node - Multi-instance setups use [Queue mode](../queue-mode/) This graph shows the percentage of requests to the Webhook Trigger node getting a response within 100 seconds, and how that varies with load. Under higher loads n8n usually still processes the data, but takes over 100s to respond. # Queue mode You can run n8n in different modes depending on your needs. The queue mode provides the best scalability. Binary data storage n8n doesn't support queue mode with binary data storage in the filesystem. If your workflows need to persist binary data in queue mode, you can use [S3 external storage](../external-storage/). ## How it works When running in queue mode, you have multiple n8n instances set up, with one main instance receiving workflow information (such as triggers) and the worker instances performing the executions. Each worker is its own Node.js instance, running in `main` mode, but able to handle multiple simultaneous workflow executions due to their high IOPS (input-output operations per second). By using worker instances and running in queue mode, you can scale n8n up (by adding workers) and down (by removing workers) as needed to handle the workload. This is the process flow: 1. The main n8n instance handles timers and webhook calls, generating (but not running) a workflow execution. 1. It passes the execution ID to a message broker, [Redis](#start-redis), which maintains the queue of pending executions and allows the next available worker to pick them up. 1. A worker in the pool picks up a message from Redis. 1. The worker uses the execution ID to get workflow information from the database. 1. After completing the workflow execution, the worker: - Writes the results to the database.
- Posts to Redis, saying that the execution has finished. 1. Redis notifies the main instance. ## Configuring workers Workers are n8n instances that do the actual work. They receive information from the main n8n process about the workflows that need to be executed, execute the workflows, and update the status after each execution is complete. Per-process event log files If your workers share a writable filesystem, give each worker process a unique event log path. Refer to [Per-process event log files](../../../log-streaming/#per-process-event-log-files) for details. ### Set encryption key n8n automatically generates an encryption key upon first startup. You can also provide your own custom key using an [environment variable](../../configuration/environment-variables/) if desired. The encryption key of the main n8n instance must be shared with all worker and webhook processor nodes so that they can access credentials stored in the database. Set the encryption key for each worker node in a [configuration file](../../configuration/configuration-methods/) or by setting the corresponding environment variable: ``` export N8N_ENCRYPTION_KEY= ``` ### Set executions mode Database considerations n8n recommends using Postgres 13+. Running n8n with execution mode set to `queue` with an SQLite database isn't recommended. Set the environment variable `EXECUTIONS_MODE` to `queue` on the main instance and any workers using the following command. ``` export EXECUTIONS_MODE=queue ``` Alternatively, you can set `executions.mode` to `queue` in the [configuration file](../../configuration/environment-variables/). ### Start Redis Running Redis on a separate machine You can run Redis on a separate machine; just make sure it's accessible by the n8n instances.
To run Redis in a Docker container, run the following command to start a Redis instance: ``` docker run --name some-redis -p 6379:6379 -d redis ``` By default, Redis runs on `localhost` on port `6379` with no password. Based on your Redis configuration, set the following options for the main n8n process so it can connect to Redis. | Using configuration file | Using environment variables | Description | | --- | --- | --- | | `queue.bull.redis.host:localhost` | `QUEUE_BULL_REDIS_HOST=localhost` | By default, Redis runs on `localhost`. | | `queue.bull.redis.port:6379` | `QUEUE_BULL_REDIS_PORT=6379` | The default port is `6379`. If Redis is running on a different port, configure the value. | You can also set the following optional configurations: | Using configuration file | Using environment variables | Description | | --- | --- | --- | | `queue.bull.redis.username:USERNAME` | `QUEUE_BULL_REDIS_USERNAME` | By default, Redis doesn't require a username. If you're using a specific user, set this value. | | `queue.bull.redis.password:PASSWORD` | `QUEUE_BULL_REDIS_PASSWORD` | By default, Redis doesn't require a password. If you're using a password, set this value. | | `queue.bull.redis.db:0` | `QUEUE_BULL_REDIS_DB` | The default value is `0`. If you change this value, update the configuration. | | `queue.bull.redis.timeoutThreshold:10000ms` | `QUEUE_BULL_REDIS_TIMEOUT_THRESHOLD` | Tells n8n how long it should wait if Redis is unavailable before exiting. The default value is `10000` (ms).
| | `queue.bull.gracefulShutdownTimeout:30` | `N8N_GRACEFUL_SHUTDOWN_TIMEOUT` | A graceful shutdown timeout for workers to finish executing jobs before terminating the process. The default value is `30` seconds. | Now you can start your n8n instance, and it will connect to your Redis instance. ### Start workers You need to start worker processes to allow n8n to execute workflows. If you want to host workers on a separate machine, install n8n on the machine and make sure that it's connected to your Redis instance and the n8n database. Start worker processes by running the following command from the root directory: ``` ./packages/cli/bin/n8n worker ``` If you're using Docker, use the following command: ``` docker run --name n8n-queue -p 5679:5678 -e "EXECUTIONS_MODE=queue" docker.n8n.io/n8nio/n8n worker ``` You can set up multiple worker processes. Make sure that all the worker processes have access to Redis and the n8n database. #### Worker server Each worker process runs a server that exposes optional endpoints: - `/healthz`: returns whether the worker is up, if you enable the `QUEUE_HEALTH_CHECK_ACTIVE` environment variable - `/healthz/readiness`: returns whether the worker's DB and Redis connections are ready, if you enable the `QUEUE_HEALTH_CHECK_ACTIVE` environment variable - [credentials overwrite endpoint](../../configuration/credential-overwrites/) - [`/metrics`](../../configuration/configuration-examples/prometheus/) Customizing health check endpoints You can customize the health check endpoint path using the [`N8N_ENDPOINT_HEALTH`](../../configuration/environment-variables/endpoints/) environment variable. #### View running workers Feature availability - Available on Self-hosted Enterprise plans. - If you want access to this feature on Cloud Enterprise, [contact n8n](https://n8n-community.typeform.com/to/y9X2YuGa). You can view running workers and their performance metrics in n8n by selecting **Settings** > **Workers**.
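Putting the worker-setup steps above together, a worker launch can be sketched as follows. This is a minimal sketch, not a production recipe: the encryption key, Redis host, and container names are placeholders, and the database connection variables your deployment needs are omitted.

```shell
# Hypothetical sketch: starting two Docker workers for queue mode.
# All values are placeholders; workers also need database configuration.
export N8N_ENCRYPTION_KEY="<same-key-as-the-main-instance>"
export EXECUTIONS_MODE=queue
export QUEUE_BULL_REDIS_HOST=redis.internal
export QUEUE_HEALTH_CHECK_ACTIVE=true   # exposes /healthz and /healthz/readiness

for i in 1 2; do
  docker run -d --name "n8n-worker-$i" \
    -e N8N_ENCRYPTION_KEY -e EXECUTIONS_MODE \
    -e QUEUE_BULL_REDIS_HOST -e QUEUE_HEALTH_CHECK_ACTIVE \
    docker.n8n.io/n8nio/n8n worker
done
```

Both workers share identical configuration, so they pull jobs from the same Redis queue; scaling out is just a matter of starting more containers with the same environment.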
## Running n8n with queues When running n8n with queues, all the production workflow executions get processed by worker processes. For webhooks, this means the HTTP request is received by the main/webhook process, but the actual workflow execution is passed to a worker, which can add some overhead and latency. Redis acts as the message broker, and the database persists data, so access to both is required. Running a distributed system with this setup over SQLite isn't supported. Migrate data If you want to migrate data from one database to another, you can use the Export and Import commands. Refer to the [CLI commands for n8n](../../cli-commands/#export-workflows-and-credentials) documentation to learn how to use these commands. ## Webhook processors Keep in mind Webhook processes rely on Redis and need the `EXECUTIONS_MODE` environment variable set too. Follow the [configure the workers](#configuring-workers) section above to set up webhook processor nodes. Webhook processors are another layer of scaling in n8n. Configuring webhook processors is optional, and lets you scale the handling of incoming webhook requests: n8n can process a large number of parallel requests if you add more webhook processes and workers accordingly. Each webhook process listens for requests on the same port (default: `5678`). Run these processes in containers or on separate machines, and use a load balancer to route requests accordingly. n8n doesn't recommend adding the main process to the load balancer pool. If you add the main process to the pool, it will receive requests and possibly a heavy load, resulting in degraded performance for editing, viewing, and interacting with the n8n UI.
You can start the webhook processor by executing the following command from the root directory: ``` ./packages/cli/bin/n8n webhook ``` If you're using Docker, use the following command: ``` docker run --name n8n-queue -p 5679:5678 -e "EXECUTIONS_MODE=queue" docker.n8n.io/n8nio/n8n webhook ``` ### Configure webhook URL To configure your webhook URL, execute the following command on the machine running the main n8n instance: ``` export WEBHOOK_URL=https://your-webhook-url.com ``` You can also set this value in the configuration file. ### Configure load balancer When using multiple webhook processes, you need a load balancer to route requests. If you're using the same domain name for your n8n instance and the webhooks, you can set up your load balancer to route requests as follows: - Route webhook triggers to the webhook server pool. Paths to consider: - `/webhook/*`: Webhook trigger node endpoints - `/webhook-waiting/*`: Human-in-the-loop webhook endpoints used by nodes that perform "send and wait" operations (for example, the Slack node). - All other paths (the n8n internal API, the static files for the editor, and so on) should get routed to the main process. **Note:** The default path for manual workflow executions is `/webhook-test/*`. Make sure these URLs route to your main process. You can change this path with the `endpoints.webhook` setting in the configuration file or the `N8N_ENDPOINT_WEBHOOK` environment variable. If you change it, update your load balancer accordingly. ### Disable webhook processing in the main process (optional) Once webhook processors are handling workflow executions, you can disable webhook processing in the main process so that all production webhook executions run on the webhook processors. In the configuration file, set `endpoints.disableProductionWebhooksOnMainProcess` to `true`.
Alternatively, you can use the following command: ``` export N8N_DISABLE_PRODUCTION_MAIN_PROCESS=true ``` If you disable webhook processing in the main process, keep the main process running, but don't add it to the load balancer's webhook pool. ## Configure worker concurrency You can define the number of jobs a worker can run in parallel by using the `concurrency` flag. It defaults to `10`. To change it: ``` n8n worker --concurrency=5 ``` ## Concurrency and scaling recommendations n8n recommends setting concurrency to 5 or higher for your worker instances. Setting low concurrency values with a large number of workers can exhaust your database's connection pool, leading to processing delays and failures. ## Multi-main setup Feature availability - Available on Self-hosted Enterprise plans. In queue mode you can run more than one `main` process for high availability. In a single-main setup, the `main` process does two sets of tasks: - **regular tasks**, such as running the API, serving the UI, and listening for webhooks, and - **at-most-once tasks**, such as running non-HTTP triggers (timers, pollers, and persistent connections like RabbitMQ and IMAP), and pruning executions and binary data. In a multi-main setup, there are two kinds of `main` processes: - **followers**, which run **regular tasks**, and - the **leader**, which runs **both regular and at-most-once tasks**. ### Leader designation In a multi-main setup, all main instances handle the leadership process transparently to users. If the current leader becomes unavailable, for example because it crashed or its event loop became too busy, one of the followers takes over as leader. If the previous leader becomes responsive again, it becomes a follower. ### Configuring multi-main setup To deploy n8n in a multi-main setup, ensure: - All `main` processes are running in queue mode and are connected to Postgres and Redis. - All `main` and `worker` processes are running the same version of n8n.
- All `main` processes have set the environment variable `N8N_MULTI_MAIN_SETUP_ENABLED` to `true`. - All `main` processes are running behind a load balancer with session persistence (sticky sessions) enabled. If needed, you can adjust the leader key options: | Using configuration file | Using environment variables | Description | | --- | --- | --- | | `multiMainSetup.ttl:10` | `N8N_MULTI_MAIN_SETUP_KEY_TTL=10` | Time to live (in seconds) for the leader key in multi-main setup. | | `multiMainSetup.interval:3` | `N8N_MULTI_MAIN_SETUP_CHECK_INTERVAL=3` | Interval (in seconds) for the leader check in multi-main setup. | # Block access to nodes For security reasons, you may want to block your users from accessing or working with specific n8n nodes. This is helpful if you can't fully trust your users. Use the `NODES_EXCLUDE` environment variable to prevent your users from accessing specific nodes. ## Exclude nodes Set the `NODES_EXCLUDE` environment variable to a JSON array of strings naming the nodes you want to block. For example, this setting: ``` NODES_EXCLUDE: "[\"n8n-nodes-base.executeCommand\", \"n8n-nodes-base.readWriteFile\"]" ``` blocks the [Execute Command](../../../integrations/builtin/core-nodes/n8n-nodes-base.executecommand/) and [Read/Write Files from Disk](../../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/) nodes. Your n8n users won't be able to search for or use these nodes. ## Suggested nodes to block The nodes that can pose security risks vary based on your use case and user profile.
Here are some nodes you might want to start with: - [Execute Command](../../../integrations/builtin/core-nodes/n8n-nodes-base.executecommand/) - [Read/Write Files from Disk](../../../integrations/builtin/core-nodes/n8n-nodes-base.readwritefile/) ## Enable nodes that are blocked by default Some nodes, like Execute Command, are blocked by default. Remove them from the exclude list to enable them: ``` NODES_EXCLUDE: "[]" ``` ## Related resources Refer to [Nodes environment variables](../../configuration/environment-variables/nodes/) for more information on this environment variable. Refer to [Configuration](../../configuration/configuration-methods/) for more information on setting environment variables. # Disable the public REST API The [n8n public REST API](../../../api/) allows you to programmatically perform many of the same tasks as you can in the n8n GUI. If you don't plan on using this API, n8n recommends disabling it to improve the security of your n8n installation. To disable the [public REST API](../../../api/), set the `N8N_PUBLIC_API_DISABLED` environment variable to `true`, for example: ``` export N8N_PUBLIC_API_DISABLED=true ``` ## Disable the API playground To disable the [API playground](../../../api/using-api-playground/), set the `N8N_PUBLIC_API_SWAGGERUI_DISABLED` environment variable to `true`, for example: ``` export N8N_PUBLIC_API_SWAGGERUI_DISABLED=true ``` ## Related resources Refer to [Deployment environment variables](../../configuration/environment-variables/deployment/) for more information on these environment variables. Refer to [Configuration](../../configuration/configuration-methods/) for more information on setting environment variables. # Encryption key rotation Feature availability - Available on self-hosted n8n instances only. - You need to be the instance owner to enable the feature and rotate keys. 
Encryption key rotation lets you periodically replace the key that encrypts your n8n data, like credentials, OAuth tokens, and other sensitive content, without changing your instance's master encryption key. ## How encryption key rotation works n8n uses a two-layer key model: - **Instance encryption key** (`N8N_ENCRYPTION_KEY`): your master key, set at deployment time. This key never changes. n8n uses it only to protect the data encryption keys. - **Data encryption key**: the key that directly encrypts your credential data. This is the key you rotate. n8n stores it encrypted in the database, protected by the instance key. When you rotate, n8n generates a new data encryption key and uses it for all future writes. Existing data encrypted with the previous key remains readable. n8n silently re-encrypts each record to the new key the next time you update it. ## Before you begin Take a full database backup before enabling this feature Enabling encryption key rotation is a one-way change. There's no rollback path. See [Backwards compatibility and rollback](#backwards-compatibility-and-rollback) for details. You also need to make sure that: - All n8n instances, main and all workers, share the same `N8N_ENCRYPTION_KEY` value. - You have direct control over your environment variables and your n8n database. This is only possible on self-hosted deployments. ## Enable encryption key rotation 1. Set the following environment variable on **all** n8n instances, both main and workers: ``` N8N_ENV_FEAT_ENCRYPTION_KEY_ROTATION=true ``` 1. Restart all instances. On startup, n8n automatically generates the initial data encryption key and stores it, encrypted, in your database. 1. To confirm the feature is active, go to **Settings** > **Data Encryption Keys**. You should see the active key listed. ## Rotate the active key Once the feature is enabled, you can rotate to a new data encryption key at any time. 
### Using the UI Go to **Settings** > **Data Encryption Keys** and select **Rotate key**. ### Using the API Make a `POST` call to the `/encryption/keys` endpoint. The request requires the `encryptionKey:manage` global scope. n8n never returns key material in API responses, only metadata such as the ID, algorithm, status, and timestamps. After rotation, n8n uses the new active key for all new writes. Records encrypted with previous keys remain readable. n8n re-encrypts them to the new key the next time you update each record. ## Backwards compatibility and rollback This is a one-way migration Read this section carefully before enabling encryption key rotation. Once you enable encryption key rotation, n8n begins writing credentials and other sensitive data in a new format that includes a key identifier. Older versions of n8n, and instances running without the feature flag, can't read this format. - **Don't disable the feature flag** after any data has been written in the new format. Removing `N8N_ENV_FEAT_ENCRYPTION_KEY_ROTATION` or setting it to `false` makes all data encrypted after you enabled the feature permanently inaccessible. - **Don't downgrade your n8n version** after enabling. Older versions can't decrypt the new format. There's no automated tool to convert data encrypted in the new format back to the legacy format. The only recovery path is restoring from a database backup taken before you enabled the feature. ## Recommended steps 1. **Back up your database**. Take a full snapshot before any changes. 1. **Enable on staging first**. Set `N8N_ENV_FEAT_ENCRYPTION_KEY_ROTATION=true` on a non-production environment, restart, and verify that credentials still decrypt correctly. 1. **Enable on production**. Only do this after validating staging behavior. 1. **Don't disable or downgrade**. Once production data has been written in the new format, keep the flag enabled and stay on the same or a newer n8n version. 
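The API-based rotation described in this guide can be sketched with `curl`. Only the `/encryption/keys` path and the required `encryptionKey:manage` scope come from the text above; the base URL prefix and the `X-N8N-API-KEY` header are assumptions about a typical public-API deployment, so verify them against your instance.

```shell
# Hypothetical sketch: rotating the data encryption key via the API.
# Host and API key are placeholders; the caller needs the
# encryptionKey:manage scope.
curl -X POST "https://n8n.example.com/api/v1/encryption/keys" \
  -H "X-N8N-API-KEY: $N8N_API_KEY"
```

As noted above, the response contains only key metadata (ID, algorithm, status, timestamps), never key material.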
## Related resources - [Set a custom encryption key](../../configuration/configuration-examples/encryption-key/): set the instance-level `N8N_ENCRYPTION_KEY` value. - [Deployment environment variables](../../configuration/environment-variables/deployment/): reference for `N8N_ENCRYPTION_KEY` and `N8N_ENV_FEAT_ENCRYPTION_KEY_ROTATION`. - [Configuring queue mode](../../scaling/queue-mode/): make sure all workers share the same instance encryption key. # Hardening task runners [Task runners](../../configuration/task-runners/) are responsible for executing code from the [Code node](../../../integrations/builtin/core-nodes/n8n-nodes-base.code/). While Code node executions are secure, you can follow these recommendations to further harden your task runners. ## Run task runners as sidecars in external mode To increase the isolation between the core n8n process and code in the Code node, run task runners in [external mode](../../configuration/task-runners/#setting-up-external-mode). External task runners launch as separate containers, providing a fully isolated environment to execute the JavaScript defined in the Code node. ## Use the distroless image For a reduced attack surface, use the distroless Docker image variant. Distroless images contain only the application and its runtime dependencies, excluding package managers, shells, and other utilities that aren't needed at runtime. To use the distroless image, append the `-distroless` suffix to the Docker tag. For example: `2.4.6-distroless`. ## Run as the nobody user For improved security, configure task runners to run as the unprivileged `nobody` user with user and group ID 65532. This prevents the container process from running with root privileges and limits potential damage from security vulnerabilities. 
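The two recommendations above can be combined in a single `docker run` invocation. This is a sketch only: the task runner image name and tag below are illustrative placeholders, and any environment variables your external-mode runner needs are omitted.

```shell
# Hypothetical sketch: running a task runner container as the unprivileged
# user 65532 with a distroless image tag. Image name and tag are placeholders.
docker run -d --name n8n-task-runner \
  --user 65532:65532 \
  docker.n8n.io/n8nio/runners:2.4.6-distroless
```

The `--user` flag overrides any default user baked into the image, so the process can't run as root even if the image would otherwise allow it.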
## Configure read-only root filesystem Configure a [read-only root filesystem](https://kubernetes.io/docs/tasks/configure-pod-container/security-context/) to prevent any modifications to the container's filesystem at runtime. This helps protect against malicious code that might attempt to modify system files. Task runners still require some temporary storage for operation. To accommodate this, mount a minimal `emptyDir` volume to `/tmp`. If your workflows require more temporary space, increase the size of the volume accordingly. # Securing n8n Securing your n8n instance can take several forms. At a high level, you can: - Conduct a [security audit](../security-audit/) to identify security risks. - [Set up SSL](../set-up-ssl/) to enforce secure connections. - [Set up Single Sign-On](../set-up-sso/) for user account management. - Use [two-factor authentication (2FA)](../../../user-management/two-factor-auth/) for your users. - Enable [encryption key rotation](../encryption-key-rotation/) to periodically replace the key that encrypts credentials and other sensitive data. You can also protect sensitive data processed by your workflows: - [Redact execution data](../../../workflows/executions/execution-data-redaction/) to hide input and output data from workflow executions. More granularly, consider blocking or opting out of features or data collection you don't want: - [Disable the public API](../disable-public-api/) if you aren't using it. - [Opt out of data collection](../telemetry-opt-out/) of the anonymous data n8n collects automatically. - [Block certain nodes](../blocking-nodes/) from being available to your users. - [Protect against SSRF attacks](../ssrf-protection/) to control which hosts and IP ranges workflow nodes can connect to. - [Restrict account registration](../restrict-by-email-verification/) to email-verified users. # Restrict account registration to email-verified users You can require all new accounts to be verified by email. 
This prevents malicious admins from registering accounts without email verification. ## Prerequisites - SMTP must be set up and n8n must be able to send emails. ## How to restrict account registration Set the environment variable `N8N_INVITE_LINKS_EMAIL_ONLY` to `true`. This locks down your instance so that only users with verified email addresses can register. For more details on configuring SMTP, see [Set up SMTP](../../configuration/user-management-self-hosted/#step-one-smtp). # Security audit You can run a security audit on your n8n instance to detect common security issues. ## Run an audit You can run an audit using the CLI, the public API, or the n8n node. ### CLI Run `n8n audit`. ### API Make a `POST` call to the `/audit` endpoint. You must authenticate as the instance owner. ### n8n node Add the [n8n node](../../../integrations/builtin/core-nodes/n8n-nodes-base.n8n/) to your workflow. Select **Resource** > **Audit** and **Operation** > **Generate**. ## Report contents The audit generates five risk reports: ### Credentials This report shows: - Credentials not used in a workflow. - Credentials not used in an active workflow. - Credentials not used in a recently active workflow. ### Database This report shows: - Expressions used in **Execute Query** fields in SQL nodes. - Expressions used in **Query Parameters** fields in SQL nodes. - Unused **Query Parameters** fields in SQL nodes. ### File system This report lists nodes that interact with the file system. ### Nodes This report shows: - Official risky nodes. These are n8n's built-in nodes. You can use them to fetch and run any code on the host system, which exposes the instance to exploits. You can view the list in [n8n code | Audit constants](https://github.com/n8n-io/n8n/blob/master/packages/cli/src/security-audit/constants.ts#L51), under `OFFICIAL_RISKY_NODE_TYPES`. - Community nodes. - Custom nodes. ### Instance This report shows: - Unprotected webhooks in the instance.
- Missing security settings. - Whether your instance is outdated. # Set up SSL There are two methods to support TLS/SSL in n8n. ## Use a reverse proxy (recommended) Use a reverse proxy like [Traefik](https://doc.traefik.io/traefik/) or a Network Load Balancer (NLB) in front of the n8n instance. This should also take care of certificate renewals. Refer to [Security | Data encryption](https://n8n.io/legal/#security) for more information. ## Pass certificates into n8n directly You can also choose to pass certificates into n8n directly. To do so, set the `N8N_SSL_CERT` and `N8N_SSL_KEY` environment variables to point to your generated certificate and key file. You'll need to make sure the certificate stays renewed and up to date. Refer to [Deployment environment variables](../../configuration/environment-variables/deployment/) for more information on these variables and [Configuration](../../configuration/configuration-methods/) for more information on setting environment variables. # Set up Single Sign-On (SSO) Feature availability - Available on Business and Enterprise plans. - You need to be an instance owner or admin to enable and configure SAML or OIDC. n8n supports the SAML and OIDC authentication protocols for single sign-on (SSO). See [OIDC vs SAML](https://www.onelogin.com/learn/oidc-vs-saml) for more general information on the two protocols, the differences between them, and their respective benefits. - [Set up SAML](../../../user-management/saml/setup/): a general guide to setting up SAML in n8n, and links to resources for common identity providers (IdPs). - [Set up OIDC](../../../user-management/oidc/setup/): a general guide to setting up OpenID Connect (OIDC) SSO in n8n. ## Configure SSO with environment variables You can also configure SSO from environment variables instead of through the UI. Available from n8n v2.18.0.
See [SSO environment variables](../../configuration/environment-variables/sso/) for the full list of variables, and [Manage instance settings using environment variables](../../configuration/settings-env-vars/) for how the activation pattern works. # SSRF protection Available since 2.12.0 Server-Side Request Forgery (SSRF) attacks abuse workflow nodes to make requests to internal network resources, cloud metadata endpoints, or localhost services that shouldn't be accessible. Warning SSRF protection is an additional application-level defense. You should always configure network-level protections (firewalls, security groups, network policies) on your infrastructure as your primary line of defense. n8n's SSRF protection adds defense-in-depth on top of those controls. ## Enable SSRF protection ``` N8N_SSRF_PROTECTION_ENABLED=true ``` When enabled, n8n validates all outbound HTTP requests from user-controllable nodes (such as the HTTP Request node) against the configured blocked and allowed ranges. This includes redirect targets and DNS resolution to prevent bypass techniques like DNS rebinding. ## Default blocked ranges When SSRF protection is enabled, the following IP ranges are blocked by default: | Range | Description | | ------------------------------------------------------------------------------------------------- | -------------------------- | | `10.0.0.0/8`, `172.16.0.0/12`, `192.168.0.0/16` | RFC 1918 private addresses | | `127.0.0.0/8`, `::1/128` | Loopback | | `169.254.0.0/16`, `fe80::/10` | Link-local | | `fc00::/7`, `fd00::/8` | IPv6 unique local | | `0.0.0.0/8`, `192.0.0.0/24`, `192.0.2.0/24`, `198.18.0.0/15`, `198.51.100.0/24`, `203.0.113.0/24` | Reserved/special purpose | You can extend this list with `N8N_SSRF_BLOCKED_IP_RANGES=default,100.0.0.0/8`. ## Allow access to internal services If your workflows need to reach legitimate internal services, use allowlists. 
Allowlists take precedence over blocklists, following this order: hostname allowlist > IP allowlist > IP blocklist. Allow by hostname pattern (supports wildcards like `*.n8n.internal`): ``` N8N_SSRF_ALLOWED_HOSTNAMES=*.n8n.internal,*.company.local ``` Allow by IP range: ``` N8N_SSRF_ALLOWED_IP_RANGES=10.0.1.0/24,10.0.2.50/32 ``` Warning Only allowlist hostnames within your control (internal DNS zones). Hostname allowlists bypass IP blocklist checks. ## Related resources Refer to [SSRF protection environment variables](../../configuration/environment-variables/ssrf-protection/) for the full list of configuration options. Refer to [Configuration methods](../../configuration/configuration-methods/) for more information on setting environment variables. # Data collection n8n collects some anonymous data from self-hosted n8n installations. Use the instructions below to opt out of telemetry data collection. ## Collected data Refer to [Privacy | Data collection in self-hosted n8n](../../../privacy-security/privacy/#data-collection-in-self-hosted-n8n) for details on the data n8n collects. ## How collection works Your n8n instance sends most data to n8n as the events that generate it occur. Workflow execution counts and an instance pulse are sent periodically (every 6 hours). Most of this data falls under n8n's telemetry collection. ## Opting out of data collection n8n enables telemetry collection by default. To disable it, configure the following environment variables.
### Opt out of telemetry events To opt out of telemetry events, set the `N8N_DIAGNOSTICS_ENABLED` environment variable to false, for example: ``` export N8N_DIAGNOSTICS_ENABLED=false ``` ### Opt out of checking for new versions of n8n To opt out of checking for new versions of n8n, set the `N8N_VERSION_NOTIFICATIONS_ENABLED` environment variable to false, for example: ``` export N8N_VERSION_NOTIFICATIONS_ENABLED=false ``` ## Disable all connection to n8n servers If you want to fully prevent all communication with n8n's servers, refer to [Isolate n8n](../../configuration/configuration-examples/isolation/). ## Related resources Refer to [Deployment environment variables](../../configuration/environment-variables/deployment/) for more information on these environment variables. Refer to [Configuration](../../configuration/configuration-methods/) for more information on setting environment variables. # Self-hosted AI Starter Kit The Self-hosted AI Starter Kit is an open, docker compose template that bootstraps a fully featured Local AI and Low Code development environment. Curated by [n8n](https://github.com/n8n-io), it combines the self-hosted n8n platform with a list of compatible AI products and components to get you started building self-hosted AI workflows. ## What’s included ✅ [**Self-hosted n8n**](../../): Low-code platform with over 400 integrations and advanced AI components. ✅ [**Ollama**](https://ollama.com/): Cross-platform LLM platform to install and run the latest local LLMs. ✅ [**Qdrant**](https://qdrant.tech/): Open-source, high performance vector store with a comprehensive API. ✅ [**PostgreSQL**](https://www.postgresql.org/): The workhorse of the Data Engineering world, handles large amounts of data safely. 
## What you can build ⭐️ [AI Agents](../../../glossary/#ai-agent) that can schedule appointments ⭐️ Summaries of company PDFs without leaking data ⭐️ Smarter Slackbots for company communications and IT-ops ⭐️ Private, low-cost analyses of financial documents ## Get the kit Head to [the GitHub repository](https://github.com/n8n-io/self-hosted-ai-starter-kit) to clone the repo and get started! For testing only n8n designed this kit to help you get started with self-hosted AI workflows. While it’s not fully optimized for production environments, it combines robust components that work well together for proof-of-concept projects. Customize it to meet your needs. Secure and harden it before using in production. # Integrations n8n calls integrations nodes. Nodes are the building blocks of workflows in n8n. They're an entry point for retrieving data, a function to process data, or an exit for sending data. The data process includes filtering, recomposing, and changing data. There can be one or several nodes for your API, service or app. You can connect multiple nodes, which allows you to create complex workflows. ## Built-in nodes n8n includes a collection of built-in integrations. Refer to [Built-in nodes](builtin/node-types/) for documentation on all n8n's built-in nodes. ## Community nodes As well as using the built-in nodes, you can also install community-built nodes. Refer to [Community nodes](community-nodes/installation/) for more information. ## Credential-only nodes and custom operations One of the most complex parts of setting up [API](../glossary/#api) calls is managing authentication. n8n provides [credentials](../glossary/#credential-n8n) support for operations and services beyond those supported by built-in nodes. - Custom operations for existing nodes: n8n supplies hundreds of nodes to create workflows that link multiple products. However, some nodes don't include all the possible operations supported by a product's API. 
You can work around this by making a custom API call using the [HTTP Request](builtin/core-nodes/n8n-nodes-base.httprequest/) node. - Credential-only nodes: n8n includes credential-only nodes. These are integrations where n8n supports setting up credentials for use in the HTTP Request node, but doesn't provide a standalone node. You can find a credential-only node in the nodes panel, as you would for any other integration. Refer to [Custom operations](custom-operations/) for more information. ## Generic integrations If you need to connect to a service where n8n doesn't have a node, or a credential-only node, you can still use the [HTTP Request](builtin/core-nodes/n8n-nodes-base.httprequest/) node. Refer to the node page for details on how to set up authentication and create your API call. ## Where to go next - If you want to create your own node, head over to the [Creating Nodes](creating-nodes/overview/) section. - Check out [Community nodes](community-nodes/usage/) to learn about installing and managing community-built nodes. - If you'd like to learn more about the different nodes in n8n, their functionalities and example usage, check out n8n's node libraries: [Core nodes](builtin/core-nodes/), [Actions](builtin/app-nodes/), and [Triggers](builtin/trigger-nodes/). - If you'd like to learn how to add the credentials for the different nodes, head over to the [Credentials](builtin/credentials/) section. # Custom API operations One of the most complex parts of setting up [API](../../glossary/#api) calls is managing authentication. n8n provides [credentials](../../glossary/#credential-n8n) support for operations and services beyond those supported by built-in nodes. - Custom operations for existing nodes: n8n supplies hundreds of nodes to create workflows that link multiple products. However, some nodes don't include all the possible operations supported by a product's API. 
You can work around this by making a custom API call using the [HTTP Request](../builtin/core-nodes/n8n-nodes-base.httprequest/) node. - Credential-only nodes: n8n includes credential-only nodes. These are integrations where n8n supports setting up credentials for use in the HTTP Request node, but doesn't provide a standalone node. You can find a credential-only node in the nodes panel, as you would for any other integration. ## Predefined credential types A predefined credential type is a credential that already exists in n8n. You can use predefined credential types instead of generic credentials in the HTTP Request node. For example, say you create an Asana credential for use with the Asana node. Later, you want to use Asana's API to perform an operation that the Asana node doesn't support. You can use your existing Asana credential in the HTTP Request node to perform the operation, without additional authentication setup. ### Using predefined credential types To use a predefined credential type: 1. Open your HTTP Request node, or add a new one to your workflow. 1. In **Authentication**, select **Predefined Credential Type**. 1. In **Credential Type**, select the API you want to use. 1. In **Credential for ``**, you can: 1. Select an existing credential for that platform, if available. 1. Select **Create New** to create a new credential. ### Credential scopes Some existing credential types have specific scopes: the endpoints that they work with. n8n warns you about this when you select the credential type. For example, follow the steps in [Using predefined credential types](#using-predefined-credential-types), and select **Google Calendar OAuth2 API** as your **Credential Type**. n8n displays a box listing the two endpoints you can use this credential type with. # Deprecated and versioned nodes n8n improves its node library over time.
This page lists removed nodes (fully removed), deprecated nodes (retired but still functional), and versioned nodes (active with multiple versions available). ## Deprecated nodes n8n won't release further updates or bug fixes for deprecated nodes. Existing workflows that use them continue to run, but you should migrate to a supported alternative. Migrate deprecated nodes Replace deprecated nodes in your workflows before n8n removes them in a future release. | Node | Final node version | | ------------------------------ | ------------------ | | Binary Input Loader | 1 | | Chat Messages Retriever | 1 | | Convert to/from binary data | 1.1 | | Cron | 1 | | Embedding Dimensions | 1 | | Function | 1 | | Function Item | 1 | | GitHub Document Loader | 1.1 | | HTML Extract | 1 | | HTTP Request Tool | 1.1 | | iCalendar | 1 | | In Memory Vector Store Insert | 1 | | In Memory Vector Store Load | 1 | | Interval | 1 | | JSON Input Loader | 1 | | Manual Chat Trigger | 1.1 | | MCP Registry Client (internal) | 1 | | Message an Agent | 1 | | Motorhead | 1.4 | | OpenAI Assistant | 1.1 | | OpenAI Model | 1 | | Options | 1 | | Orbit | 1 | | Pinecone: Insert | 1 | | Pinecone: Load | 1 | | Read Binary File | 1 | | Read Binary Files | 1 | | Read PDF | 1 | | SerpApi (Google Search) | 1 | | Simulate | 1 | | Simulate Trigger | 1 | | Supabase: Insert | 1 | | Supabase: Load | 1 | | Tool Executor | 1 | | Workflow Trigger | 1 | | Write Binary File | 1 | | Zep | 1.4 | | Zep Vector Store: Insert | 1 | | Zep Vector Store: Load | 1 | ## Removed nodes n8n removes nodes when the external service they connect to is no longer available. Workflows that use a removed node will fail. Update or remove affected workflows If your workflows use any of these nodes, update them to use an alternative or remove them to avoid errors. 
| Node | n8n version | | --------- | ----------- | | Automizy | 2.0 | | crowd.dev | 2.0 | | Kitemaker | 2.0 | | Spontit | 2.0 | ## Versioned nodes When n8n makes significant improvements to a node, n8n releases a new default version and keeps older versions available. Existing workflows using an older version continue to work unchanged. Always use the current version in new workflows to get the latest features and bug fixes. | Node | Current node version | Previous node versions | | --------------------------------- | -------------------- | ------------------------------------------------------------------- | | AI Agent | 3.1 | 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9, 2, 2.1, 2.2, 2.3, 3 | | AI Agent Tool | 3 | 2.2 | | Airtable | 2.2 | 1, 2, 2.1 | | Airtop | 1.1 | 1 | | Anthropic Chat Model | 1.5 | 1, 1.1, 1.2, 1.3, 1.4 | | AWS Bedrock Chat Model | 1.1 | 1 | | AwsS3 | 2 | 1 | | Baserow | 1.1 | 1 | | Basic LLM Chain | 1.9 | 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8 | | Bitbucket Trigger | 1.1 | 1 | | Cal.com Trigger | 2 | 1 | | Call n8n Sub-Workflow Tool | 2.2 | 1, 1.1, 1.2, 1.3, 2, 2.1 | | Chat | 1.3 | 1, 1.1, 1.2 | | Chat Memory Manager | 1.1 | 1 | | Chat Trigger | 1.4 | 1, 1.1, 1.2, 1.3 | | Coda | 1.1 | 1 | | Code | 2 | 1 | | Code Tool | 1.3 | 1, 1.1, 1.2 | | Compare Datasets | 2.3 | 1, 2, 2.1, 2.2 | | Compression | 1.1 | 1 | | Convert to File | 1.1 | 1 | | Crypto | 2 | 1 | | Data table | 1.1 | 1 | | Date & Time | 2 | 1 | | Default Data Loader | 1.1 | 1 | | Discord | 2 | 1 | | Email Trigger (IMAP) | 2.1 | 1, 2 | | Embeddings OpenAI | 1.2 | 1, 1.1 | | Execute Sub-workflow | 1.3 | 1, 1.1, 1.2 | | Execute Workflow Trigger | 1.1 | 1 | | Execution Data | 1.1 | 1 | | Extract from File | 1.1 | 1 | | Filter | 2.3 | 1, 2, 2.1, 2.2 | | Git | 1.1 | 1 | | GitHub | 1.1 | 1 | | Gmail | 2.2 | 1, 2, 2.1 | | Gmail Trigger | 1.4 | 1, 1.1, 1.2, 1.3 | | Google Analytics | 2 | 1 | | Google BigQuery | 2.1 | 1, 2 | | Google Books | 2 | 1 | | Google Calendar | 1.3 | 1, 1.1, 1.2 | 
| Google Cloud Firestore | 1.1 | 1 | | Google Docs | 2 | 1 | | Google Drive | 3 | 1, 2 | | Google Gemini Chat Model | 1.1 | 1 | | Google Sheets | 4.7 | 1, 2, 3, 4, 4.1, 4.2, 4.3, 4.4, 4.5, 4.6 | | Google Slides | 2 | 1 | | Google Translate | 2 | 1 | | GraphQL | 1.1 | 1 | | Guardrails | 2 | 1 | | HighLevel | 2 | 1 | | HTML | 1.2 | 1, 1.1 | | HTTP Request | 4.4 | 1, 2, 3, 4, 4.1, 4.2, 4.3 | | HubSpot | 2.2 | 1, 2, 2.1 | | If | 2.3 | 1, 2, 2.1, 2.2 | | Information Extractor | 1.2 | 1, 1.1 | | Invoice Ninja | 2 | 1 | | Invoice Ninja Trigger | 2 | 1 | | Item Lists | 3.1 | 1, 2, 2.1, 2.2, 3 | | Jira Trigger | 1.1 | 1 | | Kafka Trigger | 1.3 | 1, 1.1, 1.2 | | Lemlist | 2 | 1 | | Linear | 1.1 | 1 | | MailerLite | 2 | 1 | | MailerLite Trigger | 2 | 1 | | MCP Client Tool | 1.2 | 1, 1.1 | | MCP Server Trigger | 2 | 1, 1.1 | | Merge | 3.2 | 1, 2, 2.1, 3, 3.1 | | Microsoft Agent 365 Trigger | 1.1 | 1 | | Microsoft Excel 365 | 2.2 | 1, 2, 2.1 | | Microsoft OneDrive | 1.1 | 1 | | Microsoft Outlook | 2 | 1 | | Microsoft SQL | 1.1 | 1 | | Microsoft Teams | 2 | 1, 1.1 | | Mindee | 3 | 1, 2 | | MongoDB | 1.3 | 1, 1.1, 1.2 | | MongoDB Chat Memory | 1.1 | 1 | | Moonshot Kimi Chat Model | 1.1 | 1 | | MySQL | 2.5 | 1, 2, 2.1, 2.2, 2.3, 2.4 | | n8n Form | 2.5 | 1, 2.3, 2.4 | | n8n Form Trigger | 2.5 | 1, 2, 2.1, 2.2, 2.3, 2.4 | | NocoDB | 4 | 1, 2, 3 | | Notion | 2.2 | 1, 2, 2.1 | | OpenAI | 2.3 | 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 2, 2.1, 2.2 | | OpenAI Chat Model | 1.3 | 1, 1.1, 1.2 | | Perplexity | 2 | 1 | | Pipedrive | 2 | 1 | | Pipedrive Trigger | 1.1 | 1 | | Postgres | 2.6 | 1, 2, 2.1, 2.2, 2.3, 2.4, 2.5 | | Postgres Chat Memory | 1.4 | 1, 1.1, 1.2, 1.3 | | Question and Answer Chain | 1.7 | 1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6 | | RabbitMQ | 1.1 | 1 | | Read/Write Files from Disk | 1.1 | 1 | | Redis Chat Memory | 1.6 | 1, 1.1, 1.2, 1.3, 1.4, 1.5 | | Remove Duplicates | 2 | 1, 1.1 | | Respond to Webhook | 1.5 | 1, 1.1, 1.2, 1.3, 1.4 | | RSS Read | 1.2 | 1, 1.1 | | Schedule Trigger 
| 1.3 | 1, 1.1, 1.2 | | SeaTable | 2 | 1 | | SeaTable Trigger | 2 | 1 | | Send Email | 2.1 | 1, 2 | | Sentiment Analysis | 1.1 | 1 | | Set | 3.4 | 1, 2, 3, 3.1, 3.2, 3.3 | | Simple Memory | 1.4 | 1, 1.1, 1.2, 1.3 | | Slack | 2.4 | 1, 2, 2.1, 2.2, 2.3 | | Split In Batches | 3 | 2 | | Splunk | 2 | 1 | | Spreadsheet File | 2 | 1 | | Strava | 1.1 | 1 | | Structured Output Parser | 1.3 | 1, 1.1, 1.2 | | Summarization Chain | 2.1 | 1, 2 | | Summarize | 1.1 | 1 | | Switch | 3.4 | 1, 2, 3, 3.1, 3.2, 3.3 | | Telegram | 1.2 | 1, 1.1 | | Telegram Trigger | 1.3 | 1, 1.1, 1.2 | | Text Classifier | 1.1 | 1 | | TheHive Trigger | 2 | 1 | | Think Tool | 1.1 | 1 | | Todoist | 2.2 | 1, 2, 2.1 | | Typeform Trigger | 1.1 | 1 | | Vector Store Question Answer Tool | 1.1 | 1 | | Wait | 1.1 | 1 | | Webflow | 2 | 1 | | Webflow Trigger | 2 | 1 | | Webhook | 2.1 | 1, 1.1, 2 | | WhatsApp Business Cloud | 1.1 | 1 | | Workflow Retriever | 1.1 | 1 | | X (Formerly Twitter) | 2 | 1 | | Xata | 1.5 | 1, 1.1, 1.2, 1.3, 1.4 | # Built-in integrations This section contains the [node](../../../glossary/#node-n8n) library: reference documentation for every built-in node in n8n, and their credentials. ## Node operations: Triggers and Actions When you add a node to a workflow, n8n displays a list of available operations. An operation is something a node does, such as getting or sending data. There are two types of operation: - Triggers start a workflow in response to specific events or conditions in your services. When you select a Trigger, n8n adds a trigger node to your workflow, with the Trigger operation you chose pre-selected. When you search for a node in n8n, Trigger operations have a bolt icon . - Actions are operations that represent specific tasks within a workflow, which you can use to manipulate data, perform operations on external systems, and trigger events in other systems as part of your workflows. 
When you select an Action, n8n adds a node to your workflow, with the Action operation you chose pre-selected. ## Core nodes Core nodes can be actions or [triggers](../../../glossary/#trigger-node-n8n). Whereas most nodes connect to a specific external service, core nodes provide functionality such as logic, scheduling, or generic API calls. ## Cluster nodes [Cluster nodes](../../../glossary/#cluster-node-n8n) are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a [root node](../../../glossary/#root-node-n8n) and one or more [sub-nodes](../../../glossary/#sub-node-n8n) that extend the functionality of the node. ## Credentials External services need a way to identify and authenticate users. This data can range from an API key, to an email/password combination, to a long multi-line private key. You can save these in n8n as [credentials](../../../glossary/#credential-n8n). Nodes in n8n can then request that credential information. As another layer of security, only node types with specific access rights can access the credentials. To keep this data secure, n8n encrypts it before saving it to the database. n8n uses a random personal encryption key, which it automatically generates on the first run and then saves under `~/.n8n/config`. To learn more about creating, managing, and sharing credentials, refer to [Manage credentials](../../../credentials/). ## Community nodes n8n supports custom nodes built by the community. Refer to [Community nodes](../../community-nodes/installation/) for guidance on installing and using these nodes. For help building your own custom nodes and publishing them to [npm](https://www.npmjs.com/), refer to [Creating nodes](../../creating-nodes/overview/). # Handling API rate limits [API](../../../glossary/#api) rate limits are restrictions on request frequency. For example, an API may limit the number of requests you can make per minute or per day.
APIs can also limit how much data you can send in one request, or how much data the API sends in a single response. ## Identify rate limit issues When an n8n node hits a rate limit, it errors. n8n displays the error message in the node output panel. This includes the error message from the service. If n8n receives error 429 (too many requests) from the service, the error message is **The service is receiving too many requests from you**. To check the rate limits for the service you're using, refer to the API documentation for the service. ## Handle rate limits for integrations There are two ways to handle rate limits in n8n's integrations: using the Retry On Fail setting, or using a combination of the [Loop Over Items](../core-nodes/n8n-nodes-base.splitinbatches/) and [Wait](../core-nodes/n8n-nodes-base.wait/) nodes: - Retry On Fail adds a pause between API request attempts. - With Loop Over Items and Wait you can break your request data into smaller chunks and pause between requests. ### Enable Retry On Fail When you enable Retry On Fail, the node automatically tries the request again if it fails the first time. 1. Open the node. 1. Select **Settings**. 1. Enable the **Retry On Fail** toggle. 1. Configure the retry settings: if using this to work around rate limits, set **Wait Between Tries (ms)** to an interval longer than the rate limit. For example, if the API you're using allows one request per second, set **Wait Between Tries (ms)** to `1000` to allow a one-second wait. ### Use Loop Over Items and Wait Use the Loop Over Items node to batch the input items, and the Wait node to introduce a pause between each request. 1. Add the Loop Over Items node before the node that calls the API. Refer to [Loop Over Items](../core-nodes/n8n-nodes-base.splitinbatches/) for information on how to configure the node. 1. Add the Wait node after the node that calls the API, and connect it back to the Loop Over Items node.
Refer to [Wait](../core-nodes/n8n-nodes-base.wait/) for information on how to configure the node. For example, you can use this pattern to handle rate limits when calling OpenAI. ## Handle rate limits in the HTTP Request node The HTTP Request node has built-in settings for handling rate limits and large amounts of data. ### Batch requests Use the Batching option to split your data across more than one request, reducing the size of each request and introducing a pause between requests. This is the equivalent of using Loop Over Items and Wait. 1. In the HTTP Request node, select **Add Option** > **Batching**. 1. Set **Items per Batch**: this is the number of input items to include in each request. 1. Set **Batch Interval (ms)** to introduce a delay between requests. For example, if the API you're using allows one request per second, set **Batch Interval (ms)** to `1000` to allow a one-second wait. ### Paginate results APIs paginate their results when they need to send more data than fits in a single response. For more information on pagination in the HTTP Request node, refer to [HTTP Request node | Pagination](../core-nodes/n8n-nodes-base.httprequest/#pagination). # Actions library This section provides information about n8n's Actions. # Action Network node Use the Action Network node to automate work in Action Network, and integrate Action Network with other applications. n8n has built-in support for a wide range of Action Network features, including creating, updating, and deleting events, people, tags, and signatures. On this page, you'll find a list of operations the Action Network node supports, and links to more resources. Credentials Refer to [Action Network credentials](../../credentials/actionnetwork/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent.
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Attendance - Create - Get - Get All - Event - Create - Get - Get All - Person - Create - Get - Get All - Update - Person Tag - Add - Remove - Petition - Create - Get - Get All - Update - Signature - Create - Get - Get All - Update - Tag - Create - Get - Get All ## Templates and examples [Browse Action Network integration templates](https://n8n.io/integrations/action-network/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # ActiveCampaign node Use the ActiveCampaign node to automate work in ActiveCampaign, and integrate ActiveCampaign with other applications. n8n has built-in support for a wide range of ActiveCampaign features, including creating, getting, updating, and deleting accounts, contacts, orders, e-commerce customers, connections, lists, tags, and deals. On this page, you'll find a list of operations the ActiveCampaign node supports and links to more resources. Credentials Refer to [ActiveCampaign credentials](../../credentials/activecampaign/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent.
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Account - Create an account - Delete an account - Get data of an account - Get data of all accounts - Update an account - Account Contact - Create an association - Delete an association - Update an association - Contact - Create a contact - Delete a contact - Get data of a contact - Get data of all contacts - Update a contact - Contact List - Add contact to a list - Remove contact from a list - Contact Tag - Add a tag to a contact - Remove a tag from a contact - Connection - Create a connection - Delete a connection - Get data of a connection - Get data of all connections - Update a connection - Deal - Create a deal - Delete a deal - Get data of a deal - Get data of all deals - Update a deal - Create a deal note - Update a deal note - E-commerce Order - Create an order - Delete an order - Get data of an order - Get data of all orders - Update an order - E-Commerce Customer - Create an E-commerce Customer - Delete an E-commerce Customer - Get data of an E-commerce Customer - Get data of all E-commerce Customers - Update an E-commerce Customer - E-commerce Order Products - Get data of all order products - Get data of an ordered product - Get data of an order's products - List - Get all lists - Tag - Create a tag - Delete a tag - Get data of a tag - Get data of all tags - Update a tag ## Templates and examples **Create a contact in ActiveCampaign** by tanaypant [View template details](https://n8n.io/workflows/412-create-a-contact-in-activecampaign/) **Receive updates when a new account is added by an admin in ActiveCampaign** by tanaypant [View template details](https://n8n.io/workflows/488-receive-updates-when-a-new-account-is-added-by-an-admin-in-activecampaign/) **🛠️ ActiveCampaign Tool MCP Server 💪 all 48 operations** by David Ashby [View
template details](https://n8n.io/workflows/5336-activecampaign-tool-mcp-server-all-48-operations/) [Browse ActiveCampaign integration templates](https://n8n.io/integrations/activecampaign/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Adalo node Use the Adalo node to automate work in Adalo, and integrate Adalo with other applications. n8n has built-in support for a wide range of Adalo features, including creating, getting, updating, and deleting databases, records, and collections. On this page, you'll find a list of operations the Adalo node supports and links to more resources. Credentials Refer to [Adalo credentials](../../credentials/adalo/) for guidance on setting up authentication. ## Operations - Collection - Create - Delete - Get - Get Many - Update ## Templates and examples [Browse Adalo integration templates](https://n8n.io/integrations/adalo/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Adalo's documentation](https://help.adalo.com/) for more information on using Adalo. Their [External Collections with APIs](https://help.adalo.com/integrations/external-collections-with-apis) page gives more detail about what you can do with Adalo collections. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API.
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Affinity node Use the Affinity node to automate work in Affinity, and integrate Affinity with other applications. n8n has built-in support for a wide range of Affinity features, including creating, getting, updating, and deleting lists, entries, organizations, and persons. On this page, you'll find a list of operations the Affinity node supports and links to more resources. Credentials Refer to [Affinity credentials](../../credentials/affinity/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).
## Operations - List - Get a list - Get all lists - List Entry - Create a list entry - Delete a list entry - Get a list entry - Get all list entries - Organization - Create an organization - Delete an organization - Get an organization - Get all organizations - Update an organization - Person - Create a person - Delete a person - Get a person - Get all persons - Update a person ## Templates and examples **Create an organization in Affinity** by tanaypant [View template details](https://n8n.io/workflows/476-create-an-organization-in-affinity/) **Receive updates when a new list is created in Affinity** by Harshil Agrawal [View template details](https://n8n.io/workflows/672-receive-updates-when-a-new-list-is-created-in-affinity/) **🛠️ Affinity Tool MCP Server 💪 all 16 operations** by David Ashby [View template details](https://n8n.io/workflows/5335-affinity-tool-mcp-server-all-16-operations/) [Browse Affinity integration templates](https://n8n.io/integrations/affinity/), or [search all templates](https://n8n.io/workflows/) # Agile CRM node Use the Agile CRM node to automate work in Agile CRM, and integrate Agile CRM with other applications. n8n has built-in support for a wide range of Agile CRM features, including creating, getting, updating, and deleting companies, contacts, and deals. On this page, you'll find a list of operations the Agile CRM node supports and links to more resources. Credentials Refer to [Agile CRM credentials](../../credentials/agilecrm/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).
## Operations - Company - Create a new company - Delete a company - Get a company - Get all companies - Update company properties - Contact - Create a new contact - Delete a contact - Get a contact - Get all contacts - Update contact properties - Deal - Create a new deal - Delete a deal - Get a deal - Get all deals - Update deal properties ## Templates and examples [Browse Agile CRM integration templates](https://n8n.io/integrations/agile-crm/), or [search all templates](https://n8n.io/workflows/) # Airtop node Use the Airtop node to automate work in Airtop, and integrate Airtop with other applications. n8n has built-in support for a wide range of Airtop features, enabling you to control a cloud-based web browser for tasks like querying, scraping, and interacting with web pages. On this page, you'll find a list of operations the Airtop node supports, and links to more resources. Credentials Refer to [Airtop credentials](../../credentials/airtop/) for guidance on setting up authentication. 
## Operations - Session - Create session - Save profile on termination - Terminate session - Window - Create a new browser window - Load URL - Take screenshot - Close window - Extraction - Query page - Query page with pagination - Smart scrape page - Interaction - Click an element - Hover on an element - Type ## Templates and examples **Automated LinkedIn Profile Discovery with Airtop and Google Search** by Airtop [View template details](https://n8n.io/workflows/3477-automated-linkedin-profile-discovery-with-airtop-and-google-search/) **Automate Web Interactions with Claude 3.5 Haiku and Airtop Browser Agent** by Airtop [View template details](https://n8n.io/workflows/3592-automate-web-interactions-with-claude-35-haiku-and-airtop-browser-agent/) **Web Site Scraper for LLMs with Airtop** by Airtop [View template details](https://n8n.io/workflows/4252-web-site-scraper-for-llms-with-airtop/) [Browse Airtop integration templates](https://n8n.io/integrations/airtop/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Airtop's documentation](https://docs.airtop.ai/api-reference/airtop-api) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. Contact [Airtop's Support](https://docs.airtop.ai/guides/misc/support) for assistance or to create a feature request. ## Node reference ### Create a session and window Create an Airtop browser session to get a **Session ID**, then use it to create a new browser window. 
After this, you can use any extraction or interaction operation. ### Extract content Extract content from a web browser using these operations: - **Query page**: Extract information from the current window. - **Query page with pagination**: Extract information from pages with pagination or infinite scrolling. - **Smart scrape page**: Get the window content as markdown. Get JSON responses by using the **JSON Output Schema** parameter in query operations. ### Interact with pages Click, hover, or type on elements by describing the element you want to interact with. ### Terminate a session End your session to save resources. Sessions are automatically terminated based on the **Idle Timeout** set in the **Create Session** operation or can be manually terminated using the **Terminate Session** operation. # AMQP Sender node Use the AMQP Sender node to automate work in AMQP Sender, and integrate AMQP Sender with other applications. n8n has built-in support for a wide range of AMQP Sender features, including sending messages. On this page, you'll find a list of operations the AMQP Sender node supports and links to more resources. Credentials Refer to [AMQP Sender credentials](../../credentials/amqp/) for guidance on setting up authentication. ## Operations - Send message ## Templates and examples [Browse AMQP Sender integration templates](https://n8n.io/integrations/amqp-sender/), or [search all templates](https://n8n.io/workflows/) # APITemplate.io node Use the APITemplate.io node to automate work in APITemplate.io, and integrate APITemplate.io with other applications. n8n has built-in support for a wide range of APITemplate.io features, including getting account information and creating images and PDFs. On this page, you'll find a list of operations the APITemplate.io node supports and links to more resources. Credentials Refer to [APITemplate.io credentials](../../credentials/apitemplateio/) for guidance on setting up authentication.
This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Account - Get - Image - Create - PDF - Create ## Templates and examples **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **Create an invoice based on the Typeform submission** by Harshil Agrawal [View template details](https://n8n.io/workflows/989-create-an-invoice-based-on-the-typeform-submission/) **Generate Dynamic Images with Text & Templates using ImageKit.** by Ahmed Alnaqa [View template details](https://n8n.io/workflows/3519-generate-dynamic-images-with-text-and-templates-using-imagekit/) [Browse APITemplate.io integration templates](https://n8n.io/integrations/apitemplateio/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Asana node Use the Asana node to automate work in Asana, and integrate Asana with other applications. n8n has built-in support for a wide range of Asana features, including creating, updating, deleting, and getting users, tasks, projects, and subtasks. 
On this page, you'll find a list of operations the Asana node supports and links to more resources. Credentials Refer to [Asana credentials](../../credentials/asana/) for guidance on setting up authentication. Update to 1.22.2 or above Due to changes in Asana's API, some operations in this node stopped working on 17th January 2023. Upgrade to n8n 1.22.2 or above. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Project - Create a new project - Delete a project - Get a project - Get all projects - Update a project - Subtask - Create a subtask - Get all subtasks - Task - Create a task - Delete a task - Get a task - Get all tasks - Move a task - Search for tasks - Update a task - Task Comment - Add a comment to a task - Remove a comment from a task - Task Tag - Add a tag to a task - Remove a tag from a task - Task Project - Add a task to a project - Remove a task from a project - User - Get a user - Get all users ## Templates and examples **Automated Customer Service Ticket Creation & Notifications with Asana & WhatsApp** by Bela [View template details](https://n8n.io/workflows/2237-automated-customer-service-ticket-creation-and-notifications-with-asana-and-whatsapp/) **Sync tasks data between Notion and Asana** by n8n Team [View template details](https://n8n.io/workflows/1769-sync-tasks-data-between-notion-and-asana/) **Receive updates when an event occurs in Asana** by Harshil Agrawal [View template details](https://n8n.io/workflows/654-receive-updates-when-an-event-occurs-in-asana/) [Browse Asana integration templates](https://n8n.io/integrations/asana/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't 
support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Autopilot node Use the Autopilot node to automate work in Autopilot, and integrate Autopilot with other applications. n8n has built-in support for a wide range of Autopilot features, including creating, deleting, and updating contacts, as well as adding contacts to a list. On this page, you'll find a list of operations the Autopilot node supports and links to more resources. Autopilot branding change Autopilot has become Ortto. The Autopilot credentials and nodes are only compatible with Autopilot, not the new Ortto API. Credentials Refer to [Autopilot credentials](../../credentials/autopilot/) for guidance on setting up authentication. 
## Operations - Contact - Create/Update a contact - Delete a contact - Get a contact - Get all contacts - Contact Journey - Add contact to list - Contact List - Add contact to list - Check if contact is on list - Get all contacts on list - Remove a contact from a list - List - Create a list - Get all lists ## Templates and examples **Viral ASMR Video Factory: Automatically generate viral videos on autopilot.** by Abdellah Homrani [View template details](https://n8n.io/workflows/5324-viral-asmr-video-factory-automatically-generate-viral-videos-on-autopilot/) **Manage contacts via Autopilot** by Harshil Agrawal [View template details](https://n8n.io/workflows/990-manage-contacts-via-autopilot/) **Add new contacts from Autopilot to Airtable** by Harshil Agrawal [View template details](https://n8n.io/workflows/991-add-new-contacts-from-autopilot-to-airtable/) [Browse Autopilot integration templates](https://n8n.io/integrations/autopilot/), or [search all templates](https://n8n.io/workflows/) # AWS Certificate Manager node Use the AWS Certificate Manager node to automate work in AWS Certificate Manager, and integrate AWS Certificate Manager with other applications. n8n has built-in support for a wide range of AWS Certificate Manager features, including creating, deleting, getting, and renewing SSL certificates. On this page, you'll find a list of operations the AWS Certificate Manager node supports and links to more resources. Credentials Refer to [AWS Certificate Manager credentials](../../credentials/aws/) for guidance on setting up authentication. 
## Operations - Certificate - Delete - Get - Get Many - Get Metadata - Renew ## Templates and examples [Browse AWS Certificate Manager integration templates](https://n8n.io/integrations/aws-certificate-manager/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [AWS Certificate Manager's documentation](https://docs.aws.amazon.com/acm/latest/userguide/acm-overview.html) for more information on this service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Cognito node Use the AWS Cognito node to automate work in AWS Cognito and integrate AWS Cognito with other applications. n8n has built-in support for a wide range of AWS Cognito features, which includes creating, retrieving, updating, and deleting groups and users, and retrieving user pools. On this page, you'll find a list of operations the AWS Cognito node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/aws/). ## Operations - Group: - Create: Create a new group. - Delete: Delete an existing group. - Get: Retrieve details about an existing group. - Get Many: Retrieve a list of groups. - Update: Update an existing group. - User: - Add to Group: Add an existing user to a group. - Create: Create a new user. - Delete: Delete a user. - Get: Retrieve information about an existing user. - Get Many: Retrieve a list of users. - Remove From Group: Remove a user from a group. - Update: Update an existing user.
- User Pool: - Get: Retrieve information about an existing user pool. ## Templates and examples **Transcribe audio files from Cloud Storage** by Lorena [View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/) **Extract and store text from chat images using AWS S3** by Lorena [View template details](https://n8n.io/workflows/1393-extract-and-store-text-from-chat-images-using-aws-s3/) **Sync data between Google Drive and AWS S3** by Lorena [View template details](https://n8n.io/workflows/1396-sync-data-between-google-drive-and-aws-s3/) [Browse AWS Cognito integration templates](https://n8n.io/integrations/aws-cognito/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [AWS Cognito's documentation](https://docs.aws.amazon.com/cognito/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Comprehend node Use the AWS Comprehend node to automate work in AWS Comprehend, and integrate AWS Comprehend with other applications. n8n has built-in support for a wide range of AWS Comprehend features, including identifying and analyzing texts. On this page, you'll find a list of operations the AWS Comprehend node supports and links to more resources. Credentials Refer to [AWS Comprehend credentials](../../credentials/aws/) for guidance on setting up authentication. 
## Operations **Text** - Identify the dominant language - Analyze the sentiment of the text ## Templates and examples [Browse AWS Comprehend integration templates](https://n8n.io/integrations/aws-comprehend/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS DynamoDB node Use the AWS DynamoDB node to automate work in AWS DynamoDB, and integrate AWS DynamoDB with other applications. n8n has built-in support for a wide range of AWS DynamoDB features, including creating, reading, updating, and deleting items and records in a database. On this page, you'll find a list of operations the AWS DynamoDB node supports and links to more resources. Credentials Refer to [AWS credentials](../../credentials/aws/) for guidance on setting up authentication. ## Operations - Item - Create a new record, or update the current one if it already exists (upsert/put) - Delete an item - Get an item - Get all items ## Templates and examples [Browse AWS DynamoDB integration templates](https://n8n.io/integrations/aws-dynamodb/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1.
In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Elastic Load Balancing node Use the AWS Elastic Load Balancing node to automate work in AWS ELB, and integrate AWS ELB with other applications. n8n has built-in support for a wide range of AWS ELB features, including adding, getting, and removing listener certificates, and creating, getting, and deleting load balancers. On this page, you'll find a list of operations the AWS ELB node supports and links to more resources. Credentials Refer to [AWS ELB credentials](../../credentials/aws/) for guidance on setting up authentication. ## Operations - Listener Certificate - Add - Get Many - Remove - Load Balancer - Create - Delete - Get - Get Many This node supports creating and managing application and network load balancers. It doesn't currently support gateway load balancers. ## Templates and examples **Transcribe audio files from Cloud Storage** by Lorena [View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/) **Extract and store text from chat images using AWS S3** by Lorena [View template details](https://n8n.io/workflows/1393-extract-and-store-text-from-chat-images-using-aws-s3/) **Sync data between Google Drive and AWS S3** by Lorena [View template details](https://n8n.io/workflows/1396-sync-data-between-google-drive-and-aws-s3/) [Browse AWS Elastic Load Balancing integration templates](https://n8n.io/integrations/aws-elb/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [AWS ELB's documentation](https://docs.aws.amazon.com/elasticloadbalancing/latest/userguide/what-is-load-balancing.html) for more information on this service.
## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS IAM node Use the AWS IAM node to automate work in AWS Identity and Access Management (IAM) and integrate AWS IAM with other applications. n8n has built-in support for a wide range of AWS IAM features, which includes creating, updating, getting, and deleting users and groups, as well as managing group membership. On this page, you'll find a list of operations the AWS IAM node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/aws/). ## Operations - **User**: - **Add to Group**: Add an existing user to a group. - **Create**: Create a new user. - **Delete**: Delete a user. - **Get**: Retrieve a user. - **Get Many**: Retrieve a list of users. - **Remove From Group**: Remove a user from a group. - **Update**: Update an existing user. - **Group**: - **Create**: Create a new group. - **Delete**: Delete an existing group. - **Get**: Retrieve a group. - **Get Many**: Retrieve a list of groups. - **Update**: Update an existing group.
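The User and Group operations map onto IAM's flat membership model: users exist independently and can belong to several groups. A toy in-memory sketch of the Add to Group / Remove From Group semantics (illustration only; the node itself calls the AWS IAM API, and these helper names are made up):

```python
# Toy model of IAM-style group membership, mirroring the node's
# "Add to Group" and "Remove From Group" user operations.

class GroupDirectory:
    def __init__(self):
        self.groups = {}  # group name -> set of user names

    def create_group(self, name):
        self.groups.setdefault(name, set())

    def add_to_group(self, user, group):
        if group not in self.groups:
            # IAM reports a NoSuchEntity error for unknown groups.
            raise KeyError(f"NoSuchEntity: group {group!r} does not exist")
        self.groups[group].add(user)

    def remove_from_group(self, user, group):
        self.groups.get(group, set()).discard(user)

    def get_many(self, group):
        return sorted(self.groups.get(group, set()))

directory = GroupDirectory()
directory.create_group("admins")
directory.add_to_group("alice", "admins")
directory.add_to_group("bob", "admins")
directory.remove_from_group("bob", "admins")
print(directory.get_many("admins"))  # ['alice']
```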
## Templates and examples **Automated GitHub Scanner for Exposed AWS IAM Keys** by Niranjan G [View template details](https://n8n.io/workflows/5021-automated-github-scanner-for-exposed-aws-iam-keys/) **Automated AWS IAM Key Compromise Response with Slack & Claude AI** by Niranjan G [View template details](https://n8n.io/workflows/5123-automated-aws-iam-key-compromise-response-with-slack-and-claude-ai/) **Send Slack Alerts for AWS IAM Access Keys Older Than 365 Days** by Trung Tran [View template details](https://n8n.io/workflows/7501-send-slack-alerts-for-aws-iam-access-keys-older-than-365-days/) [Browse AWS IAM integration templates](https://n8n.io/integrations/aws-iam/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to the [AWS IAM documentation](https://docs.aws.amazon.com/IAM/latest/APIReference/welcome.html) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Lambda node Use the AWS Lambda node to automate work in AWS Lambda, and integrate AWS Lambda with other applications. n8n has built-in support for a wide range of AWS Lambda features, including invoking functions. On this page, you'll find a list of operations the AWS Lambda node supports and links to more resources. Credentials Refer to [AWS Lambda credentials](../../credentials/aws/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Invoke a function ## Templates and examples **Invoke an AWS Lambda function** by amudhan [View template details](https://n8n.io/workflows/510-invoke-an-aws-lambda-function/) **Convert and Manipulate PDFs with Api2Pdf and AWS Lambda** by David Ashby [View template details](https://n8n.io/workflows/5522-convert-and-manipulate-pdfs-with-api2pdf-and-aws-lambda/) **Detect AWS Orphaned Resources & Send Cost Reports to Slack, Email, and Sheets** by Chad M. Crowell [View template details](https://n8n.io/workflows/11612-detect-aws-orphaned-resources-and-send-cost-reports-to-slack-email-and-sheets/) [Browse AWS Lambda integration templates](https://n8n.io/integrations/aws-lambda/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Rekognition node Use the AWS Rekognition node to automate work in AWS Rekognition, and integrate AWS Rekognition with other applications. n8n has built-in support for a wide range of AWS Rekognition features, including analyzing images. On this page, you'll find a list of operations the AWS Rekognition node supports and links to more resources. Credentials Refer to [AWS Rekognition credentials](../../credentials/aws/) for guidance on setting up authentication. 
## Operations **Image** - Analyze ## Templates and examples [Browse AWS Rekognition integration templates](https://n8n.io/integrations/aws-rekognition/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS S3 node Use the AWS S3 node to automate work in AWS S3, and integrate AWS S3 with other applications. n8n has built-in support for a wide range of AWS S3 features, including creating and deleting buckets, copying and downloading files, as well as getting folders. On this page, you'll find a list of operations the AWS S3 node supports and links to more resources. Credentials Refer to [AWS credentials](../../credentials/aws/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
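One thing worth knowing when using the folder operations: Amazon S3 has no true directories. A bucket is a flat namespace of keys, and "folders" are conventionally key prefixes ending in `/`. A small sketch (a hypothetical helper, not part of the node) of how a folder listing falls out of prefixes:

```python
# S3 buckets are flat key/value stores; "folders" are just key prefixes.
# Hypothetical helper deriving a folder listing from a list of keys.

def list_folder(keys, prefix, delimiter="/"):
    """Mimic S3's prefix/delimiter listing: return direct children of prefix."""
    children = set()
    for key in keys:
        if not key.startswith(prefix):
            continue
        rest = key[len(prefix):]
        head, sep, _ = rest.partition(delimiter)
        # Keys nested more deeply collapse into a single "subfolder" entry.
        children.add(head + delimiter if sep else head)
    return sorted(children)

keys = [
    "reports/2024/jan.csv",
    "reports/2024/feb.csv",
    "reports/readme.txt",
]
print(list_folder(keys, "reports/"))  # ['2024/', 'readme.txt']
```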
## Operations - Bucket - Create a bucket - Delete a bucket - Get all buckets - Search within a bucket - File - Copy a file - Delete a file - Download a file - Get all files - Upload a file - Folder - Create a folder - Delete a folder - Get all folders ## Templates and examples **Transcribe audio files from Cloud Storage** by Lorena [View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/) **Extract and store text from chat images using AWS S3** by Lorena [View template details](https://n8n.io/workflows/1393-extract-and-store-text-from-chat-images-using-aws-s3/) **Sync data between Google Drive and AWS S3** by Lorena [View template details](https://n8n.io/workflows/1396-sync-data-between-google-drive-and-aws-s3/) [Browse AWS S3 integration templates](https://n8n.io/integrations/aws-s3/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS SES node Use the AWS SES node to automate work in AWS SES, and integrate AWS SES with other applications. n8n has built-in support for a wide range of AWS SES features, including creating, getting, deleting, sending, updating, and adding templates and emails. On this page, you'll find a list of operations the AWS SES node supports and links to more resources. Credentials Refer to [AWS SES credentials](../../credentials/aws/) for guidance on setting up authentication. 
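The template operations work with SES email templates, which substitute `{{placeholder}}` variables into the subject and body at send time. A minimal sketch of that substitution (illustration only; SES performs this server-side, and the template text here is made up):

```python
import re

# SES templates substitute {{variable}} placeholders at send time.
# Minimal local illustration of that behavior.

def render(template, data):
    return re.sub(
        r"\{\{(\w+)\}\}",
        lambda m: str(data.get(m.group(1), "")),
        template,
    )

subject = render("Your invoice, {{name}}", {"name": "Ada"})
print(subject)  # Your invoice, Ada
```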
This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Custom Verification Email - Create a new custom verification email template - Delete an existing custom verification email template - Get the custom email verification template - Get all the existing custom verification email templates for your account - Add an email address to the list of identities - Update an existing custom verification email template. - Email - Send - Send Template - Template - Create a template - Delete a template - Get a template - Get all templates - Update a template ## Templates and examples **Create screenshots with uProc, save to Dropbox and send by email** by Miquel Colomer [View template details](https://n8n.io/workflows/857-create-screenshots-with-uproc-save-to-dropbox-and-send-by-email/) **Send an email using AWS SES** by amudhan [View template details](https://n8n.io/workflows/507-send-an-email-using-aws-ses/) **Auto-Notify on New Major n8n Releases via RSS, Email & Telegram** by Miquel Colomer [View template details](https://n8n.io/workflows/736-auto-notify-on-new-major-n8n-releases-via-rss-email-and-telegram/) [Browse AWS SES integration templates](https://n8n.io/integrations/aws-ses/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. 
Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS SNS node Use the AWS SNS node to automate work in AWS SNS, and integrate AWS SNS with other applications. n8n has built-in support for a wide range of AWS SNS features, including publishing messages. On this page, you'll find a list of operations the AWS SNS node supports and links to more resources. Credentials Refer to [AWS SNS credentials](../../credentials/aws/) for guidance on setting up authentication. ## Operations - Publish a message to a topic ## Templates and examples [Browse AWS SNS integration templates](https://n8n.io/integrations/aws-sns/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS SQS node Use the AWS SQS node to automate work in AWS SQS, and integrate AWS SQS with other applications. n8n has built-in support for a wide range of AWS SQS features, including sending messages. On this page, you'll find a list of operations the AWS SQS node supports and links to more resources. Credentials Refer to [AWS SQS credentials](../../credentials/aws/) for guidance on setting up authentication. ## Operations - Send a message to a queue.
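The send operation corresponds to the SQS `SendMessage` API, which takes a queue URL, a message body, and optional typed message attributes. A sketch of the request shape (the helper and queue URL are made-up examples, not part of the node):

```python
import json

# Build a SendMessage-style request. SQS message attributes are typed:
# each attribute needs a DataType alongside its value.

def build_send_message(queue_url, body, attributes=None):
    params = {"QueueUrl": queue_url, "MessageBody": body}
    if attributes:
        params["MessageAttributes"] = {
            name: {"DataType": "String", "StringValue": str(value)}
            for name, value in attributes.items()
        }
    return params

msg = build_send_message(
    "https://sqs.us-east-1.amazonaws.com/123456789012/example-queue",
    json.dumps({"orderId": 42}),
    attributes={"source": "n8n"},
)
print(msg["MessageAttributes"]["source"]["StringValue"])  # n8n
```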
## Templates and examples [Browse AWS SQS integration templates](https://n8n.io/integrations/aws-sqs/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Textract node Use the AWS Textract node to automate work in AWS Textract, and integrate AWS Textract with other applications. n8n has built-in support for a wide range of AWS Textract features, including analyzing invoices. On this page, you'll find a list of operations the AWS Textract node supports and links to more resources. Credentials Refer to [AWS Textract credentials](../../credentials/aws/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Analyze Receipt or Invoice ## Templates and examples **Extract and store text from chat images using AWS S3** by Lorena [View template details](https://n8n.io/workflows/1393-extract-and-store-text-from-chat-images-using-aws-s3/) **Send a file from S3 to AWS Textract** by Tom [View template details](https://n8n.io/workflows/1282-send-a-file-from-s3-to-aws-textract/) **Summarize invoices with AWS Textract, Google Gemini, and send to Slack** by Pixcels Themes [View template details](https://n8n.io/workflows/13803-summarize-invoices-with-aws-textract-google-gemini-and-send-to-slack/) [Browse AWS Textract integration templates](https://n8n.io/integrations/aws-textract/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # AWS Transcribe node Use the AWS Transcribe node to automate work in AWS Transcribe, and integrate AWS Transcribe with other applications. n8n has built-in support for a wide range of AWS Transcribe features, including creating, deleting, and getting transcription jobs. On this page, you'll find a list of operations the AWS Transcribe node supports and links to more resources. Credentials Refer to [AWS Transcribe credentials](../../credentials/aws/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations **Transcription Job** - Create a transcription job - Delete a transcription job - Get a transcription job - Get all transcription jobs ## Templates and examples **Transcribe audio files from Cloud Storage** by Lorena [View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/) **Create transcription jobs using AWS Transcribe** by Harshil Agrawal [View template details](https://n8n.io/workflows/1111-create-transcription-jobs-using-aws-transcribe/) **🛠️ AWS Transcribe Tool MCP Server 💪 all operations** by David Ashby [View template details](https://n8n.io/workflows/5330-aws-transcribe-tool-mcp-server-all-operations/) [Browse AWS Transcribe integration templates](https://n8n.io/integrations/aws-transcribe/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Azure Cosmos DB node Use the Azure Cosmos DB node to automate work in Azure Cosmos DB and integrate Azure Cosmos DB with other applications. n8n has built-in support for a wide range of Azure Cosmos DB features, which includes creating, getting, updating, and deleting containers and items. On this page, you'll find a list of operations the Azure Cosmos DB node supports, and links to more resources.
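Among the Item operations is Execute Query, which takes a Cosmos DB SQL query. Cosmos DB parameterizes queries with `@`-prefixed names passed in a separate `parameters` list. A sketch of the query body shape (the helper and values are made-up examples):

```python
# Cosmos DB SQL queries are parameterized with @-prefixed names.
# Hypothetical helper building the query body sent to the service.

def build_query(sql, **params):
    return {
        "query": sql,
        "parameters": [
            {"name": f"@{key}", "value": value} for key, value in params.items()
        ],
    }

body = build_query(
    "SELECT * FROM c WHERE c.category = @category AND c.year >= @year",
    category="books",
    year=2020,
)
print(body["parameters"][0])  # {'name': '@category', 'value': 'books'}
```

Parameterizing this way, rather than interpolating values into the SQL string, avoids injection issues and lets Cosmos DB cache the query plan.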
Credentials You can find authentication information for this node [here](../../credentials/azurecosmosdb/). ## Operations - **Container**: - **Create** - **Delete** - **Get** - **Get Many** - **Item**: - **Create** - **Delete** - **Get** - **Get Many** - **Execute Query** - **Update** ## Templates and examples **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **Build Your Own Counseling Chatbot on LINE to Support Mental Health Conversations** by lin@davoy.tech [View template details](https://n8n.io/workflows/2975-build-your-own-counseling-chatbot-on-line-to-support-mental-health-conversations/) **CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync** by Angel Menendez [View template details](https://n8n.io/workflows/3035-callforge-05-gongio-call-analysis-with-azure-ai-and-crm-sync/) [Browse Azure Cosmos DB integration templates](https://n8n.io/integrations/azure-cosmos-db/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Azure Cosmos DB's documentation](https://learn.microsoft.com/en-us/rest/api/cosmos-db/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Azure Storage node The Azure Storage node has built-in support for a wide range of features, which includes creating, getting, and deleting blobs and containers. 
Use this node to automate work within the Azure Storage service or integrate it with other services in your workflow. On this page, you'll find a list of operations the Azure Storage node supports, and links to more resources. Credentials Refer to [Azure Storage credentials](../../credentials/azurestorage/) for guidance on setting up authentication. ## Operations - **Blob** - **Create blob**: Create a new blob or replace an existing one. - **Delete blob**: Delete an existing blob. - **Get blob**: Retrieve data for a specific blob. - **Get many blobs**: Retrieve a list of blobs. - **Container** - **Create container**: Create a new container. - **Delete container**: Delete an existing container. - **Get container**: Retrieve data for a specific container. - **Get many containers**: Retrieve a list of containers. ## Templates and examples **Automated AWS S3 / Azure / Google to local MinIO Object Backup with Scheduling** by SIENNA [View template details](https://n8n.io/workflows/7284-automated-aws-s3-azure-google-to-local-minio-object-backup-with-scheduling/) **Generate and Store AI Images with DALL-E and Azure Blob Storage** by Trung Tran [View template details](https://n8n.io/workflows/7648-generate-and-store-ai-images-with-dall-e-and-azure-blob-storage/) **Qualify and email literary agents with GPT‑4.1, Gmail and Google Sheets** by malcolm [View template details](https://n8n.io/workflows/12651-qualify-and-email-literary-agents-with-gpt41-gmail-and-google-sheets/) [Browse Azure Storage integration templates](https://n8n.io/integrations/azure-storage/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Microsoft's Azure Storage documentation](https://learn.microsoft.com/en-us/rest/api/storageservices/) for more information about the service. 
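To make the mapping concrete, the **Get many containers** operation corresponds to the Blob service's List Containers REST call. Here is a minimal sketch of how that request could be assembled; the storage account name and SAS token are hypothetical placeholders, not real credentials:

```javascript
// Sketch: build a List Containers request for the Azure Blob service REST API.
// The account name and SAS token passed in are hypothetical placeholders.
function buildListContainersRequest(accountName, sasToken) {
  // `comp=list` selects the List Containers operation on the service endpoint;
  // the SAS token is appended as query parameters for authentication.
  const url = `https://${accountName}.blob.core.windows.net/?comp=list&${sasToken}`;
  return {
    method: "GET",
    url,
    // The Blob service returns XML by default.
    headers: { Accept: "application/xml" },
  };
}

const request = buildListContainersRequest(
  "mystorageaccount",
  "sv=2025-01-01&sig=placeholder"
);
console.log(request.url);
```

The same URL and method can be entered into an HTTP Request node instead of being sent from code.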
## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # BambooHR node Use the BambooHR node to automate work in BambooHR, and integrate BambooHR with other applications. n8n has built-in support for a wide range of BambooHR features, including creating, deleting, downloading, and getting company reports, employee documents, and files. On this page, you'll find a list of operations the BambooHR node supports and links to more resources. Credentials Refer to [BambooHR credentials](../../credentials/bamboohr/) for guidance on setting up authentication. 
## Operations - Company Report - Get a company report - Employee - Create an employee - Get an employee - Get all employees - Update an employee - Employee Document - Delete an employee document - Download an employee document - Get all employee documents - Update an employee document - Upload an employee document - File - Delete a company file - Download a company file - Get all company files - Update a company file - Upload a company file ## Templates and examples **BambooHR AI-Powered Company Policies and Benefits Chatbot** by Ludwig [View template details](https://n8n.io/workflows/2850-bamboohr-ai-powered-company-policies-and-benefits-chatbot/) **Test Webhooks in n8n Without Changing WEBHOOK_URL (PostBin & BambooHR Example)** by Ludwig [View template details](https://n8n.io/workflows/2869-test-webhooks-in-n8n-without-changing-webhookurl-postbin-and-bamboohr-example/) **🛠️ BambooHR Tool MCP Server 💪 all 15 operations** by David Ashby [View template details](https://n8n.io/workflows/5329-bamboohr-tool-mcp-server-all-15-operations/) [Browse BambooHR integration templates](https://n8n.io/integrations/bamboohr/), or [search all templates](https://n8n.io/workflows/) # Bannerbear node Use the Bannerbear node to automate work in Bannerbear, and integrate Bannerbear with other applications. n8n has built-in support for a wide range of Bannerbear features, including creating and getting images and templates. On this page, you'll find a list of operations the Bannerbear node supports and links to more resources. Credentials Refer to [Bannerbear credentials](../../credentials/bannerbear/) for guidance on setting up authentication. 
## Operations - Image - Create an image - Get an image - Template - Get a template - Get all templates ## Templates and examples **Speed Up Social Media Banners With BannerBear.com** by Jimleuk [View template details](https://n8n.io/workflows/2322-speed-up-social-media-banners-with-bannerbearcom/) **Render custom text over images** by tanaypant [View template details](https://n8n.io/workflows/365-render-custom-text-over-images/) **Send Airtable data as tasks to Trello** by tanaypant [View template details](https://n8n.io/workflows/385-send-airtable-data-as-tasks-to-trello/) [Browse Bannerbear integration templates](https://n8n.io/integrations/bannerbear/), or [search all templates](https://n8n.io/workflows/) # Baserow node Use the Baserow node to automate work in Baserow, and integrate Baserow with other applications. n8n has built-in support for a wide range of Baserow features, including creating, getting, retrieving, and updating rows. On this page, you'll find a list of operations the Baserow node supports and links to more resources. Credentials Refer to [Baserow credentials](../../credentials/baserow/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Row - Create multiple rows - Delete multiple rows - Update multiple rows - Create a row - Delete a row - Get a row - Get many rows - Update a row ## Templates and examples **All-in-One Telegram/Baserow AI Assistant 🤖🧠 Voice/Photo/Save Notes/Long Term Mem** by Rod [View template details](https://n8n.io/workflows/2986-all-in-one-telegrambaserow-ai-assistant-voicephotosave-noteslong-term-mem/) **User Enablement Demo** by jason [View template details](https://n8n.io/workflows/1500-user-enablement-demo/) **Create AI Videos with OpenAI Scripts, Leonardo Images & HeyGen Avatars** by Abdellah Homrani [View template details](https://n8n.io/workflows/4107-create-ai-videos-with-openai-scripts-leonardo-images-and-heygen-avatars/) [Browse Baserow integration templates](https://n8n.io/integrations/baserow/), or [search all templates](https://n8n.io/workflows/) # Beeminder node Use the Beeminder node to automate work in Beeminder, and integrate Beeminder with other applications. n8n has built-in support for a wide range of Beeminder features, including creating, deleting, and updating data points. On this page, you'll find a list of operations the Beeminder node supports and links to more resources. Credentials Refer to [Beeminder credentials](../../credentials/beeminder/) for guidance on setting up authentication. ## Operations **data point** - Create data point for a goal - Delete a data point - Get all data points for a goal - Update a data point ## Templates and examples [Browse Beeminder integration templates](https://n8n.io/integrations/beeminder/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. 
In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Bitly node Use the Bitly node to automate work in Bitly, and integrate Bitly with other applications. n8n has built-in support for a wide range of Bitly features, including creating, getting, and updating links. On this page, you'll find a list of operations the Bitly node supports and links to more resources. Credentials Refer to [Bitly credentials](../../credentials/bitly/) for guidance on setting up authentication. ## Operations - Link - Create a link - Get a link - Update a link ## Templates and examples **Explore n8n Nodes in a Visual Reference Library** by I versus AI [View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/) **Create a URL on Bitly** by sshaligr [View template details](https://n8n.io/workflows/442-create-a-url-on-bitly/) **Automate URL shortening with Bitly using Llama3 chat interface** by Ghufran Ridhawi [View template details](https://n8n.io/workflows/3885-automate-url-shortening-with-bitly-using-llama3-chat-interface/) [Browse Bitly integration templates](https://n8n.io/integrations/bitly/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
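As an illustration of that fallback, a Bitly operation the node doesn't expose still reduces to a single REST call. The sketch below assembles the request an HTTP Request node would be configured to send to Bitly's v4 shorten endpoint; the access token and long URL are hypothetical placeholders:

```javascript
// Sketch: assemble a request for Bitly's v4 shorten endpoint.
// The access token passed in is a hypothetical placeholder.
function buildShortenRequest(longUrl, accessToken) {
  return {
    method: "POST",
    url: "https://api-ssl.bitly.com/v4/shorten",
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "Content-Type": "application/json",
    },
    // Bitly expects the target URL in the `long_url` field.
    body: JSON.stringify({ long_url: longUrl }),
  };
}

const shorten = buildShortenRequest("https://n8n.io/", "BITLY_TOKEN_PLACEHOLDER");
console.log(shorten.body);
```

In the HTTP Request node, the same call is expressed by setting the method, URL, and JSON body fields while the predefined Bitly credential supplies the Authorization header.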
# Bitwarden node Use the Bitwarden node to automate work in Bitwarden, and integrate Bitwarden with other applications. n8n has built-in support for a wide range of Bitwarden features, including creating, getting, deleting, and updating collections, events, groups, and members. On this page, you'll find a list of operations the Bitwarden node supports and links to more resources. Credentials Refer to [Bitwarden credentials](../../credentials/bitwarden/) for guidance on setting up authentication. ## Operations - Collection - Delete - Get - Get All - Update - Event - Get All - Group - Create - Delete - Get - Get All - Get Members - Update - Update Members - Member - Create - Delete - Get - Get All - Get Groups - Update - Update Groups ## Templates and examples [Browse Bitwarden integration templates](https://n8n.io/integrations/bitwarden/), or [search all templates](https://n8n.io/workflows/) # Box node Use the Box node to automate work in Box, and integrate Box with other applications. n8n has built-in support for a wide range of Box features, including creating, copying, deleting, searching, uploading, and downloading files and folders. On this page, you'll find a list of operations the Box node supports and links to more resources. Credentials Refer to [Box credentials](../../credentials/box/) for guidance on setting up authentication. 
## Operations - File - Copy a file - Delete a file - Download a file - Get a file - Search files - Share a file - Upload a file - Folder - Create a folder - Get a folder - Delete a folder - Search files - Share a folder - Update folder ## Templates and examples **Automated Video Translation & Distribution with DubLab to Multiple Platforms** by Behram [View template details](https://n8n.io/workflows/4751-automated-video-translation-and-distribution-with-dublab-to-multiple-platforms/) **Create a new folder in Box** by amudhan [View template details](https://n8n.io/workflows/559-create-a-new-folder-in-box/) **Receive updates for events in Box** by amudhan [View template details](https://n8n.io/workflows/560-receive-updates-for-events-in-box/) [Browse Box integration templates](https://n8n.io/integrations/box/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Brandfetch node Use the Brandfetch node to automate work in Brandfetch, and integrate Brandfetch with other applications. n8n has built-in support for a wide range of Brandfetch features, including returning a company’s information. On this page, you'll find a list of operations the Brandfetch node supports and links to more resources. Credentials Refer to [Brandfetch credentials](../../credentials/brandfetch/) for guidance on setting up authentication. 
## Operations - Return a company's colors - Return a company's data - Return a company's fonts - Return a company's industry - Return a company's logo & icon ## Templates and examples [Browse Brandfetch integration templates](https://n8n.io/integrations/brandfetch/), or [search all templates](https://n8n.io/workflows/) # Brevo node Use the Brevo node to automate work in Brevo, and integrate Brevo with other applications. n8n has built-in support for a wide range of Brevo features, including creating, updating, deleting, and getting contacts and contact attributes, as well as sending emails. On this page, you'll find a list of operations the Brevo node supports and links to more resources. Credentials Refer to [Brevo credentials](../../credentials/brevo/) for guidance on setting up authentication. ## Operations - Contact - Create - Create or Update - Delete - Get - Get All - Update - Contact Attribute - Create - Delete - Get All - Update - Email - Send - Send Template - Sender - Create - Delete - Get All ## Templates and examples **Smart Email Auto-Responder Template using AI** by Amjid Ali [View template details](https://n8n.io/workflows/3277-smart-email-auto-responder-template-using-ai/) **Automate Lead Generation with Apollo, AI Scoring and Brevo Email Outreach** by Luka Zivkovic [View template details](https://n8n.io/workflows/4539-automate-lead-generation-with-apollo-ai-scoring-and-brevo-email-outreach/) **Create Leads in SuiteCRM, synchronize with Brevo and notify in NextCloud** by algopi.io [View template details](https://n8n.io/workflows/2291-create-leads-in-suitecrm-synchronize-with-brevo-and-notify-in-nextcloud/) [Browse Brevo integration templates](https://n8n.io/integrations/brevo/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Bubble node Use the Bubble node to automate work in Bubble, and integrate Bubble with other applications. n8n has built-in support for a wide range of Bubble features, including creating, deleting, getting, and updating objects. On this page, you'll find a list of operations the Bubble node supports and links to more resources. Credentials Refer to [Bubble credentials](../../credentials/bubble/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Object - Create - Delete - Get - Get All - Update ## Templates and examples **Create, update, and get an object from Bubble** by Harshil Agrawal [View template details](https://n8n.io/workflows/1041-create-update-and-get-an-object-from-bubble/) **Access data from bubble application** by jason [View template details](https://n8n.io/workflows/879-access-data-from-bubble-application/) **AI Agent Integration for Bubble Apps with MCP Protocol Data Access** by Mohamed Salama [View template details](https://n8n.io/workflows/4952-ai-agent-integration-for-bubble-apps-with-mcp-protocol-data-access/) [Browse Bubble integration templates](https://n8n.io/integrations/bubble/), or [search all templates](https://n8n.io/workflows/) # Chargebee node Use the Chargebee node to automate work in Chargebee, and integrate Chargebee with other applications. 
n8n has built-in support for a wide range of Chargebee features, including creating customers, returning invoices, and canceling subscriptions. On this page, you'll find a list of operations the Chargebee node supports and links to more resources. Credentials Refer to [Chargebee credentials](../../credentials/chargebee/) for guidance on setting up authentication. ## Operations - Customer - Create a customer - Invoice - Return the invoices - Get URL for the invoice PDF - Subscription - Cancel a subscription - Delete a subscription ## Templates and examples [Browse Chargebee integration templates](https://n8n.io/integrations/chargebee/), or [search all templates](https://n8n.io/workflows/) # CircleCI node Use the CircleCI node to automate work in CircleCI, and integrate CircleCI with other applications. n8n has built-in support for a wide range of CircleCI features, including getting and triggering pipelines. On this page, you'll find a list of operations the CircleCI node supports and links to more resources. Credentials Refer to [CircleCI credentials](../../credentials/circleci/) for guidance on setting up authentication. ## Operations - Pipeline - Get a pipeline - Get all pipelines - Trigger a pipeline ## Templates and examples [Browse CircleCI integration templates](https://n8n.io/integrations/circleci/), or [search all templates](https://n8n.io/workflows/) # Webex by Cisco node Use the Webex by Cisco node to automate work in Webex, and integrate Webex with other applications. n8n has built-in support for a wide range of Webex features, including creating, getting, updating, and deleting meetings and messages. On this page, you'll find a list of operations the Webex node supports and links to more resources. Credentials Refer to [Webex credentials](../../credentials/ciscowebex/) for guidance on setting up authentication. 
Examples and Templates For usage examples and templates to help you get started, take a look at n8n's [Webex integrations](https://n8n.io/integrations/webex-by-cisco/) list. ## Operations - Meeting - Create - Delete - Get - Get All - Update - Message - Create - Delete - Get - Get All - Update ## Templates and examples [Browse Webex by Cisco integration templates](https://n8n.io/integrations/webex-by-cisco/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Clearbit node Use the Clearbit node to automate work in Clearbit, and integrate Clearbit with other applications. n8n has built-in support for a wide range of Clearbit features, including autocompleting and looking up companies and persons. On this page, you'll find a list of operations the Clearbit node supports and links to more resources. Credentials Refer to [Clearbit credentials](../../credentials/clearbit/) for guidance on setting up authentication. 
## Operations - Company - Auto-complete company names and retrieve logo and domain - Look up person and company data based on an email or domain - Person - Look up a person and company data based on an email or domain ## Templates and examples **Summarize social media activity of a company before a call** by Milorad Filipović [View template details](https://n8n.io/workflows/2125-summarize-social-media-activity-of-a-company-before-a-call/) **Verify emails & enrich new form leads and save them to HubSpot** by Niklas Hatje [View template details](https://n8n.io/workflows/2116-verify-emails-and-enrich-new-form-leads-and-save-them-to-hubspot/) **List social media activity of a company before a call** by Milorad Filipović [View template details](https://n8n.io/workflows/2124-list-social-media-activity-of-a-company-before-a-call/) [Browse Clearbit integration templates](https://n8n.io/integrations/clearbit/), or [search all templates](https://n8n.io/workflows/) # ClickUp node Use the ClickUp node to automate work in ClickUp, and integrate ClickUp with other applications. n8n has built-in support for a wide range of ClickUp features, including creating, getting, deleting, and updating folders, checklists, tags, comments, and goals. On this page, you'll find a list of operations the ClickUp node supports and links to more resources. Credentials Refer to [ClickUp credentials](../../credentials/clickup/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Checklist - Create a checklist - Delete a checklist - Update a checklist - Checklist Item - Create a checklist item - Delete a checklist item - Update a checklist item - Comment - Create a comment - Delete a comment - Get all comments - Update a comment - Folder - Create a folder - Delete a folder - Get a folder - Get all folders - Update a folder - Goal - Create a goal - Delete a goal - Get a goal - Get all goals - Update a goal - Goal Key Result - Create a key result - Delete a key result - Update a key result - List - Create a list - Retrieve a list's custom fields - Delete a list - Get a list - Get all lists - Get list members - Update a list - Space Tag - Create a space tag - Delete a space tag - Get all space tags - Update a space tag - Task - Create a task - Delete a task - Get a task - Get all tasks - Get task members - Set a custom field - Update a task - Task List - Add a task to a list - Remove a task from a list - Task Tag - Add a tag to a task - Remove a tag from a task - Task Dependency - Create a task dependency - Delete a task dependency - Time Entry - Create a time entry - Delete a time entry - Get a time entry - Get all time entries - Start a time entry - Stop the current running timer - Update a time entry - Time Entry Tag - Add tag to time entry - Get all time entry tags - Remove tag from time entry ## Operation details ### Get a task When using the **Get a task** operation, you can optionally enable the following: - **Include Subtasks**: When enabled, also fetches and includes subtasks for the specified task. - **Include Markdown Description**: When enabled, includes the `markdown_description` field in the response, which preserves links and formatting in the task description. This is useful if your task descriptions contain links or rich formatting. 
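For reference, these two toggles correspond to query parameters on ClickUp's Get Task endpoint (`include_subtasks` and `include_markdown_description`). Here is a minimal sketch of the URL such a request would use; the task ID is a hypothetical placeholder, and the path assumes ClickUp's v2 API:

```javascript
// Sketch: build a ClickUp Get Task URL with the optional toggles applied.
// The task ID passed in is a hypothetical placeholder.
function buildGetTaskUrl(taskId, { includeSubtasks = false, includeMarkdown = false } = {}) {
  const params = new URLSearchParams();
  if (includeSubtasks) params.set("include_subtasks", "true");
  if (includeMarkdown) params.set("include_markdown_description", "true");
  const query = params.toString();
  // Only append a query string when at least one toggle is enabled.
  return `https://api.clickup.com/api/v2/task/${taskId}` + (query ? `?${query}` : "");
}

console.log(buildGetTaskUrl("abc123", { includeSubtasks: true, includeMarkdown: true }));
```

With both options off, the function returns the bare task URL, matching the node's default behavior of fetching the task alone.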
## Templates and examples **Zoom AI Meeting Assistant creates mail summary, ClickUp tasks and follow-up call** by Friedemann Schuetz [View template details](https://n8n.io/workflows/2800-zoom-ai-meeting-assistant-creates-mail-summary-clickup-tasks-and-follow-up-call/) **Create a task in ClickUp** by tanaypant [View template details](https://n8n.io/workflows/485-create-a-task-in-clickup/) **Sync Notion database pages as ClickUp tasks** by n8n Team [View template details](https://n8n.io/workflows/1835-sync-notion-database-pages-as-clickup-tasks/) [Browse ClickUp integration templates](https://n8n.io/integrations/clickup/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Clockify node Use the Clockify node to automate work in Clockify, and integrate Clockify with other applications. n8n has built-in support for a wide range of Clockify features, including creating, updating, getting, and deleting tasks, time entries, projects, and tags. On this page, you'll find a list of operations the Clockify node supports and links to more resources. Credentials Refer to [Clockify credentials](../../credentials/clockify/) for guidance on setting up authentication. 
## Operations - Project - Create a project - Delete a project - Get a project - Get all projects - Update a project - Tag - Create a tag - Delete a tag - Get all tags - Update a tag - Task - Create a task - Delete a task - Get a task - Get all tasks - Update a task - Time Entry - Create a time entry - Delete a time entry - Get time entry - Update a time entry ## Templates and examples **Time logging on Clockify using Slack** by Blockia Labs [View template details](https://n8n.io/workflows/2604-time-logging-on-clockify-using-slack/) **Manage projects in Clockify** by Harshil Agrawal [View template details](https://n8n.io/workflows/701-manage-projects-in-clockify/) **Update time-tracking projects based on Syncro status changes** by Jonathan [View template details](https://n8n.io/workflows/1492-update-time-tracking-projects-based-on-syncro-status-changes/) [Browse Clockify integration templates](https://n8n.io/integrations/clockify/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Cloudflare node Use the Cloudflare node to automate work in Cloudflare, and integrate Cloudflare with other applications. n8n has built-in support for a wide range of Cloudflare features, including deleting, getting, and uploading zone certificates. On this page, you'll find a list of operations the Cloudflare node supports and links to more resources. 
Credentials Refer to [Cloudflare credentials](../../credentials/cloudflare/) for guidance on setting up authentication. ## Operations - Zone Certificate - Delete - Get - Get Many - Upload ## Templates and examples **Report phishing websites to Steam and CloudFlare** by chaufnet [View template details](https://n8n.io/workflows/122-report-phishing-websites-to-steam-and-cloudflare/) **KV - Cloudflare Key-Value Database Full API Integration Workflow** by Nskha [View template details](https://n8n.io/workflows/2046-kv-cloudflare-key-value-database-full-api-integration-workflow/) **Extract University Term Dates from Excel using CloudFlare Markdown Conversion** by Jimleuk [View template details](https://n8n.io/workflows/3505-extract-university-term-dates-from-excel-using-cloudflare-markdown-conversion/) [Browse Cloudflare integration templates](https://n8n.io/integrations/cloudflare/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Cloudflare's API documentation on zone-level authentication](https://api.cloudflare.com/#zone-level-authenticated-origin-pulls-properties) for more information on this service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Cockpit node Use the Cockpit node to automate work in Cockpit, and integrate Cockpit with other applications. n8n has built-in support for a wide range of Cockpit features, including creating a collection entry, storing data from a form submission, and getting singletons. 
On this page, you'll find a list of operations the Cockpit node supports and links to more resources. Credentials Refer to [Cockpit credentials](../../credentials/cockpit/) for guidance on setting up authentication. ## Operations - Collection - Create a collection entry - Get all collection entries - Update a collection entry - Form - Store data from a form submission - Singleton - Get a singleton ## Templates and examples [Browse Cockpit integration templates](https://n8n.io/integrations/cockpit/), or [search all templates](https://n8n.io/workflows/) # Coda node Use the Coda node to automate work in Coda, and integrate Coda with other applications. n8n has built-in support for a wide range of Coda features, including creating, getting, and deleting controls, formulas, tables, and views. On this page, you'll find a list of operations the Coda node supports and links to more resources. Credentials Refer to [Coda credentials](../../credentials/coda/) for guidance on setting up authentication. 
## Operations - Control - Get a control - Get all controls - Formula - Get a formula - Get all formulas - Table - Create/Insert a row - Delete one or multiple rows - Get all columns - Get all rows - Get a column - Get a row - Push a button - View - Delete a view row - Get a view - Get all views - Get all view columns - Get all view rows - Update a view row - Push a view button ## Templates and examples **Insert data into a new row for a table in Coda** by tanaypant [View template details](https://n8n.io/workflows/482-insert-data-into-a-new-row-for-a-table-in-coda/) **AI Agent Managed Tables and Views with 🛠️ Coda Tool MCP Server 💪 18 operations** by David Ashby [View template details](https://n8n.io/workflows/5319-ai-agent-managed-tables-and-views-with-coda-tool-mcp-server-18-operations/) **Sync your HRIS employee directory with Microsoft Teams, Coda, and Slack** by vinci-king-01 [View template details](https://n8n.io/workflows/12586-sync-your-hris-employee-directory-with-microsoft-teams-coda-and-slack/) [Browse Coda integration templates](https://n8n.io/integrations/coda/), or [search all templates](https://n8n.io/workflows/) # CoinGecko node Use the CoinGecko node to automate work in CoinGecko, and integrate CoinGecko with other applications. n8n has built-in support for a wide range of CoinGecko features, including getting coins and events. On this page, you'll find a list of operations the CoinGecko node supports, and links to more resources. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Coin - Get a candlestick open-high-low-close chart for the selected currency - Get current data for a coin - Get all coins - Get historical data (name, price, market, stats) at a given date for a coin - Get prices and market-related data for all trading pairs that match the selected currency - Get historical market data, including price, market cap, and 24h volume (granularity auto) - Get the current price of any cryptocurrency in any other supported currency - Get coin tickers - Event - Get all events ## Templates and examples **Analyze Crypto Market with CoinGecko: Volatility Metrics & Investment Signals** by ist00dent [View template details](https://n8n.io/workflows/4115-analyze-crypto-market-with-coingecko-volatility-metrics-and-investment-signals/) **Tracking your crypto portfolio in Airtable** by jason [View template details](https://n8n.io/workflows/859-tracking-your-crypto-portfolio-in-airtable/) **Get the price of BTC in EUR and send an SMS** by Harshil Agrawal [View template details](https://n8n.io/workflows/704-get-the-price-of-btc-in-eur-and-send-an-sms/) [Browse CoinGecko integration templates](https://n8n.io/integrations/coingecko/), or [search all templates](https://n8n.io/workflows/) # Contentful node Use the Contentful node to automate work in Contentful, and integrate Contentful with other applications. n8n has built-in support for a wide range of Contentful features, including getting assets, content types, entries, locales, and spaces. On this page, you'll find a list of operations the Contentful node supports and links to more resources. Credentials Refer to [Contentful credentials](../../credentials/contentful/) for guidance on setting up authentication.
## Operations - Asset - Get - Get All - Content Type - Get - Entry - Get - Get All - Locale - Get All - Space - Get ## Templates and examples **Generate Knowledge Base Articles with GPT & Perplexity AI for Contentful CMS** by Varritech [View template details](https://n8n.io/workflows/4012-generate-knowledge-base-articles-with-gpt-and-perplexity-ai-for-contentful-cms/) **Convert Markdown Content to Contentful Rich Text with AI Formatting** by Varritech [View template details](https://n8n.io/workflows/4078-convert-markdown-content-to-contentful-rich-text-with-ai-formatting/) **Get all the entries from Contentful** by Harshil Agrawal [View template details](https://n8n.io/workflows/640-get-all-the-entries-from-contentful/) [Browse Contentful integration templates](https://n8n.io/integrations/contentful/), or [search all templates](https://n8n.io/workflows/) # ConvertKit node Use the ConvertKit node to automate work in ConvertKit, and integrate ConvertKit with other applications. n8n has built-in support for a wide range of ConvertKit features, including creating and deleting custom fields, getting tags, and adding subscribers. On this page, you'll find a list of operations the ConvertKit node supports and links to more resources. Credentials Refer to [ConvertKit credentials](../../credentials/convertkit/) for guidance on setting up authentication. 
## Operations - Custom Field - Create a field - Delete a field - Get all fields - Update a field - Form - Add a subscriber - Get all forms - List subscriptions to a form including subscriber data - Sequence - Add a subscriber - Get all sequences - Get all subscriptions to a sequence including subscriber data - Tag - Create a tag - Get all tags - Tag Subscriber - Add a tag to a subscriber - List subscriptions to a tag including subscriber data - Delete a tag from a subscriber ## Templates and examples **Enrich lead captured by ConvertKit and save it in Hubspot** by Ricardo Espinozaas [View template details](https://n8n.io/workflows/2130-enrich-lead-captured-by-convertkit-and-save-it-in-hubspot/) **Manage subscribers in ConvertKit** by Harshil Agrawal [View template details](https://n8n.io/workflows/642-manage-subscribers-in-convertkit/) **Receive updates on a subscriber added in ConvertKit** by Harshil Agrawal [View template details](https://n8n.io/workflows/644-receive-updates-on-a-subscriber-added-in-convertkit/) [Browse ConvertKit integration templates](https://n8n.io/integrations/convertkit/), or [search all templates](https://n8n.io/workflows/) # Copper node Use the Copper node to automate work in Copper, and integrate Copper with other applications. n8n has built-in support for a wide range of Copper features, including getting, updating, deleting, and creating companies, customer sources, leads, projects and tasks. On this page, you'll find a list of operations the Copper node supports and links to more resources. Credentials Refer to [Copper credentials](../../credentials/copper/) for guidance on setting up authentication. 
## Operations - Company - Create - Delete - Get - Get All - Update - Customer Source - Get All - Lead - Create - Delete - Get - Get All - Update - Opportunity - Create - Delete - Get - Get All - Update - Person - Create - Delete - Get - Get All - Update - Project - Create - Delete - Get - Get All - Update - Task - Create - Delete - Get - Get All - Update - User - Get All ## Templates and examples **Create, update, and get a person from Copper** by Harshil Agrawal [View template details](https://n8n.io/workflows/1021-create-update-and-get-a-person-from-copper/) **Receive updates on a new project created in Copper** by amudhan [View template details](https://n8n.io/workflows/537-receive-updates-on-a-new-project-created-in-copper/) **Let AI Agents Run Your CRM with Copper Tool MCP Server 💪 all 32 operations** by David Ashby [View template details](https://n8n.io/workflows/5315-let-ai-agents-run-your-crm-with-copper-tool-mcp-server-all-32-operations/) [Browse Copper integration templates](https://n8n.io/integrations/copper/), or [search all templates](https://n8n.io/workflows/) # Cortex node Use the Cortex node to automate work in Cortex, and integrate Cortex with other applications. n8n has built-in support for a wide range of Cortex features, including executing analyzers and responders, as well as getting job details. On this page, you'll find a list of operations the Cortex node supports and links to more resources. Credentials Refer to [Cortex credentials](../../credentials/cortex/) for guidance on setting up authentication.
## Operations - Analyzer - Execute Analyzer - Job - Get job details - Get job report - Responder - Execute Responder ## Templates and examples [Browse Cortex integration templates](https://n8n.io/integrations/cortex/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # CrateDB node Use the CrateDB node to automate work in CrateDB, and integrate CrateDB with other applications. n8n has built-in support for a wide range of CrateDB features, including executing, inserting, and updating rows in the database. On this page, you'll find a list of operations the CrateDB node supports and links to more resources. Credentials Refer to [CrateDB credentials](../../credentials/cratedb/) for guidance on setting up authentication. ## Operations - Execute an SQL query - Insert rows in database - Update rows in database ## Templates and examples [Browse CrateDB integration templates](https://n8n.io/integrations/cratedb/), or [search all templates](https://n8n.io/workflows/) ## Node reference ### Specify a column's data type To specify a column's data type, append the column name with `:type`, where `type` is the data type you want for the column. For example, if you want to specify the type `int` for the column **id** and type `text` for the column **name**, you can use the following snippet in the **Columns** field: `id:int,name:text`. 
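To illustrate how the `name:type` syntax in the **Columns** field reads, here is a minimal, hypothetical parser. This is only a sketch of the format, not n8n's internal implementation:

```python
# Hypothetical sketch: how a Columns field value such as "id:int,name:text"
# maps each column name to a data type. Not n8n's actual implementation.
def parse_columns(spec: str) -> dict:
    columns = {}
    for part in spec.split(","):
        name, _, col_type = part.strip().partition(":")
        # A bare column name (no ":type" suffix) carries no explicit type
        columns[name] = col_type or None
    return columns

print(parse_columns("id:int,name:text"))  # {'id': 'int', 'name': 'text'}
```

Each comma-separated entry is split on the first colon, so a value like `id:int,name:text` assigns `int` to **id** and `text` to **name**.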
# crowd.dev node Use the crowd.dev node to automate work in crowd.dev and integrate crowd.dev with other applications. n8n has built-in support for a wide range of crowd.dev features, including creating, updating, and deleting members, notes, organizations, and tasks. On this page, you'll find a list of operations the crowd.dev node supports and links to more resources. Credentials Refer to [crowd.dev credentials](../../credentials/crowddev/) for guidance on setting up authentication. ## Operations - Activity - Create or Update with a Member - Create - Automation - Create - Destroy - Find - List - Update - Member - Create or Update - Delete - Find - Update - Note - Create - Delete - Find - Update - Organization - Create - Delete - Find - Update - Task - Create - Delete - Find - Update ## Templates and examples [Browse crowd.dev integration templates](https://n8n.io/integrations/crowddev/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for crowd.dev. Refer to the [crowd.dev Trigger node documentation](../../trigger-nodes/n8n-nodes-base.crowddevtrigger/) for more information. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Customer.io node Use the Customer.io node to automate work in Customer.io, and integrate Customer.io with other applications. n8n has built-in support for a wide range of Customer.io features, including creating and updating customers, tracking events, and getting campaigns.
On this page, you'll find a list of operations the Customer.io node supports and links to more resources. Credentials Refer to [Customer.io credentials](../../credentials/customerio/) for guidance on setting up authentication. ## Operations - Customer - Create/Update a customer - Delete a customer - Event - Track a customer event - Track an anonymous event - Campaign - Get - Get All - Get Metrics - Segment - Add Customer - Remove Customer ## Templates and examples **Create a customer and add them to a segment in Customer.io** by Harshil Agrawal [View template details](https://n8n.io/workflows/646-create-a-customer-and-add-them-to-a-segment-in-customerio/) **Receive updates when a subscriber unsubscribes in Customer.io** by Harshil Agrawal [View template details](https://n8n.io/workflows/645-receive-updates-when-a-subscriber-unsubscribes-in-customerio/) **AI Agent Powered Marketing 🛠️ Customer.io Tool MCP Server 💪 all 9 operations** by David Ashby [View template details](https://n8n.io/workflows/5314-ai-agent-powered-marketing-customerio-tool-mcp-server-all-9-operations/) [Browse Customer.io integration templates](https://n8n.io/integrations/customerio/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Databricks node Use the Databricks node to automate work in Databricks, and integrate Databricks with other applications.
n8n has built-in support for a wide range of Databricks features, including executing SQL queries, managing Unity Catalog objects, querying ML model serving endpoints, and working with vector search indexes. On this page, you'll find a list of operations the Databricks node supports and links to more resources. Credentials Refer to [Databricks credentials](../../credentials/databricks/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Databricks SQL - Execute Query - File - Create Directory - Delete Directory - Delete File - Download File - Get File Metadata - List Directory - Upload File - Genie - Create Conversation Message - Execute Message SQL Query - Get Conversation Message - Get Genie Space - Get Query Results - Start Conversation - Model Serving - Query Endpoint - Unity Catalog - Create Catalog - Create Function - Create Volume - Delete Catalog - Delete Function - Delete Volume - Get Catalog - Get Function - Get Table - Get Volume - List Catalogs - List Functions - List Tables - List Volumes - Update Catalog - Vector Search - Create Index - Get Index - List Indexes - Query Index ## Templates and examples [Browse Databricks integration templates](https://n8n.io/integrations/databricks/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Databricks' REST API documentation](https://docs.databricks.com/api/) for details about their API. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # DeepL node Use the DeepL node to automate work in DeepL, and integrate DeepL with other applications. n8n has built-in support for a wide range of DeepL features, including translating languages. On this page, you'll find a list of operations the DeepL node supports and links to more resources. Credentials Refer to [DeepL credentials](../../credentials/deepl/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Language - Translate data ## Templates and examples **Translate PDF documents from Google drive folder with DeepL** by Milorad Filipovic [View template details](https://n8n.io/workflows/2179-translate-pdf-documents-from-google-drive-folder-with-deepl/) **Translate cocktail instructions using DeepL** by Harshil Agrawal [View template details](https://n8n.io/workflows/998-translate-cocktail-instructions-using-deepl/) **Real-time chat translation with DeepL** by Ghufran Ridhawi [View template details](https://n8n.io/workflows/4532-real-time-chat-translation-with-deepl/) [Browse DeepL integration templates](https://n8n.io/integrations/deepl/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Demio node Use the Demio node to automate work in Demio, and integrate Demio with other applications. n8n has built-in support for a wide range of Demio features, including getting and registering events, and getting event reports. On this page, you'll find a list of operations the Demio node supports and links to more resources. Credentials Refer to [Demio credentials](../../credentials/demio/) for guidance on setting up authentication. ## Operations - Event - Get an event - Get all events - Register someone to an event - Report - Get an event report ## Templates and examples [Browse Demio integration templates](https://n8n.io/integrations/demio/), or [search all templates](https://n8n.io/workflows/) # DHL node Use the DHL node to automate work in DHL, and integrate DHL with other applications. n8n has built-in support for a wide range of DHL features, including tracking shipments. On this page, you'll find a list of operations the DHL node supports and links to more resources. Credentials Refer to [DHL credentials](../../credentials/dhl/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).
## Operations - Shipment - Get Tracking Details ## Templates and examples **AI-powered WooCommerce Support-Agent** by Jan Oberhauser [View template details](https://n8n.io/workflows/2161-ai-powered-woocommerce-support-agent/) **Expose Get tracking details to AI Agents via 🛠️ DHL Tool MCP Server** by David Ashby [View template details](https://n8n.io/workflows/5279-expose-get-tracking-details-to-ai-agents-via-dhl-tool-mcp-server/) **Automated DHL Shipment Tracking Bot for Web Forms and Email Inquiries** by Yusuke Yamamoto [View template details](https://n8n.io/workflows/9876-automated-dhl-shipment-tracking-bot-for-web-forms-and-email-inquiries/) [Browse DHL integration templates](https://n8n.io/integrations/dhl/), or [search all templates](https://n8n.io/workflows/) # Discourse node Use the Discourse node to automate work in Discourse, and integrate Discourse with other applications. n8n has built-in support for a wide range of Discourse features, including creating, getting, updating, and removing categories, groups, posts, and users. On this page, you'll find a list of operations the Discourse node supports and links to more resources. Credentials Refer to [Discourse credentials](../../credentials/discourse/) for guidance on setting up authentication. 
## Operations - Category - Create a category - Get all categories - Update a category - Group - Create a group - Get a group - Get all groups - Update a group - Post - Create a post - Get a post - Get all posts - Update a post - User - Create a user - Get a user - Get all users - User Group - Add a user to a group - Remove a user from a group ## Templates and examples **Enrich new Discourse members with Clearbit then notify in Slack** by Max Tkacz [View template details](https://n8n.io/workflows/2109-enrich-new-discourse-members-with-clearbit-then-notify-in-slack/) **Create, update and get a post via Discourse** by Harshil Agrawal [View template details](https://n8n.io/workflows/930-create-update-and-get-a-post-via-discourse/) **🛠️ Discourse Tool MCP Server 💪 all 16 operations** by David Ashby [View template details](https://n8n.io/workflows/5278-discourse-tool-mcp-server-all-16-operations/) [Browse Discourse integration templates](https://n8n.io/integrations/discourse/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Disqus node Use the Disqus node to automate work in Disqus, and integrate Disqus with other applications. n8n has built-in support for a wide range of Disqus features, including returning forums. On this page, you'll find a list of operations the Disqus node supports and links to more resources. Credentials Refer to [Disqus credentials](../../credentials/disqus/) for guidance on setting up authentication.
## Operations - Forum - Return forum details - Return a list of categories within a forum - Return a list of threads within a forum - Return a list of posts within a forum ## Templates and examples [Browse Disqus integration templates](https://n8n.io/integrations/disqus/), or [search all templates](https://n8n.io/workflows/) # Drift node Use the Drift node to automate work in Drift, and integrate Drift with other applications. n8n has built-in support for a wide range of Drift features, including creating, updating, deleting, and getting contacts. On this page, you'll find a list of operations the Drift node supports and links to more resources. Credentials Refer to [Drift credentials](../../credentials/drift/) for guidance on setting up authentication. ## Operations - Contact - Create a contact - Get custom attributes - Delete a contact - Get a contact - Update a contact ## Templates and examples **Create a contact in Drift** by tanaypant [View template details](https://n8n.io/workflows/497-create-a-contact-in-drift/) **🛠️ Drift Tool MCP Server 💪 5 operations** by David Ashby [View template details](https://n8n.io/workflows/5277-drift-tool-mcp-server-5-operations/) **Track commodity portfolio drift with Google Sheets, Gemini AI and Gmail alerts** by WeblineIndia [View template details](https://n8n.io/workflows/15333-track-commodity-portfolio-drift-with-google-sheets-gemini-ai-and-gmail-alerts/) [Browse Drift integration templates](https://n8n.io/integrations/drift/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. 
Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Dropbox node Use the Dropbox node to automate work in Dropbox, and integrate Dropbox with other applications. n8n has built-in support for a wide range of Dropbox features, including creating, downloading, moving, and copying files and folders. On this page, you'll find a list of operations the Dropbox node supports and links to more resources. Credentials Refer to [Dropbox credentials](../../credentials/dropbox/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - File - Copy a file - Delete a file - Download a file - Move a file - Upload a file - Folder - Copy a folder - Create a folder - Delete a folder - Return the files and folders in a given folder - Move a folder - Search - Query ## Templates and examples **Hacker News to Video Content** by Alex Kim [View template details](https://n8n.io/workflows/2557-hacker-news-to-video-content/) **Nightly n8n backup to Dropbox** by Joey D’Anna [View template details](https://n8n.io/workflows/2075-nightly-n8n-backup-to-dropbox/) **Explore n8n Nodes in a Visual Reference Library** by I versus AI [View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/) [Browse Dropbox integration templates](https://n8n.io/integrations/dropbox/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Dropcontact node Use the Dropcontact node to automate work in Dropcontact, and integrate Dropcontact with other applications. n8n has built-in support for a wide range of Dropcontact features, including fetching contacts. On this page, you'll find a list of operations the Dropcontact node supports and links to more resources. Credentials Refer to [Dropcontact credentials](../../credentials/dropcontact/) for guidance on setting up authentication. ## Operations **Contact** - Enrich - Fetch Request ## Templates and examples **Create HubSpot contacts from LinkedIn post interactions** by Pauline [View template details](https://n8n.io/workflows/1323-create-hubspot-contacts-from-linkedin-post-interactions/) **Enrich up to 1500 emails per hour with Dropcontact batch requests** by victor de coster [View template details](https://n8n.io/workflows/2272-enrich-up-to-1500-emails-per-hour-with-dropcontact-batch-requests/) **Enrich Google Sheet contacts with Dropcontact** by Pauline [View template details](https://n8n.io/workflows/1304-enrich-google-sheet-contacts-with-dropcontact/) [Browse Dropcontact integration templates](https://n8n.io/integrations/dropcontact/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. 
Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # E-goi node Use the E-goi node to automate work in E-goi, and integrate E-goi with other applications. n8n has built-in support for a wide range of E-goi features, including creating, updating, deleting, and getting contacts. On this page, you'll find a list of operations the E-goi node supports and links to more resources. Credentials Refer to [E-goi credentials](../../credentials/egoi/) for guidance on setting up authentication. ## Operations **Contact** - Create a member - Get a member - Get all members - Update a member ## Templates and examples [Browse E-goi integration templates](https://n8n.io/integrations/e-goi/), or [search all templates](https://n8n.io/workflows/) # Elasticsearch node Use the Elasticsearch node to automate work in Elasticsearch, and integrate Elasticsearch with other applications. n8n has built-in support for a wide range of Elasticsearch features, including creating, updating, deleting, and getting documents and indexes. On this page, you'll find a list of operations the Elasticsearch node supports and links to more resources. Credentials Refer to [Elasticsearch credentials](../../credentials/elasticsearch/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Document - Create a document - Delete a document - Get a document - Get all documents - Update a document - Index - Create - Delete - Get - Get All ## Templates and examples **Build Your Own Image Search Using AI Object Detection, CDN and ElasticSearch** by Jimleuk [View template details](https://n8n.io/workflows/2331-build-your-own-image-search-using-ai-object-detection-cdn-and-elasticsearch/) **Create an automated workitem(incident/bug/userstory) in azure devops** by Aditya Gaur [View template details](https://n8n.io/workflows/2500-create-an-automated-workitemincidentbuguserstory-in-azure-devops/) **Dynamic Search Interface with Elasticsearch and Automated Report Generation** by DataMinex [View template details](https://n8n.io/workflows/7235-dynamic-search-interface-with-elasticsearch-and-automated-report-generation/) [Browse Elasticsearch integration templates](https://n8n.io/integrations/elasticsearch/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Elastic Security node Use the Elastic Security node to automate work in Elastic Security, and integrate Elastic Security with other applications. n8n has built-in support for a wide range of Elastic Security features, including creating, updating, deleting, and retrieving cases. On this page, you'll find a list of operations the Elastic Security node supports and links to more resources.
Credentials Refer to [Elastic Security credentials](../../credentials/elasticsecurity/) for guidance on setting up authentication. ## Operations - Case - Create a case - Delete a case - Get a case - Retrieve all cases - Retrieve a summary of all case activity - Update a case - Case Comment - Add a comment to a case - Get a case comment - Retrieve all case comments - Remove a comment from a case - Update a comment in a case - Case Tag - Add a tag to a case - Remove a tag from a case - Connector - Create a connector ## Templates and examples [Browse Elastic Security integration templates](https://n8n.io/integrations/elastic-security/), or [search all templates](https://n8n.io/workflows/) # Emelia node Use the Emelia node to automate work in Emelia, and integrate Emelia with other applications. n8n has built-in support for a wide range of Emelia features, including creating campaigns, and adding contacts to a list. On this page, you'll find a list of operations the Emelia node supports and links to more resources. Credentials Refer to [Emelia credentials](../../credentials/emelia/) for guidance on setting up authentication. 
## Operations - Campaign - Add Contact - Create - Get - Get All - Pause - Start - Contact List - Add - Get All ## Templates and examples **Send a message on Mattermost when you get a reply in Emelia** by Harshil Agrawal [View template details](https://n8n.io/workflows/1039-send-a-message-on-mattermost-when-you-get-a-reply-in-emelia/) **Create a campaign, add a contact, and get the campaign from Emelia** by Harshil Agrawal [View template details](https://n8n.io/workflows/961-create-a-campaign-add-a-contact-and-get-the-campaign-from-emelia/) **🛠️ Emelia Tool MCP Server 💪 all 9 operations** by David Ashby [View template details](https://n8n.io/workflows/5272-emelia-tool-mcp-server-all-9-operations/) [Browse Emelia integration templates](https://n8n.io/integrations/emelia/), or [search all templates](https://n8n.io/workflows/) # ERPNext node Use the ERPNext node to automate work in ERPNext, and integrate ERPNext with other applications. n8n has built-in support for a wide range of ERPNext features, including creating, updating, retrieving, and deleting documents. On this page, you'll find a list of operations the ERPNext node supports and links to more resources. Credentials Refer to [ERPNext credentials](../../credentials/erpnext/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations **Document** - Create a document - Delete a document - Retrieve a document - Retrieve all documents - Update a document ## Templates and examples [Browse ERPNext integration templates](https://n8n.io/integrations/erpnext/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Facebook Graph API node Use the Facebook Graph API node to automate work in Facebook Graph API, and integrate Facebook Graph API with other applications. n8n has built-in support for a wide range of Facebook Graph API features, including sending GET, POST, and DELETE requests with configurable parameters such as the host URL and request method. On this page, you'll find a list of operations the Facebook Graph API node supports and links to more resources. Credentials Refer to [Facebook Graph API credentials](../../credentials/facebookgraph/) for guidance on setting up authentication. ## Operations - **Default** - GET - POST - DELETE - **Video Uploads** - GET - POST - DELETE ### Parameters This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). - **Host URL**: The host URL for the request.
The following options are available: - **Default**: Requests are passed to the `graph.facebook.com` host URL. Used for the majority of requests. - **Video**: Requests are passed to the `graph-video.facebook.com` host URL. Used for video upload requests only. - **HTTP Request Method**: The method to be used for this request, from the following options: - **GET** - **POST** - **DELETE** - **Graph API Version**: The version of the [Facebook Graph API](https://developers.facebook.com/docs/graph-api/changelog) to be used for this request. - **Node**: The node on which to operate, for example `//feed`. Read more about it in the [official Facebook Developer documentation](https://developers.facebook.com/docs/graph-api/using-graph-api). - **Edge**: Edge of the node on which to operate. Edges represent collections of objects which are attached to the node. - **Ignore SSL Issues**: Toggle to still download the response even if SSL certificate validation isn't possible. - **Send Binary File**: Available for `POST` operations. If enabled, binary data is sent as the body. Requires setting the following: - **Input Binary Field**: Name of the binary property which contains the data for the file to be uploaded.
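The parameters above combine into a single Graph API request URL. As a rough sketch of how they fit together (the helper below is illustrative, not n8n code; host URLs and the version format come from the parameter descriptions above):

```python
# Illustrative sketch: how Host URL, Graph API Version, Node, and Edge
# map onto a Graph API request URL. Not actual n8n internals.

HOSTS = {
    "Default": "graph.facebook.com",        # majority of requests
    "Video": "graph-video.facebook.com",    # video upload requests only
}

def build_graph_url(host_option, api_version, node, edge=None):
    """Combine the node parameters into a Graph API URL."""
    host = HOSTS[host_option]
    path = f"/{api_version}/{node.strip('/')}"
    if edge:
        # Edges are collections attached to the node, appended to the path.
        path += f"/{edge.strip('/')}"
    return f"https://{host}{path}"

print(build_graph_url("Default", "v19.0", "me", "feed"))
# https://graph.facebook.com/v19.0/me/feed
```

A video upload would instead select the **Video** host option, yielding a `graph-video.facebook.com` URL with the same path structure.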
## Templates and examples **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI-Powered Social Media Content Generator & Publisher** by Amjid Ali [View template details](https://n8n.io/workflows/2950-ai-powered-social-media-content-generator-and-publisher/) **Generate Instagram Content from Top Trends with AI Image Generation** by mustafa kendigüzel [View template details](https://n8n.io/workflows/2803-generate-instagram-content-from-top-trends-with-ai-image-generation/) [Browse Facebook Graph API integration templates](https://n8n.io/integrations/facebook-graph-api/), or [search all templates](https://n8n.io/workflows/) # FileMaker node Use the FileMaker node to automate work in FileMaker, and integrate FileMaker with other applications. n8n has built-in support for a wide range of FileMaker features, including creating, finding, getting, editing, and duplicating files. On this page, you'll find a list of operations the FileMaker node supports and links to more resources. Credentials Refer to [FileMaker credentials](../../credentials/filemaker/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Find Records - Get Records - Get Records by Id - Perform Script - Create Record - Edit Record - Duplicate Record - Delete Record ## Templates and examples **Create, update, and retrieve a record from FileMaker** by Harshil Agrawal [View template details](https://n8n.io/workflows/1068-create-update-and-retrieve-a-record-from-filemaker/) **Convert FileMaker Data API to Flat File Array** by Dick [View template details](https://n8n.io/workflows/1537-convert-filemaker-data-api-to-flat-file-array/) **Integrate Xero with FileMaker using Webhooks** by Stathis Askaridis [View template details](https://n8n.io/workflows/2499-integrate-xero-with-filemaker-using-webhooks/) [Browse FileMaker integration templates](https://n8n.io/integrations/filemaker/), or [search all templates](https://n8n.io/workflows/) # Flow node Use the Flow node to automate work in Flow, and integrate Flow with other applications. n8n has built-in support for a wide range of Flow features, including creating, updating, and getting tasks. On this page, you'll find a list of operations the Flow node supports and links to more resources. Credentials Refer to [Flow credentials](../../credentials/flow/) for guidance on setting up authentication. 
## Operations - Task - Create a new task - Update a task - Get a task - Get all the tasks ## Templates and examples **Automate Blog Content Creation with OpenAI, Google Sheets & Email Approval Flow** by Billy Christi [View template details](https://n8n.io/workflows/4371-automate-blog-content-creation-with-openai-google-sheets-and-email-approval-flow/) **Automated PDF Invoice Processing & Approval Flow using OpenAI and Google Sheets** by Billy Christi [View template details](https://n8n.io/workflows/4452-automated-pdf-invoice-processing-and-approval-flow-using-openai-and-google-sheets/) **Scale Deal Flow with a Pitch Deck AI Vision, Chatbot and QDrant Vector Store** by Jimleuk [View template details](https://n8n.io/workflows/2464-scale-deal-flow-with-a-pitch-deck-ai-vision-chatbot-and-qdrant-vector-store/) [Browse Flow integration templates](https://n8n.io/integrations/flow/), or [search all templates](https://n8n.io/workflows/) # Freshdesk node Use the Freshdesk node to automate work in Freshdesk and integrate Freshdesk with other applications. n8n has built-in support for a wide range of Freshdesk features, including creating, updating, deleting, and getting contacts and tickets. On this page, you'll find a list of operations the Freshdesk node supports and links to more resources. Credentials Refer to [Freshdesk credentials](../../credentials/freshdesk/) for guidance on setting up authentication. 
## Operations - Contact - Create a new contact - Delete a contact - Get a contact - Get all contacts - Update a contact - Ticket - Create a new ticket - Delete a ticket - Get a ticket - Get all tickets - Update a ticket ## Templates and examples **Create ticket on specific customer messages in Telegram** by tanaypant [View template details](https://n8n.io/workflows/368-create-ticket-on-specific-customer-messages-in-telegram/) **Create a new Freshdesk ticket** by amudhan [View template details](https://n8n.io/workflows/448-create-a-new-freshdesk-ticket/) **Automate CSAT Surveys with Freshdesk & Store Responses in Google Sheets** by PollupAI [View template details](https://n8n.io/workflows/4238-automate-csat-surveys-with-freshdesk-and-store-responses-in-google-sheets/) [Browse Freshdesk integration templates](https://n8n.io/integrations/freshdesk/), or [search all templates](https://n8n.io/workflows/) # Freshservice node Use the Freshservice node to automate work in Freshservice and integrate Freshservice with other applications. n8n has built-in support for a wide range of Freshservice features, including creating, updating, deleting, and getting agent information and departments. On this page, you'll find a list of operations the Freshservice node supports and links to more resources. Credentials Refer to [Freshservice credentials](../../credentials/freshservice/) for guidance on setting up authentication.
## Operations - Agent - Create an agent - Delete an agent - Retrieve an agent - Retrieve all agents - Update an agent - Agent Group - Create an agent group - Delete an agent group - Retrieve an agent group - Retrieve all agent groups - Update an agent group - Agent Role - Retrieve an agent role - Retrieve all agent roles - Announcement - Create an announcement - Delete an announcement - Retrieve an announcement - Retrieve all announcements - Update an announcement - Asset Type - Create an asset type - Delete an asset type - Retrieve an asset type - Retrieve all asset types - Update an asset type - Change - Create a change - Delete a change - Retrieve a change - Retrieve all changes - Update a change - Department - Create a department - Delete a department - Retrieve a department - Retrieve all departments - Update a department - Location - Create a location - Delete a location - Retrieve a location - Retrieve all locations - Update a location - Problem - Create a problem - Delete a problem - Retrieve a problem - Retrieve all problems - Update a problem - Product - Create a product - Delete a product - Retrieve a product - Retrieve all products - Update a product - Release - Create a release - Delete a release - Retrieve a release - Retrieve all releases - Update a release - Requester - Create a requester - Delete a requester - Retrieve a requester - Retrieve all requesters - Update a requester - Requester Group - Create a requester group - Delete a requester group - Retrieve a requester group - Retrieve all requester groups - Update a requester group - Software - Create a software application - Delete a software application - Retrieve a software application - Retrieve all software applications - Update a software application - Ticket - Create a ticket - Delete a ticket - Retrieve a ticket - Retrieve all tickets - Update a ticket ## Templates and examples [Browse Freshservice integration templates](https://n8n.io/integrations/freshservice/), or [search all 
templates](https://n8n.io/workflows/) # Freshworks CRM node Use the Freshworks CRM node to automate work in Freshworks CRM, and integrate Freshworks CRM with other applications. n8n has built-in support for a wide range of Freshworks CRM features, including creating, updating, deleting, and retrieving accounts, appointments, contacts, deals, notes, sales activities, and more. On this page, you'll find a list of operations the Freshworks CRM node supports and links to more resources. Credentials Refer to [Freshworks CRM credentials](../../credentials/freshworkscrm/) for guidance on setting up authentication. ## Operations - Account - Create an account - Delete an account - Retrieve an account - Retrieve all accounts - Update an account - Appointment - Create an appointment - Delete an appointment - Retrieve an appointment - Retrieve all appointments - Update an appointment - Contact - Create a contact - Delete a contact - Retrieve a contact - Retrieve all contacts - Update a contact - Deal - Create a deal - Delete a deal - Retrieve a deal - Retrieve all deals - Update a deal - Note - Create a note - Delete a note - Update a note - Sales Activity - Retrieve a sales activity - Retrieve all sales activities - Task - Create a task - Delete a task - Retrieve a task - Retrieve all tasks - Update a task ## Templates and examples [Browse Freshworks CRM integration templates](https://n8n.io/integrations/freshworks-crm/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential.
Refer to [Custom API operations](../../../custom-operations/) for more information. # GetResponse node Use the GetResponse node to automate work in GetResponse, and integrate GetResponse with other applications. n8n has built-in support for a wide range of GetResponse features, including creating, updating, deleting, and getting contacts. On this page, you'll find a list of operations the GetResponse node supports and links to more resources. Credentials Refer to [GetResponse credentials](../../credentials/getresponse/) for guidance on setting up authentication. ## Operations - Contact - Create a new contact - Delete a contact - Get a contact - Get all contacts - Update contact properties ## Templates and examples **Add subscribed customers to Airtable automatically** by Harshil Agrawal [View template details](https://n8n.io/workflows/933-add-subscribed-customers-to-airtable-automatically/) **Get all the contacts from GetResponse and update them** by Harshil Agrawal [View template details](https://n8n.io/workflows/778-get-all-the-contacts-from-getresponse-and-update-them/) **🛠️ GetResponse Tool MCP Server 💪 5 operations** by David Ashby [View template details](https://n8n.io/workflows/5267-getresponse-tool-mcp-server-5-operations/) [Browse GetResponse integration templates](https://n8n.io/integrations/getresponse/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
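Under the hood, the predefined-credential fallback described above amounts to attaching the service's auth header to an ordinary HTTP call. A minimal sketch of that pattern, assuming a hypothetical bearer-token API (the URL, path, and token below are placeholders, not real credentials or endpoints):

```python
import urllib.request

# Hypothetical sketch of what the HTTP Request node does with a predefined
# credential: build an ordinary API request carrying the service's auth header.
# The base URL, path, and token are illustrative placeholders.

def build_api_request(base_url, path, token):
    """Prepare an authenticated GET request (built, not sent)."""
    req = urllib.request.Request(f"{base_url}{path}")
    req.add_header("Authorization", f"Bearer {token}")
    req.add_header("Accept", "application/json")
    return req

req = build_api_request("https://api.example.com", "/v3/contacts", "YOUR_TOKEN")
print(req.full_url, req.get_header("Authorization"))
```

In n8n itself you never write this by hand — selecting **Predefined Credential Type** injects the stored credential into the request for you; the sketch only shows what that saves you from doing.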
# Ghost node Use the Ghost node to automate work in Ghost, and integrate Ghost with other applications. n8n has built-in support for a wide range of Ghost features, including creating, updating, deleting, and getting posts for the Admin and content API. On this page, you'll find a list of operations the Ghost node supports and links to more resources. Credentials Refer to [Ghost credentials](../../credentials/ghost/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations ### Admin API - **Post** - Create a post - Delete a post - Get a post - Get all posts - Update a post ### Content API - **Post** - Get a post - Get all posts ## Templates and examples **Multi-Agent PDF-to-Blog Content Generation** by Derek Cheung [View template details](https://n8n.io/workflows/2457-multi-agent-pdf-to-blog-content-generation/) **📄🌐PDF2Blog - Create Blog Post on Ghost CRM from PDF Document** by Joseph LePage [View template details](https://n8n.io/workflows/2522-pdf2blog-create-blog-post-on-ghost-crm-from-pdf-document/) **✍️ AI agent to create Linkedin posts for blog promotion with GPT-4o** by Samir Saci [View template details](https://n8n.io/workflows/3500-ai-agent-to-create-linkedin-posts-for-blog-promotion-with-gpt-4o/) [Browse Ghost integration templates](https://n8n.io/integrations/ghost/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. 
In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # GitHub node Use the GitHub node to automate work in GitHub, and integrate GitHub with other applications. n8n has built-in support for a wide range of GitHub features, including creating, updating, deleting, and editing files, repositories, issues, releases, and users. On this page, you'll find a list of operations the GitHub node supports and links to more resources. Credentials Refer to [GitHub credentials](../../credentials/github/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - File - Create - Delete - Edit - Get - List - Issue - Create - Create Comment - Edit - Get - Lock - Organization - Get Repositories - Release - Create - Delete - Get - Get Many - Update - Repository - Get - Get Issues - Get License - Get Profile - Get Pull Requests - List Popular Paths - List Referrers - Review - Create - Get - Get Many - Update - User - Get Repositories - Invite - Workflow - Disable - Dispatch - Enable - Get - Get Usage - List ## Templates and examples **Back Up Your n8n Workflows To Github** by Jonathan [View template details](https://n8n.io/workflows/1534-back-up-your-n8n-workflows-to-github/) **Building RAG Chatbot for Movie Recommendations with Qdrant and Open AI** by Jenny [View template details](https://n8n.io/workflows/2440-building-rag-chatbot-for-movie-recommendations-with-qdrant-and-open-ai/) **Chat with GitHub API Documentation: RAG-Powered Chatbot with Pinecone & OpenAI** by Mihai Farcas [View template details](https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/) [Browse GitHub integration templates](https://n8n.io/integrations/github/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # GitLab node Use the GitLab node to automate work in GitLab, and integrate GitLab with other applications. 
n8n has built-in support for a wide range of GitLab features, including creating, updating, deleting, and editing issues, repositories, releases, and users. On this page, you'll find a list of operations the GitLab node supports and links to more resources. Credentials Refer to [GitLab credentials](../../credentials/gitlab/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - File - Create - Delete - Edit - Get - List - Issue - Create a new issue - Create a new comment on an issue - Edit an issue - Get the data of a single issue - Lock an issue - Release - Create a new release - Delete a release - Get a release - Get all releases - Update a release - Repository - Get the data of a single repository - Returns issues of a repository - User - Returns the repositories of a user ## Templates and examples **ChatGPT Automatic Code Review in Gitlab MR** by assert [View template details](https://n8n.io/workflows/2167-chatgpt-automatic-code-review-in-gitlab-mr/) **Save your workflows into a Gitlab repository** by Julien DEL RIO [View template details](https://n8n.io/workflows/2385-save-your-workflows-into-a-gitlab-repository/) **GitLab Merge Request Review & Risk Analysis with Claude/GPT AI** by Vishal Kumar [View template details](https://n8n.io/workflows/3997-gitlab-merge-request-review-and-risk-analysis-with-claudegpt-ai/) [Browse GitLab integration templates](https://n8n.io/integrations/gitlab/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [GitLab's documentation](https://docs.gitlab.com/ee/api/rest/) for more information about the service. n8n provides a trigger node for GitLab.
You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.gitlabtrigger/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Gong node Use the Gong node to automate work in Gong and integrate Gong with other applications. n8n has built-in support for a wide range of Gong features, which includes getting one or more calls and users. On this page, you'll find a list of operations the Gong node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/gong/). ## Operations - Call - Get - Get Many - User - Get - Get Many ## Templates and examples **CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync** by Angel Menendez [View template details](https://n8n.io/workflows/3035-callforge-05-gongio-call-analysis-with-azure-ai-and-crm-sync/) **CallForge - 04 - AI Workflow for Gong.io Sales Calls** by Angel Menendez [View template details](https://n8n.io/workflows/3034-callforge-04-ai-workflow-for-gongio-sales-calls/) **CallForge - 06 - Automate Sales Insights with Gong.io, Notion & AI** by Angel Menendez [View template details](https://n8n.io/workflows/3036-callforge-06-automate-sales-insights-with-gongio-notion-and-ai/) [Browse Gong integration templates](https://n8n.io/integrations/gong/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Gong's documentation](https://gong.app.gong.io/settings/api/documentation) for more information about the service. 
## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Ads node Use the Google Ads node to automate work in Google Ads, and integrate Google Ads with other applications. n8n has built-in support for a wide range of Google Ads features, including getting campaigns. On this page, you'll find a list of operations the Google Ads node supports and links to more resources. Credentials Refer to [Google Ads credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - Campaign - Get all campaigns - Get a campaign ## Templates and examples **AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram** by Friedemann Schuetz [View template details](https://n8n.io/workflows/2783-ai-marketing-report-google-analytics-and-ads-meta-ads-sent-via-emailtelegram/) **Generating New Keywords and their Search Volumes using the Google Ads API** by Zacharia Kimotho [View template details](https://n8n.io/workflows/2695-generating-new-keywords-and-their-search-volumes-using-the-google-ads-api/) **Get Meta Ads insights and save them into Google Sheets** by Solomon [View template details](https://n8n.io/workflows/2714-get-meta-ads-insights-and-save-them-into-google-sheets/) [Browse Google Ads integration templates](https://n8n.io/integrations/google-ads/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Google Ads' documentation](https://developers.google.com/google-ads/api/docs/start) for more information about 
the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Analytics node Use the Google Analytics node to automate work in Google Analytics, and integrate Google Analytics with other applications. n8n has built-in support for a wide range of Google Analytics features, including returning reports and user activities. On this page, you'll find a list of operations the Google Analytics node supports and links to more resources. Credentials Refer to [Google Analytics credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Report - Get - User Activity - Search ## Templates and examples **AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram** by Friedemann Schuetz [View template details](https://n8n.io/workflows/2783-ai-marketing-report-google-analytics-and-ads-meta-ads-sent-via-emailtelegram/) **Automate Google Analytics Reporting** by Alex Kim [View template details](https://n8n.io/workflows/2549-automate-google-analytics-reporting/) **Create a Google Analytics Data Report with AI and sent it to E-Mail and Telegram** by Friedemann Schuetz [View template details](https://n8n.io/workflows/2673-create-a-google-analytics-data-report-with-ai-and-sent-it-to-e-mail-and-telegram/) [Browse Google Analytics integration templates](https://n8n.io/integrations/google-analytics/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Google Analytics' documentation](https://developers.google.com/analytics) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google BigQuery node Use the Google BigQuery node to automate work in Google BigQuery, and integrate Google BigQuery with other applications. n8n has built-in support for a wide range of Google BigQuery features, including creating, and retrieving records. On this page, you'll find a list of operations the Google BigQuery node supports and links to more resources. 
Credentials Refer to [Google BigQuery credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Execute Query - Insert ## Templates and examples **🗼 AI powered supply chain control tower with BigQuery and GPT-4o** by Samir Saci [View template details](https://n8n.io/workflows/3305-ai-powered-supply-chain-control-tower-with-bigquery-and-gpt-4o/) **Send location updates of the ISS every minute to a table in Google BigQuery** by Harshil Agrawal [View template details](https://n8n.io/workflows/1049-send-location-updates-of-the-iss-every-minute-to-a-table-in-google-bigquery/) **Auto-Generate And Post Tweet Threads Based On Google Trends Using Gemini AI** by Amjid Ali [View template details](https://n8n.io/workflows/3978-auto-generate-and-post-tweet-threads-based-on-google-trends-using-gemini-ai/) [Browse Google BigQuery integration templates](https://n8n.io/integrations/google-bigquery/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Google BigQuery's documentation](https://cloud.google.com/bigquery/docs/reference/rest) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. 
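As a concrete illustration of this fallback for Google BigQuery: the Execute Query operation corresponds to BigQuery's `jobs.query` REST method. A hedged sketch of the URL and JSON body you would assemble in an HTTP Request node (project ID and SQL are placeholders; the helper is illustrative, not n8n code):

```python
# Illustrative sketch of a direct call to BigQuery's jobs.query REST method,
# the API behind the Execute Query operation. Project ID and SQL are placeholders.

def bigquery_query_request(project_id, sql):
    """Return the URL and JSON body for a BigQuery jobs.query POST call."""
    url = (
        "https://bigquery.googleapis.com/bigquery/v2/"
        f"projects/{project_id}/queries"
    )
    # useLegacySql=False selects standard SQL, the usual dialect today.
    body = {"query": sql, "useLegacySql": False}
    return url, body

url, body = bigquery_query_request("my-project", "SELECT 1 AS x")
print(url)
```

Sending this as a POST (with a Google credential selected under **Predefined Credential Type**) returns the query results as JSON rows.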
Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Books node Use the Google Books node to automate work in Google Books, and integrate Google Books with other applications. n8n has built-in support for a wide range of Google Books features, including retrieving a specific bookshelf resource for the specified user, adding a volume to a bookshelf, and getting volumes. On this page, you'll find a list of operations the Google Books node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - Bookshelf - Retrieve a specific bookshelf resource for the specified user - Get all public bookshelf resources for the specified user - Bookshelf Volume - Add a volume to a bookshelf - Clear all volumes from a bookshelf - Get all volumes in a specific bookshelf for the specified user - Move a volume within a bookshelf - Remove a volume from a bookshelf - Volume - Get a volume resource based on ID - Get all volumes filtered by query ## Templates and examples **Scrape Books from URL with Dumpling AI, Clean HTML, Save to Sheets, Email as CSV** by Yang [View template details](https://n8n.io/workflows/3701-scrape-books-from-url-with-dumpling-ai-clean-html-save-to-sheets-email-as-csv/) **Get a volume and add it to your bookshelf** by Harshil Agrawal [View template details](https://n8n.io/workflows/771-get-a-volume-and-add-it-to-your-bookshelf/) **Transform Books into 100+ Social Media Posts with DeepSeek AI and Google Drive** by Abdellah Homrani [View template details](https://n8n.io/workflows/5156-transform-books-into-100-social-media-posts-with-deepseek-ai-and-google-drive/) [Browse Google Books integration templates](https://n8n.io/integrations/google-books/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the
[HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Business Profile node Use the Google Business Profile node to automate work in Google Business Profile and integrate Google Business Profile with other applications. n8n has built-in support for a wide range of Google Business Profile features, including creating, updating, and deleting posts, and replying to reviews. On this page, you'll find a list of operations the Google Business Profile node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - Post - Create - Delete - Get - Get Many - Update - Review - Delete Reply - Get - Get Many - Reply ## Templates and examples **Automate Google My Business responses with Gemini AI and Google Sheets tracking** by Malik Hashir [View template details](https://n8n.io/workflows/11503-automate-google-my-business-responses-with-gemini-ai-and-google-sheets-tracking/) **🛠️ Google Business Profile Tool MCP Server 💪 all 9 operations** by David Ashby [View template details](https://n8n.io/workflows/5250-google-business-profile-tool-mcp-server-all-9-operations/) **Automated Google Business Reports with GPT Insights to Slack & Email** by Peyton Leveillee [View template details](https://n8n.io/workflows/9290-automated-google-business-reports-with-gpt-insights-to-slack-and-email/) [Browse Google Business Profile integration templates](https://n8n.io/integrations/google-business-profile/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for Google Business Profile. 
You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.googlebusinessprofiletrigger/). Refer to [Google Business Profile's documentation](https://developers.google.com/my-business/reference/rest) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Chat node Use the Google Chat node to automate work in Google Chat, and integrate Google Chat with other applications. n8n has built-in support for a wide range of Google Chat features, including getting membership and spaces, as well as creating and deleting messages. On this page, you'll find a list of operations the Google Chat node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../advanced-ai/human-in-the-loop-tools/). 
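For orientation, the Google Chat node's message operations map onto the Chat REST API's `spaces.messages` endpoints. The following is a minimal sketch of the request that creating a plain-text message corresponds to; the space name and text are placeholders, and authentication is assumed to come from the node's credential:

```python
import json

# Hedged sketch: URL and body for the Chat API's messages.create endpoint.
# The space resource name ("spaces/...") is a placeholder.
def build_chat_message(space: str, text: str) -> tuple[str, str]:
    url = f"https://chat.googleapis.com/v1/{space}/messages"
    return url, json.dumps({"text": text})

url, body = build_chat_message("spaces/AAAA", "Deployment finished")
```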
## Operations - Member - Get a membership - Get all memberships in a space - Message - Create a message - Delete a message - Get a message - Send and Wait for Response - Update a message - Space - Get a space - Get all spaces the caller is a member of ## Waiting for a response By choosing the **Send and Wait for Response** operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information. ### Response Type You can choose between the following types of waiting and approval actions: - **Approval**: Users can approve or disapprove from within the message. - **Free Text**: Users can submit a response with a form. - **Custom Form**: Users can submit a response with a custom form. You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types: - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. - **Append n8n Attribution**: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off). ### Approval response customization When using the Approval response type, you can choose whether to present only an approval button or both approval *and* disapproval buttons. You can also customize the button labels for the buttons you include. ### Free Text response customization When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label. ### Custom Form response customization When using the Custom Form response type, you build a form using the fields and options you want. You can customize each form element with the settings outlined in the [n8n Form trigger's form elements](../../core-nodes/n8n-nodes-base.formtrigger/#form-elements). 
To add more fields, select the **Add Form Element** button. You'll also be able to customize the message button label, the form title and description, and the response button label. ## Templates and examples **AI agent chat** by n8n Team [View template details](https://n8n.io/workflows/1954-ai-agent-chat/) **Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram** by Dr. Firas [View template details](https://n8n.io/workflows/5338-generate-ai-viral-videos-with-seedance-and-upload-to-tiktok-youtube-and-instagram/) **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) [Browse Google Chat integration templates](https://n8n.io/integrations/google-chat/), or [search all templates](https://n8n.io/workflows/) # Google Cloud Firestore node Use the Google Cloud Firestore node to automate work in Google Cloud Firestore, and integrate Google Cloud Firestore with other applications. n8n has built-in support for a wide range of Google Cloud Firestore features, including creating, deleting, and getting documents. On this page, you'll find a list of operations the Google Cloud Firestore node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
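One detail worth knowing when you move between this node and raw Firestore REST calls: the Firestore API does not accept plain JSON documents. Every field is wrapped in a typed value object. The sketch below illustrates that mapping for a few common types; the field names are placeholders and the type coverage is deliberately incomplete:

```python
# Hedged sketch: convert a plain dict into Firestore's typed-value document
# shape ({"fields": {name: {"stringValue": ...}}}). Only a few types are
# handled; real documents also support doubleValue, arrayValue, mapValue, etc.
def to_firestore_fields(data: dict) -> dict:
    fields = {}
    for key, value in data.items():
        if isinstance(value, bool):  # check bool before int: bool is an int subclass
            fields[key] = {"booleanValue": value}
        elif isinstance(value, int):
            fields[key] = {"integerValue": str(value)}  # integers are string-encoded
        else:
            fields[key] = {"stringValue": str(value)}
    return {"fields": fields}

doc = to_firestore_fields({"name": "Ada", "active": True, "age": 36})
```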
## Operations - Document - Create a document - Create/Update a document - Delete a document - Get a document - Get all documents from a collection - Run a query against your documents - Collection - Get all root collections ## Templates and examples **Create, update, and get a document in Google Cloud Firestore** by Harshil Agrawal [View template details](https://n8n.io/workflows/839-create-update-and-get-a-document-in-google-cloud-firestore/) **🛠️ Google Cloud Firestore Tool MCP Server 💪 all 7 operations** by David Ashby [View template details](https://n8n.io/workflows/5252-google-cloud-firestore-tool-mcp-server-all-7-operations/) **Automated AI News Curation and LinkedIn Posting with GPT-5 and Firebase** by Arthur Dimeglio [View template details](https://n8n.io/workflows/9886-automated-ai-news-curation-and-linkedin-posting-with-gpt-5-and-firebase/) [Browse Google Cloud Firestore integration templates](https://n8n.io/integrations/google-cloud-firestore/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Cloud Natural Language node Use the Google Cloud Natural Language node to automate work in Google Cloud Natural Language, and integrate Google Cloud Natural Language with other applications. n8n has built-in support for a wide range of Google Cloud Natural Language features, including analyzing documents. 
On this page, you'll find a list of operations the Google Cloud Natural Language node supports and links to more resources. Credentials Refer to [Google Cloud Natural Language credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - Document - Analyze Sentiment ## Templates and examples **ETL pipeline for text processing** by Lorena [View template details](https://n8n.io/workflows/1045-etl-pipeline-for-text-processing/) **Automate testimonials in Strapi with n8n** by Tom [View template details](https://n8n.io/workflows/1535-automate-testimonials-in-strapi-with-n8n/) **Add positive feedback messages to a table in Notion** by Harshil Agrawal [View template details](https://n8n.io/workflows/1109-add-positive-feedback-messages-to-a-table-in-notion/) [Browse Google Cloud Natural Language integration templates](https://n8n.io/integrations/google-cloud-natural-language/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Cloud Realtime Database node Use the Google Cloud Realtime Database node to automate work in Google Cloud Realtime Database, and integrate Google Cloud Realtime Database with other applications. n8n has built-in support for a wide range of Google Cloud Realtime Database features, including writing, deleting, getting, and appending data in a database. On this page, you'll find a list of operations the Google Cloud Realtime Database node supports and links to more resources. 
Credentials Refer to [Google Cloud Realtime Database credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Write data to a database - Delete data from a database - Get a record from a database - Append to a list of data - Update item on a database ## Templates and examples [Browse Google Cloud Realtime Database integration templates](https://n8n.io/integrations/google-cloud-realtime-database/), or [search all templates](https://n8n.io/workflows/) # Google Cloud Storage node Use the Google Cloud Storage node to automate work in Google Cloud Storage, and integrate Google Cloud Storage with other applications. n8n has built-in support for a wide range of Google Cloud Storage features, including creating, updating, deleting, and getting buckets and objects. On this page, you'll find a list of operations the Google Cloud Storage node supports and links to more resources. Credentials Refer to [Google Cloud Storage credentials](../../credentials/google/) for guidance on setting up authentication. 
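To make the bucket operations listed below concrete, here is a minimal sketch of the equivalent Cloud Storage JSON API request for creating a bucket. The project and bucket names are placeholders; note that the project travels as a query parameter while the bucket name goes in the body:

```python
import json

# Hedged sketch: URL and body for the Cloud Storage JSON API's bucket
# insert endpoint (POST /storage/v1/b?project=...). Names are placeholders.
def build_bucket_create(project: str, bucket: str) -> tuple[str, str]:
    url = f"https://storage.googleapis.com/storage/v1/b?project={project}"
    return url, json.dumps({"name": bucket})

url, body = build_bucket_create("my-project", "my-bucket")
```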
## Operations - Bucket - Create - Delete - Get - Get Many - Update - Object - Create - Delete - Get - Get Many - Update ## Templates and examples **Transcribe audio files from Cloud Storage** by Lorena [View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/) **Automatic Youtube Shorts Generator** by Samautomation.work [View template details](https://n8n.io/workflows/2856-automatic-youtube-shorts-generator/) **Vector Database as a Big Data Analysis Tool for AI Agents [1/3 anomaly][1/2 KNN]** by Jenny [View template details](https://n8n.io/workflows/2654-vector-database-as-a-big-data-analysis-tool-for-ai-agents-13-anomaly12-knn/) [Browse Google Cloud Storage integration templates](https://n8n.io/integrations/google-cloud-storage/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to Google's [Cloud Storage API documentation](https://cloud.google.com/storage/docs/apis) for detailed information about the API that this node integrates with. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Contacts node Use the Google Contacts node to automate work in Google Contacts, and integrate Google Contacts with other applications. n8n has built-in support for a wide range of Google Contacts features, including creating, updating, retrieving, deleting, and getting contacts. On this page, you'll find a list of operations the Google Contacts node supports and links to more resources. 
Credentials Refer to [Google Contacts credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Contact - Create a contact - Delete a contact - Get a contact - Retrieve all contacts - Update a contact ## Templates and examples **Manage contacts in Google Contacts** by Harshil Agrawal [View template details](https://n8n.io/workflows/637-manage-contacts-in-google-contacts/) **Daily Birthday Reminders from Google Contacts to Slack** by WeblineIndia [View template details](https://n8n.io/workflows/2731-daily-birthday-reminders-from-google-contacts-to-slack/) **Enrich Google Sheet contacts with Dropcontact** by Pauline [View template details](https://n8n.io/workflows/1304-enrich-google-sheet-contacts-with-dropcontact/) [Browse Google Contacts integration templates](https://n8n.io/integrations/google-contacts/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Docs node Use the Google Docs node to automate work in Google Docs, and integrate Google Docs with other applications. 
n8n has built-in support for a wide range of Google Docs features, including creating, updating, and getting documents. On this page, you'll find a list of operations the Google Docs node supports and links to more resources. Credentials Refer to [Google Docs credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Document - Create - Get - Update ## Templates and examples **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) **🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant** by Joseph LePage [View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/) **✨🩷Automated Social Media Content Publishing Factory + System Prompt Composition** by Joseph LePage [View template details](https://n8n.io/workflows/3135-automated-social-media-content-publishing-factory-system-prompt-composition/) [Browse Google Docs integration templates](https://n8n.io/integrations/google-docs/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. 
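As an illustration of the HTTP Request fallback above, Docs operations beyond create/get/update go through the Docs API's `documents.batchUpdate` endpoint, which takes a list of edit requests. A minimal sketch for inserting text, with a placeholder document ID (index 1 is the start of the document body):

```python
import json

# Hedged sketch: URL and body for a Docs API batchUpdate call that inserts
# text at the start of the body. The document ID is a placeholder.
def build_insert_text(document_id: str, text: str) -> tuple[str, str]:
    url = f"https://docs.googleapis.com/v1/documents/{document_id}:batchUpdate"
    body = {"requests": [{"insertText": {"location": {"index": 1}, "text": text}}]}
    return url, json.dumps(body)

url, body = build_insert_text("DOC_ID", "Hello")
```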
Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Perspective node Use the Google Perspective node to automate work in Google Perspective, and integrate Google Perspective with other applications. n8n has built-in support for a wide range of Google Perspective features, including analyzing comments. On this page, you'll find a list of operations the Google Perspective node supports and links to more resources. Credentials Refer to [Google Perspective credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - Analyze Comment ## Templates and examples [Browse Google Perspective integration templates](https://n8n.io/integrations/google-perspective/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Slides node Use the Google Slides node to automate work in Google Slides, and integrate Google Slides with other applications. n8n has built-in support for a wide range of Google Slides features, including creating presentations, and getting pages. On this page, you'll find a list of operations the Google Slides node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Page - Get a page - Get a thumbnail - Presentation - Create a presentation - Get a presentation - Get presentation slides - Replace text in a presentation ## Templates and examples **AI-Powered Post-Sales Call Automated Proposal Generator** by Gerald Denor [View template details](https://n8n.io/workflows/4359-ai-powered-post-sales-call-automated-proposal-generator/) **Dynamically replace images in Google Slides via API** by Emmanuel Bernard - n8n Expert Lausanne [View template details](https://n8n.io/workflows/2244-dynamically-replace-images-in-google-slides-via-api/) **Get all the slides from a presentation and get thumbnails of pages** by Harshil Agrawal [View template details](https://n8n.io/workflows/1035-get-all-the-slides-from-a-presentation-and-get-thumbnails-of-pages/) [Browse Google Slides integration templates](https://n8n.io/integrations/google-slides/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Tasks node Use the Google Tasks node to automate work in Google Tasks, and integrate Google Tasks with other applications. n8n has built-in support for a wide range of Google Tasks features, including adding, updating, and retrieving tasks. 
On this page, you'll find a list of operations the Google Tasks node supports and links to more resources. Credentials Refer to [Google Tasks credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Task - Add a task to task list - Delete a task - Retrieve a task - Retrieve all tasks from a task list - Update a task ## Templates and examples **Automate Image Validation Tasks using AI Vision** by Jimleuk [View template details](https://n8n.io/workflows/2420-automate-image-validation-tasks-using-ai-vision/) **Add Project Tasks to Google Sheets with GPT-4.1-mini Chat Assistant** by Robert Breen [View template details](https://n8n.io/workflows/10230-add-project-tasks-to-google-sheets-with-gpt-41-mini-chat-assistant/) **Sync Google Calendar tasks to Trello every day** by Angel Menendez [View template details](https://n8n.io/workflows/1118-sync-google-calendar-tasks-to-trello-every-day/) [Browse Google Tasks integration templates](https://n8n.io/integrations/google-tasks/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
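As a concrete example of the HTTP Request fallback for Google Tasks, the following sketch builds the request that adding a task corresponds to in the Tasks REST API. The task list ID is a placeholder (the Tasks API accepts `@default` for the default list):

```python
import json

# Hedged sketch: URL and body for the Tasks API's tasks.insert endpoint.
# The task list ID, title, and notes are placeholders.
def build_task_insert(tasklist: str, title: str, notes: str = "") -> tuple[str, str]:
    url = f"https://tasks.googleapis.com/tasks/v1/lists/{tasklist}/tasks"
    return url, json.dumps({"title": title, "notes": notes})

url, body = build_task_insert("@default", "Write docs")
```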
# Google Translate node Use the Google Translate node to automate work in Google Translate, and integrate Google Translate with other applications. n8n has built-in support for a wide range of Google Translate features, including translating languages. On this page, you'll find a list of operations the Google Translate node supports and links to more resources. Credentials Refer to [Google Translate credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Language - Translate data ## Templates and examples **Translate PDF documents from Google drive folder with DeepL** by Milorad Filipovic [View template details](https://n8n.io/workflows/2179-translate-pdf-documents-from-google-drive-folder-with-deepl/) **🉑 Generate Anki flash cards for language learning with Google Translate and GPT** by Samir Saci [View template details](https://n8n.io/workflows/3195-generate-anki-flash-cards-for-language-learning-with-google-translate-and-gpt/) **Translate text from English to German** by Harshil Agrawal [View template details](https://n8n.io/workflows/743-translate-text-from-english-to-german/) [Browse Google Translate integration templates](https://n8n.io/integrations/google-translate/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 
1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Gotify node Use the Gotify node to automate work in Gotify, and integrate Gotify with other applications. n8n has built-in support for a wide range of Gotify features, including creating, deleting, and getting messages. On this page, you'll find a list of operations the Gotify node supports and links to more resources. Credentials Refer to [Gotify credentials](../../credentials/gotify/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Message - Create - Delete - Get All ## Templates and examples **Send daily weather updates via a message using the Gotify node** by Harshil Agrawal [View template details](https://n8n.io/workflows/774-send-daily-weather-updates-via-a-message-using-the-gotify-node/) **Spotify Sync Liked Songs to Playlist** by Dustin [View template details](https://n8n.io/workflows/2634-spotify-sync-liked-songs-to-playlist/) **🛠️ Gotify Tool MCP Server** by David Ashby [View template details](https://n8n.io/workflows/5246-gotify-tool-mcp-server/) [Browse Gotify integration templates](https://n8n.io/integrations/gotify/), or [search all templates](https://n8n.io/workflows/) # GoToWebinar node Use the GoToWebinar node to automate work in GoToWebinar, and integrate GoToWebinar with other applications. n8n has built-in support for a wide range of GoToWebinar features, including creating, getting, and deleting attendees, organizers, and registrants. On this page, you'll find a list of operations the GoToWebinar node supports and links to more resources. 
Credentials Refer to [GoToWebinar credentials](../../credentials/gotowebinar/) for guidance on setting up authentication. ## Operations - Attendee - Get - Get All - Get Details - Co-Organizer - Create - Delete - Get All - Re-invite - Panelist - Create - Delete - Get All - Re-invite - Registrant - Create - Delete - Get - Get All - Session - Get - Get All - Get Details - Webinar - Create - Get - Get All - Update ## Templates and examples [Browse GoToWebinar integration templates](https://n8n.io/integrations/gotowebinar/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Grafana node Use the Grafana node to automate work in Grafana, and integrate Grafana with other applications. n8n has built-in support for a wide range of Grafana features, including creating, updating, deleting, and getting dashboards, teams, and users. On this page, you'll find a list of operations the Grafana node supports and links to more resources. Credentials Refer to [Grafana credentials](../../credentials/grafana/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
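To ground the dashboard operations listed below, here is a minimal sketch of the request shape Grafana's HTTP API uses to create a dashboard (`POST /api/dashboards/db`). The base URL and title are placeholders; `id` and `uid` are left null so Grafana assigns them, and `overwrite` stays false so an existing dashboard is not clobbered:

```python
import json

# Hedged sketch: URL and body for Grafana's dashboard create/update endpoint.
# The base URL and dashboard title are placeholders.
def build_dashboard_create(base_url: str, title: str) -> tuple[str, str]:
    url = f"{base_url}/api/dashboards/db"
    body = {
        "dashboard": {"id": None, "uid": None, "title": title, "panels": []},
        "overwrite": False,  # fail instead of replacing a dashboard with the same name
    }
    return url, json.dumps(body)

url, body = build_dashboard_create("https://grafana.example.com", "Ops overview")
```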
## Operations - Dashboard - Create a dashboard - Delete a dashboard - Get a dashboard - Get all dashboards - Update a dashboard - Team - Create a team - Delete a team - Get a team - Retrieve all teams - Update a team - Team Member - Add a member to a team - Retrieve all team members - Remove a member from a team - User - Delete a user from the current organization - Retrieve all users in the current organization - Update a user in the current organization ## Templates and examples **Set DevOps Infrastructure with Docker, K3s, Jenkins & Grafana for Linux Servers** by Oneclick AI Squad [View template details](https://n8n.io/workflows/6140-set-devops-infrastructure-with-docker-k3s-jenkins-and-grafana-for-linux-servers/) **🛠️ Grafana Tool MCP Server 💪 all 16 operations** by David Ashby [View template details](https://n8n.io/workflows/5245-grafana-tool-mcp-server-all-16-operations/) **Deploy Docker Grafana, API Backend for WHMCS/WISECP** by PUQcloud [View template details](https://n8n.io/workflows/4011-deploy-docker-grafana-api-backend-for-whmcswisecp/) [Browse Grafana integration templates](https://n8n.io/integrations/grafana/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Grist node Use the Grist node to automate work in Grist, and integrate Grist with other applications. n8n has built-in support for a wide range of Grist features, including creating, updating, deleting, and reading rows in a table. 
On this page, you'll find a list of operations the Grist node supports and links to more resources. Credentials Refer to [Grist credentials](../../credentials/grist/) for guidance on setting up authentication. ## Operations - Create rows in a table - Delete rows from a table - Read rows from a table - Update rows in a table ## Templates and examples [Browse Grist integration templates](https://n8n.io/integrations/grist/), or [search all templates](https://n8n.io/workflows/) ## Get the Row ID To update or delete a particular record, you need the Row ID. There are two ways to get the Row ID: **Create a Row ID column in Grist** Create a new column in your Grist table with the formula `$id`. **Use the Get All operation** The **Get All** operation returns the Row ID of each record along with the fields. You can get it with the expression `{{$("GristNodeName").item.json.id}}`. ## Filter records when using the Get All operation - Select **Add Option** and select **Filter** from the dropdown list. - You can add filters for any number of columns. The result will only include records which match all the columns. - For each column, you can enter any number of values separated by commas. The result will include records which match any of the values for that column. # Google Workspace Admin node Use the Google Workspace Admin node to automate work in Google Workspace Admin, and integrate Google Workspace Admin with other applications. n8n has built-in support for a wide range of Google Workspace Admin features, including creating, updating, deleting, and getting users, groups, and ChromeOS devices. On this page, you'll find a list of operations the Google Workspace Admin node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - ChromeOS Device - Get a ChromeOS device - Get many ChromeOS devices - Update a ChromeOS device - Change the status of a ChromeOS device - Group - Create a group - Delete a group - Get a group - Get many groups - Update a group - User - Add an existing user to a group - Create a user - Delete a user - Get a user - Get many users - Remove a user from a group - Update a user ## Templates and examples **Manage users using the G Suite Admin node** by Harshil Agrawal [View template details](https://n8n.io/workflows/710-manage-users-using-the-g-suite-admin-node/) **🛠️ Google Workspace Admin Tool MCP Server 💪 all 16 operations** by David Ashby [View template details](https://n8n.io/workflows/5251-google-workspace-admin-tool-mcp-server-all-16-operations/) **Automate employee onboarding and Google Workspace account creation with Gmail, Google Sheets, PDFBro and Google Gemini** by iamvaar [View template details](https://n8n.io/workflows/13145-automate-employee-onboarding-and-google-workspace-account-creation-with-gmail-google-sheets-pdfbro-and-google-gemini/) [Browse Google Workspace Admin integration templates](https://n8n.io/integrations/google-workspace-admin/), or [search all templates](https://n8n.io/workflows/) ## How to control which custom fields to fetch for a user There are three different ways to control which custom fields to retrieve when getting a user's information. Use the **Custom Fields** parameter to select one of the following: - **Don't Include**: Doesn't include any custom fields. - **Custom**: Includes the custom fields from schemas in **Custom Schema Names or IDs**. - **Include All**: Include all the fields associated with the user. To include custom fields, follow these steps: 1. 
Select **Custom** from the **Custom Fields** dropdown list. 1. Select the schema names you want to include in the **Custom Schema Names or IDs** dropdown list. # Hacker News node Use the Hacker News node to automate work in Hacker News, and integrate Hacker News with other applications. n8n has built-in support for a wide range of Hacker News features, including getting articles, and users. On this page, you'll find a list of operations the Hacker News node supports and links to more resources. Credentials This node doesn't require authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - All - Get all items - Article - Get a Hacker News article - User - Get a Hacker News user ## Templates and examples **Hacker News to Video Content** by Alex Kim [View template details](https://n8n.io/workflows/2557-hacker-news-to-video-content/) **AI chat with any data source (using the n8n workflow tool)** by David Roberts [View template details](https://n8n.io/workflows/2026-ai-chat-with-any-data-source-using-the-n8n-workflow-tool/) **Community Insights using Qdrant, Python and Information Extractor** by Jimleuk [View template details](https://n8n.io/workflows/2374-community-insights-using-qdrant-python-and-information-extractor/) [Browse Hacker News integration templates](https://n8n.io/integrations/hacker-news/), or [search all templates](https://n8n.io/workflows/) # HaloPSA node Use the HaloPSA node to automate work in HaloPSA, and integrate HaloPSA with other applications. n8n has built-in support for a wide range of HaloPSA features, including creating, updating, deleting, and getting clients, sites and tickets. 
On this page, you'll find a list of operations the HaloPSA node supports and links to more resources. Credentials Refer to [HaloPSA credentials](../../credentials/halopsa/) for guidance on setting up authentication. ## Operations - Client - Create a client - Delete a client - Get a client - Get all clients - Update a client - Site - Create a site - Delete a site - Get a site - Get all sites - Update a site - Ticket - Create a ticket - Delete a ticket - Get a ticket - Get all tickets - Update a ticket - User - Create a user - Delete a user - Get a user - Get all users - Update a user ## Templates and examples [Browse HaloPSA integration templates](https://n8n.io/integrations/halopsa/), or [search all templates](https://n8n.io/workflows/) # Harvest node Use the Harvest node to automate work in Harvest, and integrate Harvest with other applications. n8n has built-in support for a wide range of Harvest features, including creating, updating, deleting, and getting clients, contacts, invoices, tasks, expenses, users, and projects. On this page, you'll find a list of operations the Harvest node supports and links to more resources. Credentials Refer to [Harvest credentials](../../credentials/harvest/) for guidance on setting up authentication. 
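The operations below go over Harvest's v2 REST API. If you ever call that API directly with the HTTP Request node, each request needs two identifying headers in addition to the usual ones. A hedged sketch with placeholder token and account values:

```python
# Hedged sketch: the headers Harvest's v2 REST API expects on every request,
# per Harvest's API documentation. Token, account ID, and User-Agent contact
# address are placeholders.
HARVEST_BASE = "https://api.harvestapp.com/v2"

def harvest_headers(access_token, account_id):
    return {
        "Authorization": f"Bearer {access_token}",
        "Harvest-Account-Id": str(account_id),
        "User-Agent": "n8n-example (you@example.com)",  # Harvest asks callers to identify themselves
    }

# Example: the endpoint behind the Client operations
url = f"{HARVEST_BASE}/clients"
headers = harvest_headers("YOUR_TOKEN", 12345)
```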
## Operations - Client - Create a client - Delete a client - Get data of a client - Get data of all clients - Update a client - Company - Retrieves the company for the currently authenticated user - Contact - Create a contact - Delete a contact - Get data of a contact - Get data of all contacts - Update a contact - Estimate - Create an estimate - Delete an estimate - Get data of an estimate - Get data of all estimates - Update an estimate - Expense - Get data of an expense - Get data of all expenses - Create an expense - Update an expense - Delete an expense - Invoice - Get data of an invoice - Get data of all invoices - Create an invoice - Update an invoice - Delete an invoice - Project - Create a project - Delete a project - Get data of a project - Get data of all projects - Update a project - Task - Create a task - Delete a task - Get data of a task - Get data of all tasks - Update a task - Time Entries - Create a time entry using duration - Create a time entry using start and end time - Delete a time entry - Delete a time entry's external reference. 
- Get data of a time entry - Get data of all time entries - Restart a time entry - Stop a time entry - Update a time entry - User - Create a user - Delete a user - Get data of a user - Get data of all users - Get data of authenticated user - Update a user ## Templates and examples **Automated Investor Intelligence: CrunchBase to Google Sheets Data Harvester** by Yaron Been [View template details](https://n8n.io/workflows/4731-automated-investor-intelligence-crunchbase-to-google-sheets-data-harvester/) **Process Shopify new orders with Zoho CRM and Harvest** by Lorena [View template details](https://n8n.io/workflows/1206-process-shopify-new-orders-with-zoho-crm-and-harvest/) **Create a client in Harvest** by tanaypant [View template details](https://n8n.io/workflows/494-create-a-client-in-harvest/) [Browse Harvest integration templates](https://n8n.io/integrations/harvest/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Help Scout node Use the Help Scout node to automate work in Help Scout, and integrate Help Scout with other applications. n8n has built-in support for a wide range of Help Scout features, including creating, updating, deleting, and getting conversations, and customers. On this page, you'll find a list of operations the Help Scout node supports and links to more resources. Credentials Refer to [Help Scout credentials](../../credentials/helpscout/) for guidance on setting up authentication. 
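The conversation operations below map onto Help Scout's Mailbox API 2.0. As a hedged illustration of what a "Get all conversations" call looks like at the HTTP level — the mailbox ID is a placeholder, and the parameter names follow Help Scout's API docs:

```python
from urllib.parse import urlencode

# Hedged sketch: building the listing URL for Help Scout's Mailbox API 2.0
# (GET /v2/conversations). The mailbox ID is a made-up example value.
BASE = "https://api.helpscout.net/v2/conversations"

def list_conversations_url(mailbox_id, status="active"):
    query = urlencode({"mailbox": mailbox_id, "status": status})
    return f"{BASE}?{query}"

url = list_conversations_url(1234)
```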
## Operations - Conversation - Create a new conversation - Delete a conversation - Get a conversation - Get all conversations - Customer - Create a new customer - Get a customer - Get all customers - Get customer property definitions - Update a customer - Mailbox - Get data of a mailbox - Get all mailboxes - Thread - Create a new chat thread - Get all chat threads ## Templates and examples **Get all mailboxes from Help Scout** by amudhan [View template details](https://n8n.io/workflows/567-get-all-mailboxes-from-help-scout/) **Receive updates when a customer is created in HelpScout** by Harshil Agrawal [View template details](https://n8n.io/workflows/669-receive-updates-when-a-customer-is-created-in-helpscout/) **Draft AI-powered Help Scout replies with HubSpot and SMS customer 360 context** by Ahmed Salama [View template details](https://n8n.io/workflows/15228-draft-ai-powered-help-scout-replies-with-hubspot-and-sms-customer-360-context/) [Browse Help Scout integration templates](https://n8n.io/integrations/helpscout/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # HighLevel node Use the HighLevel node to automate work in HighLevel, and integrate HighLevel with other applications. n8n has built-in support for a wide range of HighLevel features, including creating, updating, deleting, and getting contacts, opportunities, and tasks, as well as booking appointments and getting free time slots in calendars. 
On this page, you'll find a list of operations the HighLevel node supports and links to more resources. Credentials Refer to [HighLevel credentials](../../credentials/highlevel/) for guidance on setting up authentication. ## Operations - Contact - Create or update - Delete - Get - Get many - Update - Opportunity - Create - Delete - Get - Get many - Update - Task - Create - Delete - Get - Get many - Update - Calendar - Book an appointment - Get free slots ## Templates and examples **High-Level Service Page SEO Blueprint Report Generator** by Custom Workflows AI [View template details](https://n8n.io/workflows/3583-high-level-service-page-seo-blueprint-report-generator/) **Verify mailing address deliverability of new contacts in HighLevel Using Lob** by Belmont Digital [View template details](https://n8n.io/workflows/2171-verify-mailing-address-deliverability-of-new-contacts-in-highlevel-using-lob/) **Create an Automated Customer Support Assistant with GPT-4o and GoHighLevel SMS** by Cyril Nicko Gaspar [View template details](https://n8n.io/workflows/4223-create-an-automated-customer-support-assistant-with-gpt-4o-and-gohighlevel-sms/) [Browse HighLevel integration templates](https://n8n.io/integrations/highlevel/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [HighLevel's API documentation and support forums](https://help.gohighlevel.com/support/solutions/articles/48001060529-highlevel-api) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. 
Refer to [Custom API operations](../../../custom-operations/) for more information. # Home Assistant node Use the Home Assistant node to automate work in Home Assistant, and integrate Home Assistant with other applications. n8n has built-in support for a wide range of Home Assistant features, including getting, creating, and checking camera proxies, configurations, logs, services, and templates. On this page, you'll find a list of operations the Home Assistant node supports and links to more resources. Credentials Refer to [Home Assistant credentials](../../credentials/homeassistant/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Camera Proxy - Get the camera screenshot - Config - Get the configuration - Check the configuration - Event - Create an event - Get all events - Log - Get a log for a specific entity - Get all logs - Service - Call a service within a specific domain - Get all services - State - Create a new record, or update the current one if it already exists (upsert) - Get a state for a specific entity - Get all states - Template - Create a template ## Templates and examples **Turn on a light to a specific color on any update in GitHub repository** by n8n Team [View template details](https://n8n.io/workflows/1856-turn-on-a-light-to-a-specific-color-on-any-update-in-github-repository/) **Birthday and Ephemeris Notification (Google Contact, Telegram & Home Assistant)** by Thibaud [View template details](https://n8n.io/workflows/4462-birthday-and-ephemeris-notification-google-contact-telegram-and-home-assistant/) **📍 Daily Nearby Garage Sales Alerts via Telegram** by Thibaud [View template 
details](https://n8n.io/workflows/4649-daily-nearby-garage-sales-alerts-via-telegram/) [Browse Home Assistant integration templates](https://n8n.io/integrations/home-assistant/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Home Assistant's documentation](https://developers.home-assistant.io/docs/api/rest/) for more information about the service. # HubSpot node Use the HubSpot node to automate work in HubSpot, and integrate HubSpot with other applications. n8n has built-in support for a wide range of HubSpot features, including creating, updating, deleting, and getting contacts, deals, lists, engagements and companies. On this page, you'll find a list of operations the HubSpot node supports and links to more resources. Credentials Refer to [HubSpot credentials](../../credentials/hubspot/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
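The contact operations below correspond to HubSpot's CRM v3 objects API. If you call it directly with the HTTP Request node, contact properties travel inside a `properties` envelope — a minimal sketch, with illustrative values:

```python
import json

# Hedged sketch: the body shape HubSpot's CRM v3 API expects when creating
# a contact (POST /crm/v3/objects/contacts). The "properties" envelope comes
# from HubSpot's API docs; the property values here are examples.
def contact_payload(email, firstname=None, lastname=None):
    properties = {"email": email}
    if firstname:
        properties["firstname"] = firstname
    if lastname:
        properties["lastname"] = lastname
    return {"properties": properties}

payload = contact_payload("ada@example.com", firstname="Ada")
body = json.dumps(payload)
```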
## Operations - Contact - Create/Update a contact - Delete a contact - Get a contact - Get all contacts - Get recently created/updated contacts - Search contacts - Contact List - Add contact to a list - Remove a contact from a list - Company - Create a company - Delete a company - Get a company - Get all companies - Get recently created companies - Get recently modified companies - Search companies by domain - Update a company - Deal - Create a deal - Delete a deal - Get a deal - Get all deals - Get recently created deals - Get recently modified deals - Search deals - Update a deal - Engagement - Create an engagement - Delete an engagement - Get an engagement - Get all engagements - Form - Get all fields from a form - Submit data to a form - Ticket - Create a ticket - Delete a ticket - Get a ticket - Get all tickets - Update a ticket ## Templates and examples **Real Estate Lead Generation with BatchData Skip Tracing & CRM Integration** by Preston Zeller [View template details](https://n8n.io/workflows/3666-real-estate-lead-generation-with-batchdata-skip-tracing-and-crm-integration/) **Create HubSpot contacts from LinkedIn post interactions** by Pauline [View template details](https://n8n.io/workflows/1323-create-hubspot-contacts-from-linkedin-post-interactions/) **Update HubSpot when a new invoice is registered in Stripe** by Jonathan [View template details](https://n8n.io/workflows/1468-update-hubspot-when-a-new-invoice-is-registered-in-stripe/) [Browse HubSpot integration templates](https://n8n.io/integrations/hubspot/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. 
Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Humantic AI node Use the Humantic AI node to automate work in Humantic AI, and integrate Humantic AI with other applications. n8n has built-in support for a wide range of Humantic AI features, including creating, retrieving, and updating profiles. On this page, you'll find a list of operations the Humantic AI node supports and links to more resources. Credentials Refer to [Humantic AI credentials](../../credentials/humanticai/) for guidance on setting up authentication. ## Operations - Profile - Create a profile - Retrieve a profile - Update a profile ## Templates and examples **Enrich and manage candidates data in Notion** by Harshil Agrawal [View template details](https://n8n.io/workflows/1107-enrich-and-manage-candidates-data-in-notion/) **Create, update, and get a profile in Humantic AI** by Harshil Agrawal [View template details](https://n8n.io/workflows/784-create-update-and-get-a-profile-in-humantic-ai/) **Get, Create, Upadte Profiles 🛠️ Humantic AI Tool MCP Server** by David Ashby [View template details](https://n8n.io/workflows/5240-get-create-upadte-profiles-humantic-ai-tool-mcp-server/) [Browse Humantic AI integration templates](https://n8n.io/integrations/humantic-ai/), or [search all templates](https://n8n.io/workflows/) # Hunter node Use the Hunter node to automate work in Hunter, and integrate Hunter with other applications. n8n has built-in support for a wide range of Hunter features, including getting, generating, and verifying email addresses. On this page, you'll find a list of operations the Hunter node supports and links to more resources. Credentials Refer to [Hunter credentials](../../credentials/hunter/) for guidance on setting up authentication. 
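The three operations below correspond to simple GET endpoints in Hunter's v2 API. A hedged sketch of how the request URLs are assembled, with a placeholder API key:

```python
from urllib.parse import urlencode

# Hedged sketch: the GET endpoints behind the Hunter node's operations,
# per Hunter's v2 API documentation. The API key is a placeholder.
BASE = "https://api.hunter.io/v2"

def domain_search_url(domain, api_key):
    # Every email address found for a domain, with sources
    return f"{BASE}/domain-search?" + urlencode({"domain": domain, "api_key": api_key})

def email_verifier_url(email, api_key):
    # Deliverability check for a single address
    return f"{BASE}/email-verifier?" + urlencode({"email": email, "api_key": api_key})

url = domain_search_url("example.com", "YOUR_API_KEY")
```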
## Operations - Get every email address found on the internet using a given domain name, with sources - Generate or retrieve the most likely email address from a domain name, a first name and a last name - Verify the deliverability of an email address ## Templates and examples **Find and email ANYONE on LinkedIn with OpenAI, Hunter & Gmail** by Abhijay Vuyyuru [View template details](https://n8n.io/workflows/5015-find-and-email-anyone-on-linkedin-with-openai-hunter-and-gmail/) **Automated Job Hunter: Upwork Opportunity Aggregator & AI-Powered Notifier** by Yaron Been [View template details](https://n8n.io/workflows/4733-automated-job-hunter-upwork-opportunity-aggregator-and-ai-powered-notifier/) **Automatically email great leads when they submit a form and record in HubSpot** by Mutasem [View template details](https://n8n.io/workflows/2122-automatically-email-great-leads-when-they-submit-a-form-and-record-in-hubspot/) [Browse Hunter integration templates](https://n8n.io/integrations/hunter/), or [search all templates](https://n8n.io/workflows/) # Intercom node Use the Intercom node to automate work in Intercom, and integrate Intercom with other applications. n8n has built-in support for a wide range of Intercom features, including creating, updating, deleting, and getting companies, leads, and users. On this page, you'll find a list of operations the Intercom node supports and links to more resources. Credentials Refer to [Intercom credentials](../../credentials/intercom/) for guidance on setting up authentication. 
## Operations - Company - Create a new company - Get data of a company - Get data of all companies - Update a company - List company's users - Lead - Create a new lead - Delete a lead - Get data of a lead - Get data of all leads - Update a lead - User - Create a new user - Delete a user - Get data of a user - Get data of all users - Update a user ## Templates and examples **Enrich new Intercom users with contact details and more from ExactBuyer** by Mutasem [View template details](https://n8n.io/workflows/2108-enrich-new-intercom-users-with-contact-details-and-more-from-exactbuyer/) **Create a new user in Intercom** by tanaypant [View template details](https://n8n.io/workflows/464-create-a-new-user-in-intercom/) **Autonomous Customizable Support Chatbot on Intercom + Discord Thread Reports** by Theo Marcadet [View template details](https://n8n.io/workflows/3558-autonomous-customizable-support-chatbot-on-intercom-discord-thread-reports/) [Browse Intercom integration templates](https://n8n.io/integrations/intercom/), or [search all templates](https://n8n.io/workflows/) # Invoice Ninja node Use the Invoice Ninja node to automate work in Invoice Ninja, and integrate Invoice Ninja with other applications. n8n has built-in support for a wide range of Invoice Ninja features, including creating, updating, deleting, and getting clients, expenses, invoices, payments, and quotes. On this page, you'll find a list of operations the Invoice Ninja node supports and links to more resources. Credentials Refer to [Invoice Ninja credentials](../../credentials/invoiceninja/) for guidance on setting up authentication. 
## Operations - Client - Create a new client - Delete a client - Get data of a client - Get data of all clients - Expense - Create a new expense - Delete an expense - Get data of an expense - Get data of all expenses - Invoice - Create a new invoice - Delete an invoice - Email an invoice - Get data of an invoice - Get data of all invoices - Payment - Create a new payment - Delete a payment - Get data of a payment - Get data of all payments - Quote - Create a new quote - Delete a quote - Email a quote - Get data of a quote - Get data of all quotes - Task - Create a new task - Delete a task - Get data of a task - Get data of all tasks ## Templates and examples **Receive updates on a new invoice via Invoice Ninja** by amudhan [View template details](https://n8n.io/workflows/535-receive-updates-on-a-new-invoice-via-invoice-ninja/) **Get multiple clients' data from Invoice Ninja** by amudhan [View template details](https://n8n.io/workflows/534-get-multiple-clients-data-from-invoice-ninja/) **Automate Invoice Creation and Delivery with Google Sheets, Invoice Ninja and Gmail** by Marth - Business Automation [View template details](https://n8n.io/workflows/6447-automate-invoice-creation-and-delivery-with-google-sheets-invoice-ninja-and-gmail/) [Browse Invoice Ninja integration templates](https://n8n.io/integrations/invoice-ninja/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
# Iterable node Use the Iterable node to automate work in Iterable, and integrate Iterable with other applications. n8n has built-in support for a wide range of Iterable features, including creating users, recording the actions users perform, and adding and removing users from lists. On this page, you'll find a list of operations the Iterable node supports and links to more resources. Credentials Refer to [Iterable credentials](../../credentials/iterable/) for guidance on setting up authentication. ## Operations - Event - Record the actions a user performs - User - Create/Update a user - Delete a user - Get a user - User List - Add a user to a list - Remove a user from a list ## Templates and examples [Browse Iterable integration templates](https://n8n.io/integrations/iterable/), or [search all templates](https://n8n.io/workflows/) # Jenkins node Use the Jenkins node to automate work in Jenkins, and integrate Jenkins with other applications. n8n has built-in support for a wide range of Jenkins features, including listing builds, managing instances, and creating and copying jobs. On this page, you'll find a list of operations the Jenkins node supports and links to more resources. Credentials Refer to [Jenkins credentials](../../credentials/jenkins/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
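The instance and job operations below line up with endpoints in Jenkins' remote-access API. If you hit Jenkins directly with the HTTP Request node, a hedged sketch of that mapping — the base URL is a placeholder:

```python
# Hedged sketch: mapping the Jenkins node's Instance operations onto the
# POST endpoints Jenkins' remote-access API exposes. The base URL is a
# made-up example value.
JENKINS = "https://jenkins.example.com"

INSTANCE_ENDPOINTS = {
    "quietDown": f"{JENKINS}/quietDown",            # no new builds start
    "cancelQuietDown": f"{JENKINS}/cancelQuietDown",
    "restart": f"{JENKINS}/restart",                # restart immediately
    "safeRestart": f"{JENKINS}/safeRestart",        # restart once jobs finish
    "exit": f"{JENKINS}/exit",                      # shut down immediately
    "safeExit": f"{JENKINS}/safeExit",              # shut down once jobs finish
}

def trigger_job_url(job_name):
    # POSTing to this URL queues a build of the named job
    return f"{JENKINS}/job/{job_name}/build"
```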
## Operations - Build - List Builds - Instance - Cancel quiet down state - Put Jenkins in quiet mode: no builds can be started and Jenkins is ready for shutdown - Restart Jenkins immediately on environments where it's possible - Restart Jenkins once no jobs are running on environments where it's possible - Shutdown once no jobs are running - Shutdown Jenkins immediately - Job - Copy a specific job - Create a new job - Trigger a specific job ## Templates and examples [Browse Jenkins integration templates](https://n8n.io/integrations/jenkins/), or [search all templates](https://n8n.io/workflows/) # Jina AI node Use the Jina AI node to automate work in Jina AI and integrate Jina AI with other applications. n8n has built-in support for a wide range of Jina AI features. On this page, you'll find a list of operations the Jina AI node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/jinaai/). ## Operations - **Reader**: - **Read**: Fetches content from a URL and converts it to clean, LLM-friendly formats. - **Search**: Performs a web search using Jina AI and returns the top results as clean, LLM-friendly formats. - **Research**: - **Deep Research**: Research a topic and generate a structured research report. 
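The Read operation above maps onto Jina's reader endpoint, which works by prefixing the target page's URL. A sketch of that scheme:

```python
# Hedged sketch of the URL scheme behind Jina AI's Reader: the target URL is
# appended, as-is, to the reader endpoint, which returns the page as clean,
# LLM-friendly markdown.
READER_PREFIX = "https://r.jina.ai/"

def reader_url(target_url):
    return READER_PREFIX + target_url

url = reader_url("https://example.com/article")
# GET this URL (optionally with an "Authorization: Bearer <key>" header for
# higher rate limits) to receive the page content as markdown.
```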
## Templates and examples **AI Powered Web Scraping with Jina, Google Sheets and OpenAI : the EASY way** by Derek Cheung [View template details](https://n8n.io/workflows/2552-ai-powered-web-scraping-with-jina-google-sheets-and-openai-the-easy-way/) **AI-Powered Information Monitoring with OpenAI, Google Sheets, Jina AI and Slack** by Dataki [View template details](https://n8n.io/workflows/2799-ai-powered-information-monitoring-with-openai-google-sheets-jina-ai-and-slack/) **AI-Powered Research with Jina AI Deep Search** by Leonard [View template details](https://n8n.io/workflows/3068-ai-powered-research-with-jina-ai-deep-search/) [Browse Jina AI integration templates](https://n8n.io/integrations/jina-ai/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Jina AI's reader API documentation](https://r.jina.ai/docs) and [Jina AI's search API documentation](https://s.jina.ai/docs) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Jira Software node Use the Jira Software node to automate work in Jira, and integrate Jira with other applications. n8n has built-in support for a wide range of Jira features, including creating, updating, deleting, and getting issues, and users. On this page, you'll find a list of operations the Jira Software node supports and links to more resources. Credentials Refer to [Jira credentials](../../credentials/jira/) for guidance on setting up authentication. 
This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Issue - Get issue changelog - Create a new issue - Delete an issue - Get an issue - Get all issues - Create an email notification for an issue and add it to the mail queue - Return either all transitions or a transition that can be performed by the user on an issue, based on the issue's status - Update an issue - Issue Attachment - Add attachment to issue - Get an attachment - Get all attachments - Remove an attachment - Issue Comment - Add comment to issue - Get a comment - Get all comments - Remove a comment - Update a comment - User - Create a new user - Delete a user - Retrieve a user ## Templates and examples **Automate Customer Support Issue Resolution using AI Text Classifier** by Jimleuk [View template details](https://n8n.io/workflows/2468-automate-customer-support-issue-resolution-using-ai-text-classifier/) **Create a new issue in Jira** by tanaypant [View template details](https://n8n.io/workflows/459-create-a-new-issue-in-jira/) **Analyze & Sort Suspicious Email Contents with ChatGPT** by Angel Menendez [View template details](https://n8n.io/workflows/2666-analyze-and-sort-suspicious-email-contents-with-chatgpt/) [Browse Jira Software integration templates](https://n8n.io/integrations/jira-software/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. 
In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Related resources Refer to the [official JQL documentation](https://www.atlassian.com/software/jira/guides/expand-jira/jql) about Jira Query Language (JQL) to learn more about it. ## Fetch issues for a specific project The **Get All** operation returns all the issues from Jira. To fetch issues for a particular project, you need to use Jira Query Language (JQL). For example, if you want to receive all the issues of a project named `n8n`, you'd do something like this: - Select **Get All** from the **Operation** dropdown list. - Toggle **Return All** to true. - Select **Add Option** and select **JQL**. - Enter `project=n8n` in the **JQL** field. This query will fetch all the issues in the project named `n8n`. Enter the name of your project instead of `n8n` to fetch all the issues for your project. # Kafka node Use the Kafka node to automate work in Kafka, and integrate Kafka with other applications. n8n has built-in support for a wide range of Kafka features, including sending messages. On this page, you'll find a list of operations the Kafka node supports and links to more resources. Credentials Refer to [Kafka credentials](../../credentials/kafka/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Send message ## Templates and examples [Browse Kafka integration templates](https://n8n.io/integrations/kafka/), or [search all templates](https://n8n.io/workflows/) # Keap node Use the Keap node to automate work in Keap, and integrate Keap with other applications. n8n has built-in support for a wide range of Keap features, including creating, updating, deleting, and getting companies, products, ecommerce orders, emails, and files. On this page, you'll find a list of operations the Keap node supports and links to more resources. Credentials Refer to [Keap credentials](../../credentials/keap/) for guidance on setting up authentication. ## Operations - Company - Create a company - Retrieve all companies - Contact - Create/update a contact - Delete a contact - Retrieve a contact - Retrieve all contacts - Contact Note - Create a note - Delete a note - Get a note - Retrieve all notes - Update a note - Contact Tag - Add a list of tags to a contact - Delete a contact's tag - Retrieve all of a contact's tags - Ecommerce Order - Create an ecommerce order - Get an ecommerce order - Delete an ecommerce order - Retrieve all ecommerce orders - Ecommerce Product - Create an ecommerce product - Delete an ecommerce product - Get an ecommerce product - Retrieve all ecommerce products - Email - Create a record of an email sent to a contact - Retrieve all sent emails - Send Email - File - Delete a file - Retrieve all files - Upload a file ## Templates and examples **Verify mailing address deliverability of contacts in Keap/Infusionsoft Using Lob** by Belmont Digital [View template details](https://n8n.io/workflows/2251-verify-mailing-address-deliverability-of-contacts-in-keapinfusionsoft-using-lob/) **Get all contacts from Keap** by amudhan [View template details](https://n8n.io/workflows/553-get-all-contacts-from-keap/) **Receive updates when a new contact is added in Keap** by amudhan [View template 
details](https://n8n.io/workflows/554-receive-updates-when-a-new-contact-is-added-in-keap/) [Browse Keap integration templates](https://n8n.io/integrations/keap/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Kitemaker node Use the Kitemaker node to automate work in Kitemaker, and integrate Kitemaker with other applications. n8n has built-in support for a wide range of Kitemaker features, including retrieving data on organizations, spaces and users, as well as creating, getting, and updating work items. On this page, you'll find a list of operations the Kitemaker node supports and links to more resources. Credentials Refer to [Kitemaker credentials](../../credentials/kitemaker/) for guidance on setting up authentication. ## Operations - Organization - Retrieve data on the logged-in user's organization. - Space - Retrieve data on all the spaces in the logged-in user's organization. - User - Retrieve data on all the users in the logged-in user's organization. - Work Item - Create - Get - Get All - Update ## Templates and examples [Browse Kitemaker integration templates](https://n8n.io/integrations/kitemaker/), or [search all templates](https://n8n.io/workflows/) # KoboToolbox node Use the KoboToolbox node to automate work in KoboToolbox, and integrate KoboToolbox with other applications. 
n8n has built-in support for a wide range of KoboToolbox features, including creating, updating, deleting, and getting files, forms, hooks, and submissions. On this page, you'll find a list of operations the KoboToolbox node supports and links to more resources. Credentials Refer to [KoboToolbox credentials](../../credentials/kobotoolbox/) for guidance on setting up authentication. ## Operations - File - Create - Delete - Get - Get Many - Form - Get - Get Many - Redeploy - Hook - Get - Get Many - Logs - Retry All - Retry One - Submission - Delete - Get - Get Many - Get Validation Status - Update Validation Status ## Templates and examples [Browse KoboToolbox integration templates](https://n8n.io/integrations/kobotoolbox/), or [search all templates](https://n8n.io/workflows/) ## Options ### Query Options The Query Submission operation supports query options: - In the main section of the **Parameters** panel: - **Start** controls the index offset to start the query from (used for the API's pagination logic). - **Limit** sets the maximum number of records to return. Note that the API caps the number of returned records at 30,000, regardless of the value you provide. - In the **Query Options** section, you can activate the following parameters: - **Query** lets you specify filter predicates in MongoDB's JSON query format. For example: `{"status": "success", "_submission_time": {"$lt": "2021-11-01T01:02:03"}}` queries for all submissions with the value `success` for the field `status`, and submitted before November 1st, 2021, 01:02:03. - **Fields** lets you specify the list of fields you want to fetch, to make the response lighter. - **Sort** lets you provide a list of sorting criteria in MongoDB JSON format. For example, `{"status": 1, "_submission_time": -1}` specifies a sort order by ascending status, and then descending submission time. 
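If you build the **Query** value programmatically (for example, in a Code node upstream), here's a minimal sketch of composing and serializing such a MongoDB-style filter. The field names mirror the example above; adjust them to your own form:

```python
import json

# Hypothetical filter: submissions with status "success",
# received before November 1st, 2021, 01:02:03.
query = {
    "status": "success",
    "_submission_time": {"$lt": "2021-11-01T01:02:03"},
}

# The Query option expects the filter as a JSON string.
query_string = json.dumps(query)
print(query_string)
```

Pasting the printed string into the **Query** field applies the same filter shown in the example above.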
More details about these options can be found in the Formhub API docs. ### Submission options All operations that return form submission data offer options to tweak the response. These include: - Download options let you download any attachments linked to each form submission, such as pictures and videos. They also let you select the naming pattern, and the file size to download (if available, typically for images). - Formatting options perform some reformatting as described in [About reformatting](#about-reformatting). #### About reformatting The default JSON format for KoboToolbox submission data is sometimes hard to deal with, because it's not schema-aware, and all fields are therefore returned as strings. This node provides lightweight, opinionated reformatting logic, enabled with the **Reformat?** parameter, available on all operations that return form submissions: the submission query, get, and the attachment download operations. When enabled, the reformatting: - Reorganizes the JSON into a multi-level hierarchy following the form's groups. By default, question grouping hierarchy is materialized by a `/` character in the field names, for example `Group1/Question1`. With reformatting enabled, n8n reorganizes these into `Group1.Question1`, as nested JSON objects. - Renames fields to trim `_` characters (which many downstream systems don't support). - Parses all geospatial fields (Point, Line, and Area question types) into their standard GeoJSON equivalent. - Splits all fields matching any of the **Multiselect Mask** wildcard masks into an array. Because multi-select fields appear as space-separated strings, they can't be detected algorithmically, so you must provide a field naming mask. Format the masks as a comma-separated list. Lists support the `*` wildcard. - Converts all fields matching any of the **Number Mask** wildcard masks into a JSON float. 
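As an illustration only (the node implements this internally), the mask-based conversions and group nesting described above can be approximated in a few lines of Python; the function names are hypothetical, not part of the node:

```python
import fnmatch

def apply_masks(name, value, multiselect_masks, number_masks):
    """Split space-separated multi-selects and parse numbers,
    based on `*` wildcard masks, as the node's options describe."""
    if any(fnmatch.fnmatch(name, m) for m in multiselect_masks):
        return value.split(" ")
    if any(fnmatch.fnmatch(name, m) for m in number_masks):
        return float(value)
    return value

def nest(flat):
    """Trim `_` from field names and turn `Group/Question` keys
    into nested JSON objects."""
    out = {}
    for key, value in flat.items():
        parts = key.strip("_").split("/")
        node = out
        for part in parts[:-1]:
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return out

print(apply_masks("Crops_Grown", "maize beans avocado", ["Crops_*"], ["*_sqm"]))
print(nest({"Field_Details/Field_Name": "Test Fields", "_id": 471987}))
```

The masks here (`Crops_*`, `*_sqm`) match the worked example that follows.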
Here's a detailed example in JSON: ``` { "_id": 471987, "formhub/uuid": "189436bb09a54957bfcc798e338b54d6", "start": "2021-12-05T16:13:38.527+02:00", "end": "2021-12-05T16:15:33.407+02:00", "Field_Details/Field_Name": "Test Fields", "Field_Details/Field_Location": "-1.932914 30.078211 1421 165", "Field_Details/Field_Shape": "-1.932914 30.078211 1421 165;-1.933011 30.078085 0 0;-1.933257 30.078004 0 0;-1.933338 30.078197 0 0;-1.933107 30.078299 0 0;-1.932914 30.078211 1421 165", "Field_Details/Crops_Grown": "maize beans avocado", "Field_Details/Field_Size_sqm": "2300", "__version__": "veGcULpqP6JNFKRJbbMvMs", "meta/instanceID": "uuid:2356cbbe-c1fd-414d-85c8-84f33e92618a", "_xform_id_string": "ajXVJpBkTD5tB4Nu9QXpgm", "_uuid": "2356cbbe-c1fd-414d-85c8-84f33e92618a", "_attachments": [], "_status": "submitted_via_web", "_geolocation": [ -1.932914, 30.078211 ], "_submission_time": "2021-12-05T14:15:44", "_tags": [], "_notes": [], "_validation_status": {}, "_submitted_by": null } ``` With reformatting enabled, and the appropriate masks for multi-select and number formatting (for example, `Crops_*` and `*_sqm` respectively), n8n parses it into: ``` { "id": 471987, "formhub": { "uuid": "189436bb09a54957bfcc798e338b54d6" }, "start": "2021-12-05T16:13:38.527+02:00", "end": "2021-12-05T16:15:33.407+02:00", "Field_Details": { "Field_Name": "Test Fields", "Field_Location": { "lat": -1.932914, "lon": 30.078211 }, "Field_Shape": { "type": "polygon", "coordinates": [ { "lat": -1.932914, "lon": 30.078211 }, { "lat": -1.933011, "lon": 30.078085 }, { "lat": -1.933257, "lon": 30.078004 }, { "lat": -1.933338, "lon": 30.078197 }, { "lat": -1.933107, "lon": 30.078299 }, { "lat": -1.932914, "lon": 30.078211 } ] }, "Crops_Grown": [ "maize", "beans", "avocado" ], "Field_Size_sqm": 2300 }, "version": "veGcULpqP6JNFKRJbbMvMs", "meta": { "instanceID": "uuid:2356cbbe-c1fd-414d-85c8-84f33e92618a" }, "xform_id_string": "ajXVJpBkTD5tB4Nu9QXpgm", "uuid": "2356cbbe-c1fd-414d-85c8-84f33e92618a", 
"attachments": [], "status": "submitted_via_web", "geolocation": { "lat": -1.932914, "lon": 30.078211 }, "submission_time": "2021-12-05T14:15:44", "tags": [], "notes": [], "validation_status": {}, "submitted_by": null } ``` ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Lemlist node Use the Lemlist node to automate work in Lemlist, and integrate Lemlist with other applications. n8n has built-in support for a wide range of Lemlist features, including getting activities, teams and campaigns, as well as creating, updating, and deleting leads. On this page, you'll find a list of operations the Lemlist node supports and links to more resources. Credentials Refer to [Lemlist credentials](../../credentials/lemlist/) for guidance on setting up authentication. 
## Operations - Activity - Get Many: Get many activities - Campaign - Get Many: Get many campaigns - Get Stats: Get campaign stats - Enrichment - Get: Fetches a previously completed enrichment - Enrich Lead: Enrich a lead using an email or LinkedIn URL - Enrich Person: Enrich a person using an email or LinkedIn URL - Lead - Create: Create a new lead - Delete: Delete an existing lead - Get: Get an existing lead - Unsubscribe: Unsubscribe an existing lead - Team - Get: Get an existing team - Get Credits: Get an existing team's credits - Unsubscribe - Add: Add an email to an unsubscribe list - Delete: Delete an email from an unsubscribe list - Get Many: Get many unsubscribed emails ## Templates and examples **Create HubSpot contacts from LinkedIn post interactions** by Pauline [View template details](https://n8n.io/workflows/1323-create-hubspot-contacts-from-linkedin-post-interactions/) **lemlist \<> GPT-3: Supercharge your sales workflows** by Lucas Perret [View template details](https://n8n.io/workflows/1838-lemlist-lessgreater-gpt-3-supercharge-your-sales-workflows/) **Classify lemlist replies using OpenAI and automate reply handling** by Lucas Perret [View template details](https://n8n.io/workflows/2287-classify-lemlist-replies-using-openai-and-automate-reply-handling/) [Browse Lemlist integration templates](https://n8n.io/integrations/lemlist/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
# Line node Deprecated: End of service LINE Notify is discontinuing service on April 1st, 2025, and this node will no longer work after that date. View LINE Notify's [end of service announcement](https://notify-bot.line.me/closing-announce) for more information. Use the Line node to automate work in Line, and integrate Line with other applications. n8n has built-in support for a wide range of Line features, including sending notifications. On this page, you'll find a list of operations the Line node supports and links to more resources. Credentials Refer to [Line credentials](../../credentials/line/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Notification - Sends notifications to users or groups ## Templates and examples **Line Message API : Push Message & Reply** by darrell_tw [View template details](https://n8n.io/workflows/2733-line-message-api-push-message-and-reply/) **Customer Support Channel and Ticketing System with Slack and Linear** by Jimleuk [View template details](https://n8n.io/workflows/2323-customer-support-channel-and-ticketing-system-with-slack-and-linear/) **Send daily weather updates via a notification in Line** by Harshil Agrawal [View template details](https://n8n.io/workflows/773-send-daily-weather-updates-via-a-notification-in-line/) [Browse Line integration templates](https://n8n.io/integrations/line/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Linear node Use the Linear node to automate work in Linear, and integrate Linear with other applications. n8n has built-in support for a wide range of Linear features, including creating, updating, deleting, and getting issues. On this page, you'll find a list of operations the Linear node supports and links to more resources. Credentials Refer to [Linear credentials](../../credentials/linear/) for guidance on setting up authentication. ## Operations - Comment - Add Comment - Issue - Add Link - Create - Delete - Get - Get Many - Update ## Templates and examples **Customer Support Channel and Ticketing System with Slack and Linear** by Jimleuk [View template details](https://n8n.io/workflows/2323-customer-support-channel-and-ticketing-system-with-slack-and-linear/) **Visual Regression Testing with Apify and AI Vision Model** by Jimleuk [View template details](https://n8n.io/workflows/2419-visual-regression-testing-with-apify-and-ai-vision-model/) **Send alert when data is created in app/database** by n8n Team [View template details](https://n8n.io/workflows/1932-send-alert-when-data-is-created-in-appdatabase/) [Browse Linear integration templates](https://n8n.io/integrations/linear/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. 
Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # LingvaNex node Use the LingvaNex node to automate work in LingvaNex, and integrate LingvaNex with other applications. n8n has built-in support for translating data with LingvaNex. On this page, you'll find a list of operations the LingvaNex node supports and links to more resources. Credentials Refer to [LingvaNex credentials](../../credentials/lingvanex/) for guidance on setting up authentication. ## Operations - Translate data ## Templates and examples **Get data from Hacker News and send to Airtable or via SMS** by isa024787bel [View template details](https://n8n.io/workflows/888-get-data-from-hacker-news-and-send-to-airtable-or-via-sms/) **Get daily poems in Telegram** by Lorena [View template details](https://n8n.io/workflows/975-get-daily-poems-in-telegram/) **Translate instructions using LingvaNex** by Harshil Agrawal [View template details](https://n8n.io/workflows/797-translate-instructions-using-lingvanex/) [Browse LingvaNex integration templates](https://n8n.io/integrations/lingvanex/), or [search all templates](https://n8n.io/workflows/) # LinkedIn node Use the LinkedIn node to automate work in LinkedIn, and integrate LinkedIn with other applications. n8n supports creating posts. On this page, you'll find a list of operations the LinkedIn node supports and links to more resources. Credentials Refer to [LinkedIn credentials](../../credentials/linkedin/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Post - Create ## Parameters - **Post As**: choose whether to post as a **Person** or **Organization**. - **Person Name or ID** and **Organization URN**: enter an identifier for the person or organization. Posting as organization If posting as an Organization, enter the organization number in the URN field. For example, `03262013`, not `urn:li:company:03262013`. - **Text**: the post contents. - **Media Category**: use this when including images or article URLs in your post. ## Templates and examples **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI-Powered Social Media Content Generator & Publisher** by Amjid Ali [View template details](https://n8n.io/workflows/2950-ai-powered-social-media-content-generator-and-publisher/) **✨🩷Automated Social Media Content Publishing Factory + System Prompt Composition** by Joseph LePage [View template details](https://n8n.io/workflows/3135-automated-social-media-content-publishing-factory-system-prompt-composition/) [Browse LinkedIn integration templates](https://n8n.io/integrations/linkedin/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LinkedIn's API documentation](https://learn.microsoft.com/en-us/linkedin/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
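If the organization identifier arrives as a full URN from another node, a small sketch of normalizing it to the bare number the **Organization URN** field expects (see the Parameters section above; the helper name is hypothetical):

```python
def organization_id(value: str) -> str:
    """Return the numeric ID expected by the Organization URN field,
    whether given "03262013" or "urn:li:company:03262013"."""
    return value.rsplit(":", 1)[-1]

print(organization_id("urn:li:company:03262013"))
print(organization_id("03262013"))
```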
# LoneScale node Use the LoneScale node to automate work in LoneScale and integrate LoneScale with other applications. n8n has built-in support for managing Lists and Items in LoneScale. On this page, you'll find a list of operations the LoneScale node supports, and links to more resources. Credentials Refer to [LoneScale credentials](../../credentials/lonescale/) for guidance on setting up authentication. ## Operations - List - Create - Item - Create ## Templates and examples [Browse LoneScale integration templates](https://n8n.io/integrations/lonescale/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LoneScale's documentation](https://help-center.lonescale.com/en/articles/6454360-lonescale-public-api) for more information about the service. n8n provides a trigger node for LoneScale. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.lonescaletrigger/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Magento 2 node Use the Magento 2 node to automate work in Magento 2, and integrate Magento 2 with other applications. n8n has built-in support for a wide range of Magento 2 features, including creating, updating, deleting, and getting customers, invoices, orders, and products. On this page, you'll find a list of operations the Magento 2 node supports and links to more resources. Credentials Refer to [Magento 2 credentials](../../credentials/magento2/) for guidance on setting up authentication. 
## Operations - Customer - Create a new customer - Delete a customer - Get a customer - Get all customers - Update a customer - Invoice - Create an invoice - Order - Cancel an order - Get an order - Get all orders - Ship an order - Product - Create a product - Delete a product - Get a product - Get all products - Update a product ## Templates and examples **Automate Your Magento 2 Weekly Sales & Performance Reports** by Kanaka Kishore Kandregula [View template details](https://n8n.io/workflows/6953-automate-your-magento-2-weekly-sales-and-performance-reports/) **Automatic Magento 2 Product & Coupon Alerts to Telegram with Duplicate Protection** by Kanaka Kishore Kandregula [View template details](https://n8n.io/workflows/5893-automatic-magento-2-product-and-coupon-alerts-to-telegram-with-duplicate-protection/) **Daily Magento 2 Customer Sync to Google Contacts & Sheets without Duplicates** by Kanaka Kishore Kandregula [View template details](https://n8n.io/workflows/6783-daily-magento-2-customer-sync-to-google-contacts-and-sheets-without-duplicates/) [Browse Magento 2 integration templates](https://n8n.io/integrations/magento-2/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Mailcheck node Use the Mailcheck node to automate work in Mailcheck, and integrate Mailcheck with other applications. n8n has built-in support for a wide range of Mailcheck features, including checking emails. 
On this page, you'll find a list of operations the Mailcheck node supports and links to more resources. Credentials Refer to [Mailcheck credentials](../../credentials/mailcheck/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Email - Check ## Templates and examples [Browse Mailcheck integration templates](https://n8n.io/integrations/mailcheck/), or [search all templates](https://n8n.io/workflows/) # Mailchimp node Use the Mailchimp node to automate work in Mailchimp, and integrate Mailchimp with other applications. n8n has built-in support for a wide range of Mailchimp features, including creating, updating, and deleting campaigns, as well as getting list groups. On this page, you'll find a list of operations the Mailchimp node supports and links to more resources. Credentials Refer to [Mailchimp credentials](../../credentials/mailchimp/) for guidance on setting up authentication. 
## Operations - Campaign - Delete a campaign - Get a campaign - Get all the campaigns - Replicate a campaign - Create a Resend to Non-Openers version of a campaign - Send a campaign - List Group - Get all groups - Member - Create a new member on a list - Delete a member from a list - Get a member on a list - Get all members on a list - Update a member on a list - Member Tag - Add tags to a list member - Remove tags from a list member ## Templates and examples **Process Shopify new orders with Zoho CRM and Harvest** by Lorena [View template details](https://n8n.io/workflows/1206-process-shopify-new-orders-with-zoho-crm-and-harvest/) **Add new contacts from HubSpot to the email list in Mailchimp** by n8n Team [View template details](https://n8n.io/workflows/1770-add-new-contacts-from-hubspot-to-the-email-list-in-mailchimp/) **Send or update new Mailchimp subscribers in HubSpot** by n8n Team [View template details](https://n8n.io/workflows/1771-send-or-update-new-mailchimp-subscribers-in-hubspot/) [Browse Mailchimp integration templates](https://n8n.io/integrations/mailchimp/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # MailerLite node Use the MailerLite node to automate work in MailerLite, and integrate MailerLite with other applications. n8n has built-in support for a wide range of MailerLite features, including creating, updating, deleting, and getting subscribers. 
On this page, you'll find a list of operations the MailerLite node supports and links to more resources. Credentials Refer to [MailerLite credentials](../../credentials/mailerlite/) for guidance on setting up authentication. ## Operations - Subscriber - Create a new subscriber - Get a subscriber - Get all subscribers - Update a subscriber ## Templates and examples **Create, update and get a subscriber using the MailerLite node** by Harshil Agrawal [View template details](https://n8n.io/workflows/751-create-update-and-get-a-subscriber-using-the-mailerlite-node/) **Receive updates when a subscriber is added to a group in MailerLite** by Harshil Agrawal [View template details](https://n8n.io/workflows/759-receive-updates-when-a-subscriber-is-added-to-a-group-in-mailerlite/) **Capture Gumroad sales, add buyer to MailerLite group, log to Google Sheets CRM** by Aitor | 1Node [View template details](https://n8n.io/workflows/3676-capture-gumroad-sales-add-buyer-to-mailerlite-group-log-to-google-sheets-crm/) [Browse MailerLite integration templates](https://n8n.io/integrations/mailerlite/), or [search all templates](https://n8n.io/workflows/) # Mailgun node Use the Mailgun node to automate work in Mailgun, and integrate Mailgun with other applications. n8n has built-in support for sending emails with Mailgun. On this page, you'll find a list of operations the Mailgun node supports and links to more resources. Credentials Refer to [Mailgun credentials](../../credentials/mailgun/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Send an email ## Templates and examples **Handle errors from a different workflow** by Jan Oberhauser [View template details](https://n8n.io/workflows/8-handle-errors-from-a-different-workflow/) **Report phishing websites to Steam and CloudFlare** by chaufnet [View template details](https://n8n.io/workflows/122-report-phishing-websites-to-steam-and-cloudflare/) **AI Agent Creates Content to Be Picked by ChatGPT, Gemini, Google** by Kritika [View template details](https://n8n.io/workflows/4652-ai-agent-creates-content-to-be-picked-by-chatgpt-gemini-google/) [Browse Mailgun integration templates](https://n8n.io/integrations/mailgun/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Mailjet node Use the Mailjet node to automate work in Mailjet, and integrate Mailjet with other applications. n8n has built-in support for a wide range of Mailjet features, including sending emails, and SMS. On this page, you'll find a list of operations the Mailjet node supports and links to more resources. Credentials Refer to [Mailjet credentials](../../credentials/mailjet/) for guidance on setting up authentication. 
## Operations - Email - Send an email - Send an email template - SMS - Send an SMS ## Templates and examples **Forward Netflix emails to multiple email addresses with GMail and Mailjet** by Manuel [View template details](https://n8n.io/workflows/2279-forward-netflix-emails-to-multiple-email-addresses-with-gmail-and-mailjet/) **Send an email using Mailjet** by amudhan [View template details](https://n8n.io/workflows/520-send-an-email-using-mailjet/) **Monitor SEO Keyword Rankings with LLaMA AI & Apify Google SERP Scraping** by Gegenfeld [View template details](https://n8n.io/workflows/4301-monitor-seo-keyword-rankings-with-llama-ai-and-apify-google-serp-scraping/) [Browse Mailjet integration templates](https://n8n.io/integrations/mailjet/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Mandrill node Use the Mandrill node to automate work in Mandrill, and integrate Mandrill with other applications. n8n supports sending messages based on templates or HTML with Mandrill. On this page, you'll find a list of operations the Mandrill node supports and links to more resources. Credentials Refer to [Mandrill credentials](../../credentials/mandrill/) for guidance on setting up authentication. ## Operations - Message - Send a message based on a template - Send a message based on HTML
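The template-based send corresponds to a call to Mandrill's `messages/send-template.json` endpoint. A hedged sketch of the JSON body involved — the helper name and all values are illustrative, and the exact fields should be checked against Mandrill's API reference:

```python
def build_send_template_payload(api_key: str, template_name: str, to_email: str) -> dict:
    """Assemble the JSON body for a Mandrill template-based send (illustrative)."""
    return {
        "key": api_key,
        "template_name": template_name,
        "template_content": [],  # merge regions; empty when using the stored template as-is
        "message": {"to": [{"email": to_email}]},
    }

payload = build_send_template_payload("KEY", "welcome", "user@example.com")
```

The node builds an equivalent body for you; a sketch like this is only relevant when calling the API directly, for example through the HTTP Request node.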
## Templates and examples [Browse Mandrill integration templates](https://n8n.io/integrations/mandrill/), or [search all templates](https://n8n.io/workflows/) # marketstack node Use the marketstack node to automate work in marketstack, and integrate marketstack with other applications. n8n has built-in support for a wide range of marketstack features, including getting exchanges, end-of-day data, and tickers. On this page, you'll find a list of operations the marketstack node supports and links to more resources. Credentials Refer to [marketstack credentials](../../credentials/marketstack/) for guidance on setting up authentication. ## Operations - End-of-Day Data - Get All - Exchange - Get - Ticker - Get ## Templates and examples **AI-Powered Financial Chart Analyzer | OpenRouter, MarketStack, macOS Shortcuts** by Udit Rawat [View template details](https://n8n.io/workflows/2970-ai-powered-financial-chart-analyzer-or-openrouter-marketstack-macos-shortcuts/) **AI agents can get end of day market data with this Marketstack Tool MCP Server** by David Ashby [View template details](https://n8n.io/workflows/5205-ai-agents-can-get-end-of-day-market-data-with-this-marketstack-tool-mcp-server/) **Detect Stock Price Anomalies & Send News Alerts with Marketstack, HackerNews & DeepL** by noda [View template details](https://n8n.io/workflows/10306-detect-stock-price-anomalies-and-send-news-alerts-with-marketstack-hackernews-and-deepl/) [Browse marketstack integration templates](https://n8n.io/integrations/marketstack/), or [search all templates](https://n8n.io/workflows/) # Matrix node Use the Matrix node to automate work in Matrix, and integrate Matrix with other applications. n8n has built-in support for a wide range of Matrix features, including getting current user's account information, sending media and messages to a room, and getting room members and messages. On this page, you'll find a list of operations the Matrix node supports and links to more resources. 
Credentials Refer to [Matrix credentials](../../credentials/matrix/) for guidance on setting up authentication. ## Operations - Account - Get current user's account information - Event - Get single event by ID - Media - Send media to a chat room - Message - Send a message to a room - Get all messages from a room - Room - Create a new chat room with defined settings - Invite a user to a room - Join a new room - Kick a user from a room - Leave a room - Room Member - Get all members ## Templates and examples **Manage room members in Matrix** by Harshil Agrawal [View template details](https://n8n.io/workflows/724-manage-room-members-in-matrix/) **Weekly Coffee Chat (Matrix Version)** by jason [View template details](https://n8n.io/workflows/854-weekly-coffee-chat-matrix-version/) **🛠️ Matrix Tool MCP Server 💪 all 11 operations** by David Ashby [View template details](https://n8n.io/workflows/5185-matrix-tool-mcp-server-all-11-operations/) [Browse Matrix integration templates](https://n8n.io/integrations/matrix/), or [search all templates](https://n8n.io/workflows/) # Mattermost node Use the Mattermost node to automate work in Mattermost, and integrate Mattermost with other applications. n8n has built-in support for a wide range of Mattermost features, including creating, deleting, and getting channels and users, as well as posting messages and adding reactions. On this page, you'll find a list of operations the Mattermost node supports and links to more resources. Credentials Refer to [Mattermost credentials](../../credentials/mattermost/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).
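For reference, posting a Mattermost message boils down to a single REST call. A minimal sketch of the request shape — `build_post_request` and all values are placeholders, with the endpoint following Mattermost's v4 REST API:

```python
def build_post_request(base_url: str, token: str, channel_id: str, message: str):
    """Return (url, headers, body) for creating a post via Mattermost's REST API."""
    url = f"{base_url.rstrip('/')}/api/v4/posts"
    headers = {"Authorization": f"Bearer {token}"}  # personal access or bot token
    body = {"channel_id": channel_id, "message": message}
    return url, headers, body

url, headers, body = build_post_request(
    "https://chat.example.com/", "TOKEN", "abc123", "deploy done"
)
```

The node issues this kind of call for you; building it by hand is only needed for operations the node doesn't expose.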
## Operations - Channel - Add a user to a channel - Create a new channel - Soft delete a channel - Get a page of members for a channel - Restore a soft-deleted channel - Search for a channel - Get statistics for a channel - Message - Soft delete a post by marking the post as deleted in the database - Post a message into a channel - Post an ephemeral message into a channel - Reaction - Add a reaction to a post - Remove a reaction from a post - Get all the reactions to one or more posts - User - Create a new user - Deactivate a user and revoke all their sessions by archiving the user object - Retrieve all users - Get a user by email - Get a user by ID - Invite user to team ## Templates and examples **Standup bot (4/4): Worker** by Jonathan [View template details](https://n8n.io/workflows/1475-standup-bot-44-worker/) **Receive a Mattermost message when a user updates their profile on Facebook** by Harshil Agrawal [View template details](https://n8n.io/workflows/785-receive-a-mattermost-message-when-a-user-updates-their-profile-on-facebook/) **Send Instagram statistics to Mattermost** by damien [View template details](https://n8n.io/workflows/812-send-instagram-statistics-to-mattermost/) [Browse Mattermost integration templates](https://n8n.io/integrations/mattermost/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Mattermost's documentation](https://api.mattermost.com/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential.
Refer to [Custom API operations](../../../custom-operations/) for more information. ## Channel ID field error If you're not the System Administrator, you might see the error **there was a problem loading the parameter options from server: "Mattermost error response: You do not have the appropriate permissions."** next to the **Channel ID** field. Ask your system administrator to grant you the `post:channel` permission. ## Find the channel ID To find the channel ID in Mattermost: 1. Select the channel from the left sidebar. 1. Select the channel name at the top. 1. Select **View Info**. # Mautic node Use the Mautic node to automate work in Mautic, and integrate Mautic with other applications. n8n has built-in support for a wide range of Mautic features, including creating, updating, deleting, and getting companies and contacts, as well as adding and removing campaign contacts. On this page, you'll find a list of operations the Mautic node supports and links to more resources. Credentials Refer to [Mautic credentials](../../credentials/mautic/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).
## Operations - Campaign Contact - Add contact to a campaign - Remove contact from a campaign - Company - Create a new company - Delete a company - Get data of a company - Get data of all companies - Update a company - Company Contact - Add contact to a company - Remove a contact from a company - Contact - Create a new contact - Delete a contact - Edit contact's points - Add or remove contacts from the do not contact list - Get data of a contact - Get data of all contacts - Send email to contact - Update a contact - Contact Segment - Add contact to a segment - Remove contact from a segment - Segment Email - Send ## Templates and examples **Validate email of new contacts in Mautic** by Jonathan [View template details](https://n8n.io/workflows/1462-validate-email-of-new-contacts-in-mautic/) **Add new customers from WooCommerce to Mautic** by Jonathan [View template details](https://n8n.io/workflows/1456-add-new-customers-from-woocommerce-to-mautic/) **Send sales data from Webhook to Mautic** by rangelstoilov [View template details](https://n8n.io/workflows/467-send-sales-data-from-webhook-to-mautic/) [Browse Mautic integration templates](https://n8n.io/integrations/mautic/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Medium node Use the Medium node to automate work in Medium, and integrate Medium with other applications.
n8n has built-in support for a wide range of Medium features, including creating posts, and getting publications. On this page, you'll find a list of operations the Medium node supports and links to more resources. Medium API no longer supported Medium has stopped supporting the Medium API. The Medium node still appears within n8n, but you won't be able to configure new API keys to authenticate with. Refer to [Medium credentials](../../credentials/medium/) for guidance on setting up existing API keys. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Post - Create a post - Publication - Get all publications ## Templates and examples **Cross-post your blog posts** by amudhan [View template details](https://n8n.io/workflows/418-cross-post-your-blog-posts/) **Posting from Wordpress to Medium** by Zacharia Kimotho [View template details](https://n8n.io/workflows/2062-posting-from-wordpress-to-medium/) **Publish a post to a publication on Medium** by Harshil Agrawal [View template details](https://n8n.io/workflows/594-publish-a-post-to-a-publication-on-medium/) [Browse Medium integration templates](https://n8n.io/integrations/medium/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. 
Refer to [Custom API operations](../../../custom-operations/) for more information. # MessageBird node Use the MessageBird node to automate work in MessageBird, and integrate MessageBird with other applications. n8n has built-in support for a wide range of MessageBird features, including sending messages and getting balances. On this page, you'll find a list of operations the MessageBird node supports and links to more resources. Credentials Refer to [MessageBird credentials](../../credentials/messagebird/) for guidance on setting up authentication. ## Operations - SMS - Send text messages (SMS) - Balance - Get the balance ## Templates and examples [Browse MessageBird integration templates](https://n8n.io/integrations/messagebird/), or [search all templates](https://n8n.io/workflows/) # Metabase node Use the Metabase node to automate work in Metabase, and integrate Metabase with other applications. n8n has built-in support for a wide range of Metabase features, including adding and getting alerts, databases, metrics, and questions. On this page, you'll find a list of operations the Metabase node supports and links to more resources. Credentials Refer to [Metabase credentials](../../credentials/metabase/) for guidance on setting up authentication. ## Operations - Alert - Get - Get All - Database - Add - Get All - Get Fields - Metric - Get - Get All - Question - Get - Get All - Result Data ## Templates and examples [Browse Metabase integration templates](https://n8n.io/integrations/metabase/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1.
Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Microsoft Dynamics CRM node Use the Microsoft Dynamics CRM node to automate work in Microsoft Dynamics CRM, and integrate Microsoft Dynamics CRM with other applications. n8n has built-in support for creating, updating, deleting, and getting Microsoft Dynamics CRM accounts. On this page, you'll find a list of operations the Microsoft Dynamics CRM node supports and links to more resources. Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. ## Operations - Account - Create - Delete - Get - Get All - Update ## Templates and examples [Browse Microsoft Dynamics CRM integration templates](https://n8n.io/integrations/microsoft-dynamics-crm/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Microsoft Entra ID node Use the Microsoft Entra ID node to automate work in Microsoft Entra ID and integrate Microsoft Entra ID with other applications. n8n has built-in support for a wide range of Microsoft Entra ID features, which includes creating, getting, updating, and deleting users and groups, as well as adding users to and removing them from groups. On this page, you'll find a list of operations the Microsoft Entra ID node supports, and links to more resources. 
Credentials You can find authentication information for this node [here](../../credentials/microsoftentra/). Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft Entra ID credentials configuration. ## Operations - **Group** - **Create**: Create a new group - **Delete**: Delete an existing group - **Get**: Retrieve data for a specific group - **Get Many**: Retrieve a list of groups - **Update**: Update a group - **User** - **Create**: Create a new user - **Delete**: Delete an existing user - **Get**: Retrieve data for a specific user - **Get Many**: Retrieve a list of users - **Update**: Update a user - **Add to Group**: Add user to a group - **Remove from Group**: Remove user from a group ## Templates and examples [Browse Microsoft Entra ID integration templates](https://n8n.io/integrations/microsoft-entra-id-azure-active-directory/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Microsoft Entra ID's documentation](https://learn.microsoft.com/en-us/graph/api/resources/identity-network-access-overview?view=graph-rest-1.0) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Common issues Here are some common errors and issues with the Microsoft Entra ID node and steps to resolve or troubleshoot them. 
### Updating the Allow External Senders and Auto Subscribe New Members options fails You can't update the **Allow External Senders** and **Auto Subscribe New Members** options directly after creating a new group. You must wait after creating a group before you can change the values of these options. When designing workflows that use multiple Microsoft Entra ID nodes to first create groups and then update these options, add a [Wait](../../core-nodes/n8n-nodes-base.wait/) node between the two operations. A Wait node configured to pause for at least two seconds allows time for the group to fully initialize. After the wait, the update operation can complete without erroring. # Microsoft Excel 365 node Use the Microsoft Excel node to automate work in Microsoft Excel, and integrate Microsoft Excel with other applications. n8n has built-in support for a wide range of Microsoft Excel features, including adding and retrieving lists of table data, and workbooks, as well as getting worksheets. On this page, you'll find a list of operations the Microsoft Excel node supports and links to more resources. Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft credentials configuration. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
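The Government Cloud Support notes above refer to Microsoft Graph's national cloud deployments, each with its own base URL. A small lookup sketch — the endpoint strings follow Microsoft's published national cloud list, but confirm the right one for your tenant:

```python
# Microsoft Graph base URLs by national cloud (per Microsoft's national cloud
# documentation; verify the correct endpoint for your tenant before relying on it).
GRAPH_BASE_URLS = {
    "global": "https://graph.microsoft.com",
    "us_gov": "https://graph.microsoft.us",
    "us_gov_dod": "https://dod-graph.microsoft.us",
    "china": "https://microsoftgraph.chinacloudapi.cn",
}

def graph_base_url(cloud: str = "global") -> str:
    """Look up the Graph base URL for a given cloud, defaulting to global."""
    return GRAPH_BASE_URLS[cloud]
```

In n8n this choice is made once, in the credential's **Microsoft Graph API Base URL** setting, rather than per request.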
## Operations - Table - Add rows to the end of the table - Retrieve a list of table columns - Retrieve a list of table rows - Look for a specific column value and return the matching row - Workbook - Add a new worksheet to the workbook - Get data of all workbooks - Worksheet - Get all worksheets - Get worksheet content ## Templates and examples **Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel** by Mihai Farcas [View template details](https://n8n.io/workflows/2275-automated-web-scraping-email-a-csv-save-to-google-sheets-and-microsoft-excel/) **Get all Excel workbooks** by amudhan [View template details](https://n8n.io/workflows/566-get-all-excel-workbooks/) **Daily Newsletter Service using Excel, Outlook and AI** by Jimleuk [View template details](https://n8n.io/workflows/3446-daily-newsletter-service-using-excel-outlook-and-ai/) [Browse Microsoft Excel 365 integration templates](https://n8n.io/integrations/microsoft-excel/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Microsoft Graph Security node Use the Microsoft Graph Security node to automate work in Microsoft Graph Security, and integrate Microsoft Graph Security with other applications. n8n has built-in support for a wide range of Microsoft Graph Security features, including getting and updating scores and profiles.
On this page, you'll find a list of operations the Microsoft Graph Security node supports and links to more resources. Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft credentials configuration. ## Operations - Secure Score - Get - Get All - Secure Score Control Profile - Get - Get All - Update ## Templates and examples [Browse Microsoft Graph Security integration templates](https://n8n.io/integrations/microsoft-graph-security/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Microsoft OneDrive node Use the Microsoft OneDrive node to automate work in Microsoft OneDrive, and integrate Microsoft OneDrive with other applications. n8n has built-in support for a wide range of Microsoft OneDrive features, including creating, updating, deleting, and getting files, and folders. On this page, you'll find a list of operations the Microsoft OneDrive node supports and links to more resources. Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. 
Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft credentials configuration. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - File - Copy a file - Delete a file - Download a file - Get a file - Rename a file - Search a file - Share a file - Upload a file up to 4MB in size - Folder - Create a folder - Delete a folder - Get Children (get items inside a folder) - Rename a folder - Search a folder - Share a folder ## Templates and examples **Hacker News to Video Content** by Alex Kim [View template details](https://n8n.io/workflows/2557-hacker-news-to-video-content/) **Working with Excel spreadsheet files (xls & xlsx)** by n8n Team [View template details](https://n8n.io/workflows/1826-working-with-excel-spreadsheet-files-xls-and-xlsx/) **📂 Automatically Update Stock Portfolio from OneDrive to Excel** by Louis [View template details](https://n8n.io/workflows/2507-automatically-update-stock-portfolio-from-onedrive-to-excel/) [Browse Microsoft OneDrive integration templates](https://n8n.io/integrations/microsoft-onedrive/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Microsoft's OneDrive API documentation](https://learn.microsoft.com/en-us/onedrive/developer/rest-api/) for more information about the service. ## Find the folder ID To perform operations on folders, you need to supply the ID. You can find this: - In the URL of the folder - By searching for it using the node. You need to do this if using MS 365 (where OneDrive uses SharePoint behind the scenes): 1. Select **Resource** > **Folder**. 1. 
Select **Operation** > **Search**. 1. In **Query**, enter the folder name. 1. Select **Execute step**. n8n runs the query and returns data about the folder, including an `id` field containing the folder ID. # Microsoft Outlook node Use the Microsoft Outlook node to automate work in Microsoft Outlook, and integrate Microsoft Outlook with other applications. n8n has built-in support for a wide range of Microsoft Outlook features, including creating, updating, deleting, and getting folders, messages, and drafts. On this page, you'll find a list of operations the Microsoft Outlook node supports and links to more resources. Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft credentials configuration. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../advanced-ai/human-in-the-loop-tools/). 
## Operations - Calendar - Create - Delete - Get - Get Many - Update - Contact - Create - Delete - Get - Get Many - Update - Draft - Create - Delete - Get - Send - Update - Event - Create - Delete - Get - Get Many - Update - Folder - Create - Delete - Get - Get Many - Update - Folder Message - Get Many - Message - Delete - Get - Get Many - Move - Reply - Send - Send and Wait for Response - Update - Message Attachment - Add - Download - Get - Get Many ## Waiting for a response By choosing the **Send and Wait for a Response** operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information. ### Response Type You can choose between the following types of waiting and approval actions: - **Approval**: Users can approve or disapprove from within the message. - **Free Text**: Users can submit a response with a form. - **Custom Form**: Users can submit a response with a custom form. You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types: - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. - **Append n8n Attribution**: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off). ### Approval response customization When using the Approval response type, you can choose whether to present only an approval button or both approval *and* disapproval buttons. You can also customize the button labels for the buttons you include. ### Free Text response customization When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label. ### Custom Form response customization When using the Custom Form response type, you build a form using the fields and options you want. 
You can customize each form element with the settings outlined in the [n8n Form trigger's form elements](../../core-nodes/n8n-nodes-base.formtrigger/#form-elements). To add more fields, select the **Add Form Element** button. You'll also be able to customize the message button label, the form title and description, and the response button label. ## Templates and examples **Create a Branded AI-Powered Website Chatbot** by Wayne Simpson [View template details](https://n8n.io/workflows/2786-create-a-branded-ai-powered-website-chatbot/) **Auto Categorise Outlook Emails with AI** by Wayne Simpson [View template details](https://n8n.io/workflows/2454-auto-categorise-outlook-emails-with-ai/) **Phishing Analysis - URLScan.io and VirusTotal** by n8n Team [View template details](https://n8n.io/workflows/1992-phishing-analysis-urlscanio-and-virustotal/) [Browse Microsoft Outlook integration templates](https://n8n.io/integrations/microsoft-outlook/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Outlook's API documentation](https://learn.microsoft.com/en-us/outlook/rest/get-started) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Microsoft SharePoint node Use the Microsoft SharePoint node to automate work in Microsoft SharePoint and integrate Microsoft SharePoint with other applications. 
n8n has built-in support for a wide range of Microsoft SharePoint features, which includes downloading, uploading, and updating files, managing items in a list, and getting lists and list items. On this page, you'll find a list of operations the Microsoft SharePoint node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/microsoft/). ## Operations - **File**: - Download: Download a file. - Update: Update a file. - Upload: Upload an existing file. - **Item**: - Create: Create an item in an existing list. - Create or Update: Create a new item, or update the current one if it already exists (upsert). - Delete: Delete an item from a list. - Get: Retrieve an item from a list. - Get Many: Get specific items in a list or list many items. - Update: Update an item in an existing list. - **List**: - Get: Retrieve details of a single list. - Get Many: Retrieve a list of lists. ## Templates and examples **Upload File to SharePoint Using Microsoft Graph API** by Greg Evseev [View template details](https://n8n.io/workflows/3690-upload-file-to-sharepoint-using-microsoft-graph-api/) **Track Top Social Media Trends with Reddit, Twitter, and GPT-4o to SP/Drive** by plemeo [View template details](https://n8n.io/workflows/6272-track-top-social-media-trends-with-reddit-twitter-and-gpt-4o-to-spdrive/) **🛠️ Microsoft SharePoint Tool MCP Server 💪 all 11 operations** by David Ashby [View template details](https://n8n.io/workflows/5178-microsoft-sharepoint-tool-mcp-server-all-11-operations/) [Browse Microsoft SharePoint integration templates](https://n8n.io/integrations/microsoft-sharepoint/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Microsoft's SharePoint documentation](https://learn.microsoft.com/en-us/sharepoint/dev/sp-add-ins/get-to-know-the-sharepoint-rest-service) for more information about the service. 
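The Item resource's Create or Update operation applies upsert semantics: update the matching item when one exists, otherwise create it. A minimal in-memory sketch of that behavior (illustrative only, not the SharePoint API):

```python
def upsert(items: dict, key: str, fields: dict) -> str:
    """Update the item stored under key when it exists, otherwise create it."""
    if key in items:
        items[key].update(fields)  # merge new field values into the existing item
        return "updated"
    items[key] = dict(fields)      # no match: create a fresh item
    return "created"

store = {}
first = upsert(store, "42", {"Title": "First draft"})  # no item yet, so this creates
second = upsert(store, "42", {"Status": "Reviewed"})   # item exists, so this updates
```

The practical benefit is idempotence: re-running a workflow with the same item doesn't produce duplicates, which is why upsert is often preferable to a plain Create in scheduled syncs.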
# Microsoft SQL node Use the Microsoft SQL node to automate work in Microsoft SQL, and integrate Microsoft SQL with other applications. n8n has built-in support for a wide range of Microsoft SQL features, including executing SQL queries and inserting rows into the database. On this page, you'll find a list of operations the Microsoft SQL node supports and links to more resources. Credentials Refer to [Microsoft SQL credentials](../../credentials/microsoftsql/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Execute an SQL query - Insert rows in database - Update rows in database - Delete rows in database ## Templates and examples **Generate Monthly Financial Reports with Gemini AI, SQL, and Outlook** by Amjid Ali [View template details](https://n8n.io/workflows/3617-generate-monthly-financial-reports-with-gemini-ai-sql-and-outlook/) **Execute an SQL query in Microsoft SQL** by tanaypant [View template details](https://n8n.io/workflows/479-execute-an-sql-query-in-microsoft-sql/) **Export SQL table into CSV file** by Eduard [View template details](https://n8n.io/workflows/1914-export-sql-table-into-csv-file/) [Browse Microsoft SQL integration templates](https://n8n.io/integrations/microsoft-sql/), or [search all templates](https://n8n.io/workflows/) # Microsoft Teams node Use the Microsoft Teams node to automate work in Microsoft Teams, and integrate Microsoft Teams with other applications. n8n has built-in support for a wide range of Microsoft Teams features, including creating and deleting channels, messages, and tasks. On this page, you'll find a list of operations the Microsoft Teams node supports and links to more resources. 
Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft credentials configuration. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../advanced-ai/human-in-the-loop-tools/). ## Operations - Channel - Create - Delete - Get - Get Many - Update - Channel Message - Create - Get Many - Chat Message - Create - Get - Get Many - Send and Wait for Response - Task - Create - Delete - Get - Get Many - Update ## Waiting for a response By choosing the **Send and Wait for a Response** operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information. ### Response Type You can choose between the following types of waiting and approval actions: - **Approval**: Users can approve or disapprove from within the message. - **Free Text**: Users can submit a response with a form. - **Custom Form**: Users can submit a response with a custom form. You can customize the waiting and response behavior depending on which response type you choose. 
You can configure these options in any of the above response types: - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. - **Append n8n Attribution**: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off). ### Approval response customization When using the Approval response type, you can choose whether to present only an approval button or both approval *and* disapproval buttons. You can also customize the button labels for the buttons you include. ### Free Text response customization When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label. ### Custom Form response customization When using the Custom Form response type, you build a form using the fields and options you want. You can customize each form element with the settings outlined in the [n8n Form trigger's form elements](../../core-nodes/n8n-nodes-base.formtrigger/#form-elements). To add more fields, select the **Add Form Element** button. You'll also be able to customize the message button label, the form title and description, and the response button label. 
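Conceptually, the send-and-wait behavior described above is a pause-until-response loop with an optional deadline. The following sketch is a generic illustration of that pattern, not the node's internal implementation; `poll_response` is a hypothetical callback standing in for the Teams message interaction:

```python
import time

def wait_for_approval(poll_response, timeout_seconds, poll_interval=0.01):
    """Pause until poll_response() yields a decision, or the wait limit expires.

    Mirrors the node in spirit: with Limit Wait Time enabled, the workflow
    resumes automatically once the limit passes, even without a response.
    """
    deadline = time.monotonic() + timeout_seconds
    while time.monotonic() < deadline:
        answer = poll_response()
        if answer in ("approved", "disapproved"):
            return answer
        time.sleep(poll_interval)
    return "timeout"

# Simulated responder: no answer on the first two polls, then an approval.
responses = iter([None, None, "approved"])
result = wait_for_approval(lambda: next(responses, "approved"), timeout_seconds=1.0)
```

In the real node, n8n handles the pausing and resuming for you; the sketch only shows why a wait limit matters when a reviewer never responds.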
## Templates and examples **Create, update and send a message to a channel in Microsoft Teams** by amudhan [View template details](https://n8n.io/workflows/680-create-update-and-send-a-message-to-a-channel-in-microsoft-teams/) **Meraki Packet Loss and Latency Alerts to Microsoft Teams** by Gavin [View template details](https://n8n.io/workflows/2054-meraki-packet-loss-and-latency-alerts-to-microsoft-teams/) **Create Teams Notifications for new Tickets in ConnectWise with Redis** by Gavin [View template details](https://n8n.io/workflows/2352-create-teams-notifications-for-new-tickets-in-connectwise-with-redis/) [Browse Microsoft Teams integration templates](https://n8n.io/integrations/microsoft-teams/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Microsoft Teams' API documentation](https://learn.microsoft.com/en-us/graph/api/overview?view=graph-rest-1.0) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Microsoft To Do node Use the Microsoft To Do node to automate work in Microsoft To Do, and integrate Microsoft To Do with other applications. n8n has built-in support for a wide range of Microsoft To Do features, including creating, updating, deleting, and getting linked resources, lists, and tasks. On this page, you'll find a list of operations the Microsoft To Do node supports and links to more resources. 
Credentials Refer to [Microsoft credentials](../../credentials/microsoft/) for guidance on setting up authentication. Government Cloud Support If you're using a government cloud tenant (US Government, US Government DOD, or China), make sure to select the appropriate **Microsoft Graph API Base URL** in your Microsoft credentials configuration. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Linked Resource - Create - Delete - Get - Get All - Update - List - Create - Delete - Get - Get All - Update - Task - Create - Delete - Get - Get All - Update ## Templates and examples **📂 Automatically Update Stock Portfolio from OneDrive to Excel** by Louis [View template details](https://n8n.io/workflows/2507-automatically-update-stock-portfolio-from-onedrive-to-excel/) **Analyze Email Headers for IP Reputation and Spoofing Detection - Outlook** by Angel Menendez [View template details](https://n8n.io/workflows/2676-analyze-email-headers-for-ip-reputation-and-spoofing-detection-outlook/) **Create, update and get a task in Microsoft To Do** by Harshil Agrawal [View template details](https://n8n.io/workflows/1114-create-update-and-get-a-task-in-microsoft-to-do/) [Browse Microsoft To Do integration templates](https://n8n.io/integrations/microsoft-to-do/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. 
Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Mindee node Use the Mindee node to automate work in Mindee, and integrate Mindee with other applications. n8n has built-in support for a wide range of Mindee features, including predicting invoices. On this page, you'll find a list of operations the Mindee node supports and links to more resources. Credentials Refer to [Mindee credentials](../../credentials/mindee/) for guidance on setting up authentication. ## Operations - **Invoice** - Predict - **Receipt** - Predict ## Templates and examples **Extract expenses from emails and add to Google Sheets** by Jonathan [View template details](https://n8n.io/workflows/1466-extract-expenses-from-emails-and-add-to-google-sheets/) **Notify on new emails with invoices in Slack** by Jonathan [View template details](https://n8n.io/workflows/1467-notify-on-new-emails-with-invoices-in-slack/) **Extract information from an image of a receipt** by Harshil Agrawal [View template details](https://n8n.io/workflows/702-extract-information-from-an-image-of-a-receipt/) [Browse Mindee integration templates](https://n8n.io/integrations/mindee/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # MISP node Use the MISP node to automate work in MISP, and integrate MISP with other applications. 
n8n has built-in support for a wide range of MISP features, including creating, updating, deleting and getting events, feeds, and organizations. On this page, you'll find a list of operations the MISP node supports and links to more resources. Credentials Refer to [MISP credentials](../../credentials/misp/) for guidance on setting up authentication. ## Operations - Attribute - Create - Delete - Get - Get All - Search - Update - Event - Create - Delete - Get - Get All - Publish - Search - Unpublish - Update - Event Tag - Add - Remove - Feed - Create - Disable - Enable - Get - Get All - Update - Galaxy - Delete - Get - Get All - Noticelist - Get - Get All - Object - Search - Organisation - Create - Delete - Get - Get All - Update - Tag - Create - Delete - Get All - Update - User - Create - Delete - Get - Get All - Update - Warninglist - Get - Get All ## Templates and examples [Browse MISP integration templates](https://n8n.io/integrations/misp/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Mistral AI node Use the Mistral AI node to automate work in Mistral AI and integrate Mistral AI with other applications. n8n has built-in support for extracting text with various models, file types, and input methods. On this page, you'll find a list of operations the Mistral AI node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/mistral/). 
## Node parameters - **Resource**: The resource that Mistral AI should operate on. The current implementation supports the "Document" resource. - **Operation**: The operation to perform: - **Extract Text**: Extracts text from a document or image using optical character recognition (OCR). - **Model**: The model to use for the given operation. The current version requires the `mistral-ocr-latest` model. - **Document Type**: The document format to process. Can be "Document" or "Image". - **Input Type**: How to input the document: - **Binary Data**: Pass the document to this node as a binary field. - **URL**: Fetch the document from a given URL. - **Input Binary Field**: When using the "Binary Data" input type, defines the name of the input binary field containing the file. - **URL**: When using the "URL" input type, the URL of the document or image to process. ## Node options - **Enable Batch Processing**: Whether to process multiple documents in the same API call. This may reduce your costs by bundling requests. - **Batch Size**: When using "Enable Batch Processing", sets the maximum number of documents to process per batch. - **Delete Files After Processing**: When using "Enable Batch Processing", whether to delete the files from Mistral Cloud after processing. 
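To make the parameters above concrete, here is a hedged sketch of the kind of request body the OCR operation sends for the "URL" input type, based on the shape documented by Mistral (verify against the current API reference before relying on it; the helper name is ours):

```python
import json

def ocr_payload(url: str, document_type: str = "Document") -> dict:
    """Assemble a request body for Mistral's OCR endpoint (POST /v1/ocr)."""
    if document_type == "Document":
        document = {"type": "document_url", "document_url": url}
    else:  # "Image" document type uses the image_url variant
        document = {"type": "image_url", "image_url": url}
    return {"model": "mistral-ocr-latest", "document": document}

# Sent to https://api.mistral.ai/v1/ocr with an Authorization: Bearer <key> header.
body = json.dumps(ocr_payload("https://example.com/invoice.pdf"))
```

With the "Binary Data" input type, the node instead reads the file from the named binary field and passes its contents, which is why only one of **Input Binary Field** and **URL** applies at a time.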
## Templates and examples **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **Build a PDF Document RAG System with Mistral OCR, Qdrant and Gemini AI** by Davide Boizza [View template details](https://n8n.io/workflows/4400-build-a-pdf-document-rag-system-with-mistral-ocr-qdrant-and-gemini-ai/) **Organise Your Local File Directories With AI** by Jimleuk [View template details](https://n8n.io/workflows/2334-organise-your-local-file-directories-with-ai/) [Browse Mistral AI integration templates](https://n8n.io/integrations/mistral-ai/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Mistral AI's documentation](https://docs.mistral.ai/api/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Mocean node Use the Mocean node to automate work in Mocean, and integrate Mocean with other applications. n8n has built-in support for a wide range of Mocean features, including sending SMS, and voice messages. On this page, you'll find a list of operations the Mocean node supports and links to more resources. Credentials Refer to [Mocean credentials](../../credentials/mocean/) for guidance on setting up authentication. 
## Operations - SMS - Send SMS/Voice message - Voice - Send SMS/Voice message ## Templates and examples [Browse Mocean integration templates](https://n8n.io/integrations/mocean/), or [search all templates](https://n8n.io/workflows/) # monday.com node Use the monday.com node to automate work in monday.com, and integrate monday.com with other applications. n8n has built-in support for a wide range of monday.com features, including creating a new board, and adding, deleting, and getting items on the board. On this page, you'll find a list of operations the monday.com node supports and links to more resources. Minimum required version This node requires n8n version 1.22.6 or above. Credentials Refer to [monday.com credentials](../../credentials/mondaycom/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Board - Archive a board - Create a new board - Get a board - Get all boards - Board Column - Create a new column - Get all columns - Board Group - Delete a group in a board - Create a group in a board - Get list of groups in a board - Board Item - Add an update to an item. 
- Change a column value for a board item - Change multiple column values for a board item - Create an item in a board's group - Delete an item - Get an item - Get all items - Get items by column value - Move item to group ## Templates and examples **Create ticket on specific customer messages in Telegram** by tanaypant [View template details](https://n8n.io/workflows/368-create-ticket-on-specific-customer-messages-in-telegram/) **Microsoft Outlook AI Email Assistant with contact support from Monday and Airtable** by Cognitive Creators [View template details](https://n8n.io/workflows/2809-microsoft-outlook-ai-email-assistant-with-contact-support-from-monday-and-airtable/) **Retrieve a Monday.com row and all data in a single node** by Joey D’Anna [View template details](https://n8n.io/workflows/2086-retrieve-a-mondaycom-row-and-all-data-in-a-single-node/) [Browse monday.com integration templates](https://n8n.io/integrations/mondaycom/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # MongoDB node Use the MongoDB node to automate work in MongoDB, and integrate MongoDB with other applications. n8n has built-in support for a wide range of MongoDB features, including aggregating, updating, finding, deleting, and getting documents as well as creating, updating, listing, and dropping search indexes. All operations in this Node make use of the [MongoDB Node driver](https://www.mongodb.com/docs/drivers/node/current/). 
On this page, you'll find a list of operations the MongoDB node supports and links to more resources. Credentials Refer to [MongoDB credentials](../../credentials/mongodb/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Document - Aggregate documents - Delete documents - Find documents - Find and replace documents - Find and update documents - Insert documents - Update documents - Search Index - Create search indexes - Drop search indexes - List search indexes - Update search indexes ## Templates and examples **Scrape and store data from multiple website pages** by Miquel Colomer [View template details](https://n8n.io/workflows/1073-scrape-and-store-data-from-multiple-website-pages/) **AI-Powered WhatsApp Chatbot for Text, Voice, Images, and PDF with RAG** by NovaNode [View template details](https://n8n.io/workflows/4827-ai-powered-whatsapp-chatbot-for-text-voice-images-and-pdf-with-rag/) **Content farming: AI-powered blog automation for WordPress** by Jay Emp0 [View template details](https://n8n.io/workflows/5230-content-farming-ai-powered-blog-automation-for-wordpress/) [Browse MongoDB integration templates](https://n8n.io/integrations/mongodb/), or [search all templates](https://n8n.io/workflows/) # Monica CRM node Use the Monica CRM node to automate work in Monica CRM, and integrate Monica CRM with other applications. n8n has built-in support for a wide range of Monica CRM features, including creating, updating, deleting, and getting activities, calls, contacts, messages, tasks, and notes. On this page, you'll find a list of operations the Monica CRM node supports and links to more resources. 
Credentials Refer to [Monica CRM credentials](../../credentials/monicacrm/) for guidance on setting up authentication. ## Operations - Activity - Create an activity - Delete an activity - Retrieve an activity - Retrieve all activities - Update an activity - Call - Create a call - Delete a call - Retrieve a call - Retrieve all calls - Update a call - Contact - Create a contact - Delete a contact - Retrieve a contact - Retrieve all contacts - Update a contact - Contact Field - Create a contact field - Delete a contact field - Retrieve a contact field - Update a contact field - Contact Tag - Add - Remove - Conversation - Create a conversation - Delete a conversation - Retrieve a conversation - Update a conversation - Conversation Message - Add a message to a conversation - Update a message in a conversation - Journal Entry - Create a journal entry - Delete a journal entry - Retrieve a journal entry - Retrieve all journal entries - Update a journal entry - Note - Create a note - Delete a note - Retrieve a note - Retrieve all notes - Update a note - Reminder - Create a reminder - Delete a reminder - Retrieve a reminder - Retrieve all reminders - Update a reminder - Tag - Create a tag - Delete a tag - Retrieve a tag - Retrieve all tags - Update a tag - Task - Create a task - Delete a task - Retrieve a task - Retrieve all tasks - Update a task ## Templates and examples [Browse Monica CRM integration templates](https://n8n.io/integrations/monica-crm/), or [search all templates](https://n8n.io/workflows/) # MQTT node Use the MQTT node to automate work in MQTT, and integrate MQTT with other applications. n8n supports transporting messages with MQTT. On this page, you'll find a list of operations the MQTT node supports and links to more resources. Credentials Refer to [MQTT credentials](../../credentials/mqtt/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations Use the MQTT node to send a message. You can set the message topic, and choose whether to send the node input data as part of the message. ## Templates and examples **IOT Button Remote / Spotify Control Integration with MQTT** by Hubschrauber [View template details](https://n8n.io/workflows/2383-iot-button-remote-spotify-control-integration-with-mqtt/) **Receive messages for a MQTT queue** by Harshil Agrawal [View template details](https://n8n.io/workflows/657-receive-messages-for-a-mqtt-queue/) **Send location updates of the ISS to a topic in MQTT** by Harshil Agrawal [View template details](https://n8n.io/workflows/1069-send-location-updates-of-the-iss-to-a-topic-in-mqtt/) [Browse MQTT integration templates](https://n8n.io/integrations/mqtt/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for MQTT. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.mqtttrigger/). Refer to [MQTT's documentation](https://mqtt.org/getting-started/) for more information about the service. # MSG91 node Use the MSG91 node to automate work in MSG91, and integrate MSG91 with other applications. n8n supports sending SMS with MSG91. On this page, you'll find a list of operations the MSG91 node supports and links to more resources. Credentials Refer to [MSG91 credentials](../../credentials/msg91/) for guidance on setting up authentication. ## Operations - SMS - Send SMS ## Templates and examples [Browse MSG91 integration templates](https://n8n.io/integrations/msg91/), or [search all templates](https://n8n.io/workflows/) ## Find your Sender ID 1. Log in to your MSG91 dashboard. 1. Select **Sender Id** in the left panel. 1. 
If you don't already have one, select **Add Sender Id +**, fill in the details, and select **Save Sender Id**. # Customer Datastore (n8n Training) node Use this node only for the n8n new user onboarding tutorial. It provides dummy data for testing purposes and has no further functionality. # Customer Messenger (n8n Training) node Use this node only for the n8n new user onboarding tutorial. It provides no further functionality. # NASA node Use the NASA node to automate work in NASA, and integrate NASA with other applications. n8n has built-in support for a wide range of NASA features, including retrieving imagery and data. On this page, you'll find a list of operations the NASA node supports and links to more resources. Credentials Refer to [NASA credentials](../../credentials/nasa/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Astronomy Picture of the Day - Get the Astronomy Picture of the Day - Asteroid Neo-Feed - Retrieve a list of asteroids based on their closest approach date to Earth - Asteroid Neo-Lookup - Look up an asteroid based on its NASA SPK-ID - Asteroid Neo-Browse - Browse the overall asteroid dataset - DONKI Coronal Mass Ejection - Retrieve DONKI coronal mass ejection data - DONKI Interplanetary Shock - Retrieve DONKI interplanetary shock data - DONKI Solar Flare - Retrieve DONKI solar flare data - DONKI Solar Energetic Particle - Retrieve DONKI solar energetic particle data - DONKI Magnetopause Crossing - Retrieve data on DONKI magnetopause crossings - DONKI Radiation Belt Enhancement - Retrieve DONKI radiation belt enhancement data - DONKI High Speed Stream - Retrieve DONKI high speed stream data - DONKI WSA+EnlilSimulation - Retrieve DONKI WSA+EnlilSimulation data - DONKI Notifications - Retrieve DONKI notifications data - Earth Imagery - Retrieve Earth imagery - Earth Assets - Retrieve Earth assets ## Templates and examples **Set credentials dynamically using expressions** by Deborah [View template details](https://n8n.io/workflows/2223-set-credentials-dynamically-using-expressions/) **Send the astronomy picture of the day daily to a Telegram channel** by Harshil Agrawal [View template details](https://n8n.io/workflows/828-send-the-astronomy-picture-of-the-day-daily-to-a-telegram-channel/) **Retrieve NASA space weather & asteroid data with GPT-4o-mini and Telegram** by Ghufran Ridhawi [View template details](https://n8n.io/workflows/3834-retrieve-nasa-space-weather-and-asteroid-data-with-gpt-4o-mini-and-telegram/) [Browse NASA integration templates](https://n8n.io/integrations/nasa/), or [search all templates](https://n8n.io/workflows/) # Netlify node Use the Netlify node to automate work in Netlify, and integrate Netlify with other applications. 
n8n has built-in support for a wide range of Netlify features, including getting and cancelling deployments, as well as deleting and getting sites. On this page, you'll find a list of operations the Netlify node supports and links to more resources. Credentials Refer to [Netlify credentials](../../credentials/netlify/) for guidance on setting up authentication. ## Operations - Deploy - Cancel a deployment - Create a new deployment - Get a deployment - Get all deployments - Site - Delete a site - Get a site - Get all sites ## Templates and examples **Deploy site when new content gets added** by Harshil Agrawal [View template details](https://n8n.io/workflows/1254-deploy-site-when-new-content-gets-added/) **Send notification when deployment fails** by Harshil Agrawal [View template details](https://n8n.io/workflows/1255-send-notification-when-deployment-fails/) **Add Netlify Form submissions to Airtable** by Harshil Agrawal [View template details](https://n8n.io/workflows/1253-add-netlify-form-submissions-to-airtable/) [Browse Netlify integration templates](https://n8n.io/integrations/netlify/), or [search all templates](https://n8n.io/workflows/) # Netscaler ADC node Use the Netscaler ADC node to automate work in Netscaler ADC, and integrate Netscaler ADC with other applications. n8n has built-in support for a wide range of Netscaler ADC features, including creating and installing certificates and files. On this page, you'll find a list of operations the Netscaler ADC node supports and links to more resources. Credentials Refer to [Netscaler ADC credentials](../../credentials/netscaleradc/) for guidance on setting up authentication. 
## Operations - Certificate - Create - Install - File - Delete - Download - Upload ## Templates and examples [Browse Netscaler ADC integration templates](https://n8n.io/integrations/netscaler-adc/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Netscaler ADC's documentation](https://docs.citrix.com/en-us/citrix-adc/current-release/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Nextcloud node Use the Nextcloud node to automate work in Nextcloud, and integrate Nextcloud with other applications. n8n has built-in support for a wide range of Nextcloud features, including creating, updating, deleting, and getting files, and folders as well as retrieving, and inviting users. On this page, you'll find a list of operations the Nextcloud node supports and links to more resources. Credentials Refer to [Nextcloud credentials](../../credentials/nextcloud/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - File - Copy a file - Delete a file - Download a file - Move a file - Share a file - Upload a file - Folder - Copy a folder - Create a folder - Delete a folder - Return the contents of a given folder - Move a folder - Share a folder - User - Invite a user to a Nextcloud organization - Delete a user - Retrieve information about a single user - Retrieve a list of users - Edit attributes related to a user ## Templates and examples **Save email attachments to Nextcloud** by Manu [View template details](https://n8n.io/workflows/1344-save-email-attachments-to-nextcloud/) **Backs up n8n Workflows to NextCloud** by dave [View template details](https://n8n.io/workflows/175-backs-up-n8n-workflows-to-nextcloud/) **Move a nextcloud folder file by file** by Nico Kowalczyk [View template details](https://n8n.io/workflows/1994-move-a-nextcloud-folder-file-by-file/) [Browse Nextcloud integration templates](https://n8n.io/integrations/nextcloud/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # NocoDB node Use the NocoDB node to automate work in NocoDB, and integrate NocoDB with other applications. n8n has built-in support for a wide range of NocoDB features, including creating, updating, deleting, and retrieving rows. On this page, you'll find a list of operations the NocoDB node supports and links to more resources. 
Credentials Refer to [NocoDB credentials](../../credentials/nocodb/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Row - Create - Delete - Get - Get Many - Update a row ## Templates and examples **Scrape and summarize posts of a news site without RSS feed using AI and save them to a NocoDB** by Askan [View template details](https://n8n.io/workflows/2180-scrape-and-summarize-posts-of-a-news-site-without-rss-feed-using-ai-and-save-them-to-a-nocodb/) **Multilanguage Telegram bot** by Eduard [View template details](https://n8n.io/workflows/1583-multilanguage-telegram-bot/) **Create LinkedIn Contributions with AI and Notify Users On Slack** by Darryn Balanco [View template details](https://n8n.io/workflows/2491-create-linkedin-contributions-with-ai-and-notify-users-on-slack/) [Browse NocoDB integration templates](https://n8n.io/integrations/nocodb/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [NocoDB's documentation](https://docs.nocodb.com/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
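As a concrete illustration of the fallback above, this sketch builds the kind of request a direct call to NocoDB's REST API might use when an operation isn't covered by the node. It only constructs the request without sending it; the base URL, table ID, and token are placeholders, and the `/api/v2/tables/{tableId}/records` path and `xc-token` header reflect NocoDB's v2 API, which may differ between NocoDB versions.

```python
# Hedged sketch: build (but don't send) a direct request to NocoDB's REST API,
# for operations the NocoDB node doesn't expose. All values are placeholders.
import urllib.request


def build_nocodb_records_request(base_url: str, table_id: str, token: str) -> urllib.request.Request:
    """Construct a GET request for a table's records (NocoDB v2-style path)."""
    url = f"{base_url}/api/v2/tables/{table_id}/records"
    # NocoDB authenticates API calls with an xc-token header.
    return urllib.request.Request(url, headers={"xc-token": token}, method="GET")


req = build_nocodb_records_request("https://nocodb.example.com", "tbl_abc123", "MY_API_TOKEN")
print(req.full_url)      # https://nocodb.example.com/api/v2/tables/tbl_abc123/records
print(req.get_method())  # GET
```

In the HTTP Request node you'd enter the same URL and let the predefined NocoDB credential supply the authentication header for you.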
# npm node Use the npm node to automate work in npm, and integrate npm with other applications. On this page, you'll find a list of operations the npm node supports and links to more resources. Credentials Refer to [npm credentials](../../credentials/npm/) for guidance on setting up authentication. ## Operations - Package - Get Package Metadata - Get Package Versions - Search for Packages - Distribution Tag - Get All Tags - Update a Tag ## Templates and examples **Automate NPM Package Installation and Updates for Self-Hosted Environments** by Joachim Brindeau [View template details](https://n8n.io/workflows/3293-automate-npm-package-installation-and-updates-for-self-hosted-environments/) **🛠️ Npm Tool MCP Server 💪 all 5 operations** by David Ashby [View template details](https://n8n.io/workflows/5341-npm-tool-mcp-server-all-5-operations/) **AI-Powered NPM Package Intelligence Agent** by Divyanshu Gupta [View template details](https://n8n.io/workflows/15410-ai-powered-npm-package-intelligence-agent/) [Browse npm integration templates](https://n8n.io/integrations/npm/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [npm's documentation](https://docs.npmjs.com/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Odoo node Use the Odoo node to automate work in Odoo, and integrate Odoo with other applications. 
n8n has built-in support for a wide range of Odoo features, including creating, updating, deleting, and getting contacts, resources, and opportunities. On this page, you'll find a list of operations the Odoo node supports and links to more resources. Credentials Refer to [Odoo credentials](../../credentials/odoo/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Contact - Create a new contact - Delete a contact - Get a contact - Get all contacts - Update a contact - Custom Resource - Create a new item - Delete an item - Get an item - Get all items - Update an item - Note - Create a new note - Delete a note - Get a note - Get all notes - Update a note - Opportunity - Create a new opportunity - Delete an opportunity - Get an opportunity - Get all opportunities - Update an opportunity ## Templates and examples **ERP AI chatbot for Odoo sales module with OpenAI** by Mihai Farcas [View template details](https://n8n.io/workflows/2325-erp-ai-chatbot-for-odoo-sales-module-with-openai/) **Summarize emails and save them as notes on sales opportunity in Odoo** by Mihai Farcas [View template details](https://n8n.io/workflows/2336-summarize-emails-and-save-them-as-notes-on-sales-opportunity-in-odoo/) **Import Odoo Product Images from Google Drive** by AArtIntelligent [View template details](https://n8n.io/workflows/3181-import-odoo-product-images-from-google-drive/) [Browse Odoo integration templates](https://n8n.io/integrations/odoo/), or [search all templates](https://n8n.io/workflows/) # Okta node Use the Okta node to automate work in Okta and integrate Okta with other applications. 
n8n has built-in support for a wide range of Okta features, including creating, updating, and deleting users. On this page, you'll find a list of operations the Okta node supports, and links to more resources. Credentials Refer to [Okta credentials](../../credentials/okta/) for guidance on setting up authentication. ## Operations - User - Create a new user - Delete an existing user - Get details of a user - Get many users - Update an existing user ## Templates and examples [Browse Okta integration templates](https://n8n.io/integrations/okta/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Okta's documentation](https://developer.okta.com/docs/guides/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # One Simple API node Use the One Simple API node to automate work in One Simple API, and integrate One Simple API with other applications. n8n has built-in support for a wide range of One Simple API features, including getting profiles, retrieving information, and generating utilities. On this page, you'll find a list of operations the One Simple API node supports and links to more resources. Credentials Refer to [One Simple API credentials](../../credentials/onesimpleapi/) for guidance on setting up authentication. 
## Operations - Information - Convert a value between currencies - Retrieve image metadata from a URL - Social Profile - Get details about an Instagram profile - Get details about a Spotify Artist - Utility - Expand a shortened URL - Generate a QR Code - Validate an email address - Website - Generate a PDF from a webpage - Get SEO information from a website - Create a screenshot from a webpage ## Templates and examples **Validate email of new contacts in Mautic** by Jonathan [View template details](https://n8n.io/workflows/1462-validate-email-of-new-contacts-in-mautic/) **Validate email of new contacts in Hubspot** by Jonathan [View template details](https://n8n.io/workflows/1461-validate-email-of-new-contacts-in-hubspot/) **🛠️ One Simple API Tool MCP Server 💪 all 10 operations** by David Ashby [View template details](https://n8n.io/workflows/5344-one-simple-api-tool-mcp-server-all-10-operations/) [Browse One Simple API integration templates](https://n8n.io/integrations/one-simple-api/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [One Simple API's documentation](https://onesimpleapi.com/docs) for more information about the service. # Onfleet node Use the Onfleet node to automate work in Onfleet, and integrate Onfleet with other applications. n8n has built-in support for a wide range of Onfleet features, including creating and deleting tasks in Onfleet, as well as retrieving organizations' details. On this page, you'll find a list of operations the Onfleet node supports and links to more resources. Credentials Refer to [Onfleet credentials](../../credentials/onfleet/) for guidance on setting up authentication. 
## Operations - Admin - Create a new Onfleet admin - Delete an Onfleet admin - Get all Onfleet admins - Update an Onfleet admin - Container - Add task at index (or append) - Get container information - Fully replace a container's tasks - Destination - Create a new destination - Get a specific destination - Hub - Create a new Onfleet hub - Get all Onfleet hubs - Update an Onfleet hub - Organization - Retrieve your own organization's details - Retrieve the details of an organization with which you are connected - Recipient - Create a new Onfleet recipient - Get a specific Onfleet recipient - Update an Onfleet recipient - Task - Create a new Onfleet task - Clone an Onfleet task - Force-complete a started Onfleet task - Delete an Onfleet task - Get all Onfleet tasks - Get a specific Onfleet task - Update an Onfleet task - Team - Automatically dispatch tasks assigned to a team to on-duty drivers - Create a new Onfleet team - Delete an Onfleet team - Get a specific Onfleet team - Get all Onfleet teams - Get estimated times for upcoming tasks for a team, returns a selected driver - Update an Onfleet team - Worker - Create a new Onfleet worker - Delete an Onfleet worker - Get a specific Onfleet worker - Get all Onfleet workers - Get a specific Onfleet worker schedule - Update an Onfleet worker ## Templates and examples **Send a Whatsapp message via Twilio when a certain Onfleet event happens** by James Li [View template details](https://n8n.io/workflows/1525-send-a-whatsapp-message-via-twilio-when-a-certain-onfleet-event-happens/) **Create a QuickBooks invoice on a new Onfleet Task creation** by James Li [View template details](https://n8n.io/workflows/1546-create-a-quickbooks-invoice-on-a-new-onfleet-task-creation/) **Send a Discord message when a certain Onfleet event happens** by James Li [View template details](https://n8n.io/workflows/1528-send-a-discord-message-when-a-certain-onfleet-event-happens/) [Browse Onfleet integration 
templates](https://n8n.io/integrations/onfleet/), or [search all templates](https://n8n.io/workflows/) # OpenThesaurus node Use the OpenThesaurus node to automate work in OpenThesaurus, and integrate OpenThesaurus with other applications. n8n supports synonym look-up for German words. On this page, you'll find a list of operations the OpenThesaurus node supports and links to more resources. Credentials The OpenThesaurus node doesn't require authentication. ## Operations - Get synonyms for a German word in German ## Templates and examples [Browse OpenThesaurus integration templates](https://n8n.io/integrations/openthesaurus/), or [search all templates](https://n8n.io/workflows/) # OpenWeatherMap node Use the OpenWeatherMap node to automate work in OpenWeatherMap, and integrate OpenWeatherMap with other applications. n8n supports retrieving current and upcoming weather data with OpenWeatherMap. On this page, you'll find a list of operations the OpenWeatherMap node supports and links to more resources. Credentials Refer to [OpenWeatherMap credentials](../../credentials/openweathermap/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Returns the current weather data - Returns the weather data for the next 5 days ## Templates and examples **Get Weather Forecast via Telegram** by tanaypant [View template details](https://n8n.io/workflows/346-get-weather-forecast-via-telegram/) **Get information about the weather for any city** by amudhan [View template details](https://n8n.io/workflows/737-get-information-about-the-weather-for-any-city/) **Receive the weather information of any city** by Harshil Agrawal [View template details](https://n8n.io/workflows/807-receive-the-weather-information-of-any-city/) [Browse OpenWeatherMap integration templates](https://n8n.io/integrations/openweathermap/), or [search all templates](https://n8n.io/workflows/) # Oura node Use the Oura node to automate work in Oura, and integrate Oura with other applications. n8n has built-in support for a wide range of Oura features, including getting profiles and summaries. On this page, you'll find a list of operations the Oura node supports and links to more resources. Credentials Refer to [Oura credentials](../../credentials/oura/) for guidance on setting up authentication. ## Operations - Profile - Get the user's personal information. - Summary - Get the user's activity summary. - Get the user's readiness summary. - Get the user's sleep summary. ## Templates and examples [Browse Oura integration templates](https://n8n.io/integrations/oura/), or [search all templates](https://n8n.io/workflows/) # Paddle node Use the Paddle node to automate work in Paddle, and integrate Paddle with other applications. n8n has built-in support for a wide range of Paddle features, including creating, updating, and getting coupons, as well as getting plans, products, and users. On this page, you'll find a list of operations the Paddle node supports and links to more resources. Credentials Refer to [Paddle credentials](../../credentials/paddle/) for guidance on setting up authentication. ## Operations - Coupon - Create a coupon. 
- Get all coupons. - Update a coupon. - Payment - Get all payments. - Reschedule a payment. - Plan - Get a plan. - Get all plans. - Product - Get all products. - User - Get all users. ## Templates and examples [Browse Paddle integration templates](https://n8n.io/integrations/paddle/), or [search all templates](https://n8n.io/workflows/) # PagerDuty node Use the PagerDuty node to automate work in PagerDuty, and integrate PagerDuty with other applications. n8n has built-in support for a wide range of PagerDuty features, including creating incident notes, updating incidents, and getting log entries and users. On this page, you'll find a list of operations the PagerDuty node supports and links to more resources. Credentials Refer to [PagerDuty credentials](../../credentials/pagerduty/) for guidance on setting up authentication. ## Operations - Incident - Create an incident - Get an incident - Get all incidents - Update an incident - Incident Note - Create an incident note - Get all notes for an incident - Log Entry - Get a log entry - Get all log entries - User - Get a user ## Templates and examples **Manage custom incident response in PagerDuty and Jira** by tanaypant [View template details](https://n8n.io/workflows/353-manage-custom-incident-response-in-pagerduty-and-jira/) **Incident Response Workflow - Part 3** by tanaypant [View template details](https://n8n.io/workflows/355-incident-response-workflow-part-3/) **Incident Response Workflow - Part 2** by tanaypant [View template details](https://n8n.io/workflows/354-incident-response-workflow-part-2/) [Browse PagerDuty integration templates](https://n8n.io/integrations/pagerduty/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # PayPal node Use the PayPal node to automate work in PayPal, and integrate PayPal with other applications. n8n has built-in support for a wide range of PayPal features, including creating a batch payout and canceling unclaimed payout items. On this page, you'll find a list of operations the PayPal node supports and links to more resources. Credentials Refer to [PayPal credentials](../../credentials/paypal/) for guidance on setting up authentication. ## Operations - Payout - Create a batch payout - Show batch payout details - Payout Item - Cancel an unclaimed payout item - Show payout item details ## Templates and examples **Create a PayPal batch payout** by ivov [View template details](https://n8n.io/workflows/438-create-a-paypal-batch-payout/) **Receive updates when a billing plan is activated in PayPal** by Harshil Agrawal [View template details](https://n8n.io/workflows/653-receive-updates-when-a-billing-plan-is-activated-in-paypal/) **Automate Digital Delivery After PayPal Purchase Using n8n** by Amjid Ali [View template details](https://n8n.io/workflows/3697-automate-digital-delivery-after-paypal-purchase-using-n8n/) [Browse PayPal integration templates](https://n8n.io/integrations/paypal/), or [search all templates](https://n8n.io/workflows/) # Peekalink node Use the Peekalink node to automate work in Peekalink, and integrate Peekalink with other applications. n8n supports checking and reviewing links with Peekalink. On this page, you'll find a list of operations the Peekalink node supports and links to more resources. Credentials Refer to [Peekalink credentials](../../credentials/peekalink/) for guidance on setting up authentication. 
## Operations - Check whether preview for a given link is available - Return the preview for a link ## Templates and examples [Browse Peekalink integration templates](https://n8n.io/integrations/peekalink/), or [search all templates](https://n8n.io/workflows/) # PhantomBuster node Use the PhantomBuster node to automate work in PhantomBuster, and integrate PhantomBuster with other applications. n8n has built-in support for a wide range of PhantomBuster features, including adding, deleting, and getting agents. On this page, you'll find a list of operations the PhantomBuster node supports and links to more resources. Credentials Refer to [PhantomBuster credentials](../../credentials/phantombuster/) for guidance on setting up authentication. ## Operations - Agent - Delete an agent by ID. - Get an agent by ID. - Get all agents of the current user's organization. - Get the output of the most recent container of an agent. - Add an agent to the launch queue. ## Templates and examples **Create HubSpot contacts from LinkedIn post interactions** by Pauline [View template details](https://n8n.io/workflows/1323-create-hubspot-contacts-from-linkedin-post-interactions/) **Store the output of a phantom in Airtable** by Harshil Agrawal [View template details](https://n8n.io/workflows/882-store-the-output-of-a-phantom-in-airtable/) **Personalized LinkedIn Connection Requests with Apollo, GPT-4, Apify & PhantomBuster** by Nick Saraev [View template details](https://n8n.io/workflows/4803-personalized-linkedin-connection-requests-with-apollo-gpt-4-apify-and-phantombuster/) [Browse PhantomBuster integration templates](https://n8n.io/integrations/phantombuster/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Philips Hue node Use the Philips Hue node to automate work in Philips Hue, and integrate Philips Hue with other applications. n8n has built-in support for a wide range of Philips Hue features, including deleting, retrieving, and updating lights. On this page, you'll find a list of operations the Philips Hue node supports and links to more resources. Credentials Refer to [Philips Hue credentials](../../credentials/philipshue/) for guidance on setting up authentication. ## Operations - Light - Delete a light - Retrieve a light - Retrieve all lights - Update a light ## Templates and examples **Turn on a light and set its brightness** by Harshil Agrawal [View template details](https://n8n.io/workflows/666-turn-on-a-light-and-set-its-brightness/) **Google Calendar to Slack Status and Philips Hue** by TheUnknownEntity [View template details](https://n8n.io/workflows/1420-google-calendar-to-slack-status-and-philips-hue/) **🛠️ Philips Hue Tool MCP Server 💪 all 4 operations** by David Ashby [View template details](https://n8n.io/workflows/5095-philips-hue-tool-mcp-server-all-4-operations/) [Browse Philips Hue integration templates](https://n8n.io/integrations/philips-hue/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 
1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Pipedrive node Use the Pipedrive node to automate work in Pipedrive, and integrate Pipedrive with other applications. n8n has built-in support for a wide range of Pipedrive features, including creating, updating, deleting, and getting activity, files, notes, organizations, and leads. On this page, you'll find a list of operations the Pipedrive node supports and links to more resources. Credentials Refer to [Pipedrive credentials](../../credentials/pipedrive/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Activity - Create an activity - Delete an activity - Get data of an activity - Get data of all activities - Update an activity - Deal - Create a deal - Delete a deal - Duplicate a deal - Get data of a deal - Get data of all deals - Search a deal - Update a deal - Deal Activity - Get all activities of a deal - Deal Product - Add a product to a deal - Get all products in a deal - Remove a product from a deal - Update a product in a deal - File - Create a file - Delete a file - Download a file - Get data of a file - Lead - Create a lead - Delete a lead - Get data of a lead - Get data of all leads - Update a lead - Note - Create a note - Delete a note - Get data of a note - Get data of all notes - Update a note - Organization - Create an organization - Delete an organization - Get data of an organization - Get data of all organizations - Update an organization - Search organizations - Person - Create a person - Delete a person - Get data of a person - Get data of all persons - Search all persons - Update a person - Product - Get 
data of all products ## Templates and examples **Two way sync Pipedrive and MySQL** by n8n Team [View template details](https://n8n.io/workflows/1822-two-way-sync-pipedrive-and-mysql/) **Upload leads from a CSV file to Pipedrive CRM** by n8n Team [View template details](https://n8n.io/workflows/1787-upload-leads-from-a-csv-file-to-pipedrive-crm/) **Enrich new leads in Pipedrive and send an alert to Slack for high-quality ones** by Niklas Hatje [View template details](https://n8n.io/workflows/2135-enrich-new-leads-in-pipedrive-and-send-an-alert-to-slack-for-high-quality-ones/) [Browse Pipedrive integration templates](https://n8n.io/integrations/pipedrive/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Plivo node Use the Plivo node to automate work in Plivo, and integrate Plivo with other applications. n8n has built-in support for a wide range of Plivo features, including making calls and sending SMS/MMS. On this page, you'll find a list of operations the Plivo node supports and links to more resources. Credentials Refer to [Plivo credentials](../../credentials/plivo/) for guidance on setting up authentication. ## Operations - Call - Make a voice call - MMS - Send an MMS message (US/Canada only) - SMS - Send an SMS message 
## Templates and examples **Send daily weather updates to a phone number via Plivo** by Harshil Agrawal [View template details](https://n8n.io/workflows/1005-send-daily-weather-updates-to-a-phone-number-via-plivo/) **Create and Join Call Sessions with Plivo and UltraVox AI Voice Assistant** by Yohita [View template details](https://n8n.io/workflows/3392-create-and-join-call-sessions-with-plivo-and-ultravox-ai-voice-assistant/) **🛠️ Plivo Tool MCP Server 💪 all 3 operations** by David Ashby [View template details](https://n8n.io/workflows/5094-plivo-tool-mcp-server-all-3-operations/) [Browse Plivo integration templates](https://n8n.io/integrations/plivo/), or [search all templates](https://n8n.io/workflows/) # PostBin node PostBin is a service that helps you test API clients and webhooks. Use the PostBin node to automate work in PostBin, and integrate PostBin with other applications. n8n has built-in support for a wide range of PostBin features, including creating and deleting bins, and getting and sending requests. On this page, you'll find a list of operations the PostBin node supports, and links to more resources. ## Operations - Bin - Create - Get - Delete - Request - Get - Remove First - Send ## Templates and examples [Browse PostBin integration templates](https://n8n.io/integrations/postbin/), or [search all templates](https://n8n.io/workflows/) ## Send requests To send requests to a PostBin bin: 1. Go to [PostBin](https://www.toptal.com/developers/postbin/) and follow the steps to generate a new bin. PostBin gives you a unique URL, including a bin ID. 1. In the PostBin node, select the **Request** resource. 1. Choose the type of **Operation** you want to perform. 1. Enter your bin ID in **Bin ID**. ## Create and manage bins You can create and manage PostBin bins using the PostBin node. 1. In **Resource**, select **Bin**. 1. Choose an **Operation**. You can create, delete, or get a bin. 
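The "Send requests" steps above can also be mirrored outside n8n, which is handy for checking what lands in a bin. The sketch below only constructs a JSON POST to a bin's capture URL without sending it; the bin ID is a placeholder for the one PostBin generates when you create a bin.

```python
# Hedged sketch: construct (but don't send) the kind of JSON POST that lands
# in a PostBin bin. The bin ID below is a placeholder.
import json
import urllib.request


def build_bin_post(bin_id: str, payload: dict) -> urllib.request.Request:
    """Construct a JSON POST to a PostBin bin's capture URL."""
    url = f"https://www.toptal.com/developers/postbin/{bin_id}"
    body = json.dumps(payload).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}, method="POST"
    )


req = build_bin_post("1699999999999-0000000000000", {"event": "order.created"})
print(req.full_url)
print(req.get_method())  # POST
```

Sending the built request (for example with `urllib.request.urlopen`) makes the payload appear in the bin, where you can inspect it or fetch it back with the node's **Request** > **Get** operation.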
# PostHog node Use the PostHog node to automate work in PostHog, and integrate PostHog with other applications. n8n has built-in support for a wide range of PostHog features, including creating aliases, events, and identities, as well as tracking pages. On this page, you'll find a list of operations the PostHog node supports and links to more resources. Credentials Refer to [PostHog credentials](../../credentials/posthog/) for guidance on setting up authentication. ## Operations - Alias - Create an alias - Event - Create an event - Identity - Create - Track - Track a page - Track a screen ## Templates and examples [Browse PostHog integration templates](https://n8n.io/integrations/posthog/), or [search all templates](https://n8n.io/workflows/) # ProfitWell node Use the ProfitWell node to automate work in ProfitWell, and integrate ProfitWell with other applications. n8n supports getting your company's account settings and retrieving financial metrics from ProfitWell. On this page, you'll find a list of operations the ProfitWell node supports and links to more resources. Credentials Refer to [ProfitWell credentials](../../credentials/profitwell/) for guidance on setting up authentication. ## Operations - Company - Get your company's ProfitWell account settings - Metric - Retrieve financial metrics broken down by day for either the current month or the last month ## Templates and examples [Browse ProfitWell integration templates](https://n8n.io/integrations/profitwell/), or [search all templates](https://n8n.io/workflows/) # Pushbullet node Use the Pushbullet node to automate work in Pushbullet, and integrate Pushbullet with other applications. n8n has built-in support for a wide range of Pushbullet features, including creating, updating, deleting, and getting a push. On this page, you'll find a list of operations the Pushbullet node supports and links to more resources. 
Credentials Refer to [Pushbullet credentials](../../credentials/pushbullet/) for guidance on setting up authentication. ## Operations - Push - Create a push - Delete a push - Get all pushes - Update a push ## Templates and examples [Browse Pushbullet integration templates](https://n8n.io/integrations/pushbullet/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Pushcut node Use the Pushcut node to automate work in Pushcut, and integrate Pushcut with other applications. n8n supports sending notifications with Pushcut. On this page, you'll find a list of operations the Pushcut node supports and links to more resources. Credentials Refer to [Pushcut credentials](../../credentials/pushcut/) for guidance on setting up authentication. ## Operations - Notification - Send a notification ## Templates and examples [Browse Pushcut integration templates](https://n8n.io/integrations/pushcut/), or [search all templates](https://n8n.io/workflows/) # Pushover node Use the Pushover node to automate work in Pushover, and integrate Pushover with other applications. n8n supports sending push notifications with Pushover. On this page, you'll find a list of operations the Pushover node supports and links to more resources. Credentials Refer to [Pushover credentials](../../credentials/pushover/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Message - Push ## Templates and examples **Weekly reminder on your notion tasks with a deadline** by David [View template details](https://n8n.io/workflows/2409-weekly-reminder-on-your-notion-tasks-with-a-deadline/) **Send daily weather updates via push notification** by Harshil Agrawal [View template details](https://n8n.io/workflows/740-send-daily-weather-updates-via-push-notification/) **Error Handling System with PostgreSQL Logging and Rate-Limited Notifications** by Davi Saranszky Mesquita [View template details](https://n8n.io/workflows/3882-error-handling-system-with-postgresql-logging-and-rate-limited-notifications/) [Browse Pushover integration templates](https://n8n.io/integrations/pushover/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # QuestDB node Use the QuestDB node to automate work in QuestDB, and integrate QuestDB with other applications. n8n supports executing an SQL query and inserting rows in a database with QuestDB. On this page, you'll find a list of operations the QuestDB node supports and links to more resources. Credentials Refer to [QuestDB credentials](../../credentials/questdb/) for guidance on setting up authentication. ## Operations - Executes a SQL query. 
- Insert rows in a database. ## Templates and examples [Browse QuestDB integration templates](https://n8n.io/integrations/questdb/), or [search all templates](https://n8n.io/workflows/) ## Node reference ### Specify a column's data type To specify a column's data type, append the column name with `:type`, where `type` is the data type you want for the column. For example, if you want to specify the type `int` for the column **id** and type `text` for the column **name**, you can use the following snippet in the **Columns** field: `id:int,name:text`. # Quick Base node Use the Quick Base node to automate work in Quick Base, and integrate Quick Base with other applications. n8n has built-in support for a wide range of Quick Base features, including creating, updating, deleting, and getting records, as well as getting fields and downloading files. On this page, you'll find a list of operations the Quick Base node supports and links to more resources. Credentials Refer to [Quick Base credentials](../../credentials/quickbase/) for guidance on setting up authentication. ## Operations - Field - Get all fields - File - Delete a file - Download a file - Record - Create a record - Delete a record - Get all records - Update a record - Upsert a record - Report - Get a report - Run a report ## Templates and examples [Browse Quick Base integration templates](https://n8n.io/integrations/quick-base/), or [search all templates](https://n8n.io/workflows/) # QuickBooks Online node Use the QuickBooks node to automate work in QuickBooks, and integrate QuickBooks with other applications. n8n has built-in support for a wide range of QuickBooks features, including creating, updating, deleting, and getting bills, customers, employees, estimates, and invoices. On this page, you'll find a list of operations the QuickBooks node supports and links to more resources. Credentials Refer to [QuickBooks credentials](../../credentials/quickbooks/) for guidance on setting up authentication. 
This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Bill - Create - Delete - Get - Get All - Update - Customer - Create - Get - Get All - Update - Employee - Create - Get - Get All - Update - Estimate - Create - Delete - Get - Get All - Send - Update - Invoice - Create - Delete - Get - Get All - Send - Update - Void - Item - Get - Get All - Payment - Create - Delete - Get - Get All - Send - Update - Void - Purchase - Get - Get All - Transaction - Get Report - Vendor - Create - Get - Get All - Update ## Templates and examples **Create a customer and send the invoice automatically** by Harshil Agrawal [View template details](https://n8n.io/workflows/949-create-a-customer-and-send-the-invoice-automatically/) **Create QuickBooks Online Customers With Sales Receipts For New Stripe Payments** by Artur [View template details](https://n8n.io/workflows/2807-create-quickbooks-online-customers-with-sales-receipts-for-new-stripe-payments/) **Full-cycle invoice automation: Airtable, QuickBooks & Stripe** by Intuz [View template details](https://n8n.io/workflows/7291-full-cycle-invoice-automation-airtable-quickbooks-and-stripe/) [Browse QuickBooks Online integration templates](https://n8n.io/integrations/quickbooks-online/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. 
Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # QuickChart node Use the QuickChart node to automate work in QuickChart, and integrate QuickChart with other applications. n8n has built-in support for a wide range of QuickChart chart types, including bar, doughnut, line, pie, and polar charts. On this page, you'll find a list of operations the QuickChart node supports and links to more resources. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations Create a chart by selecting the chart type: - Chart Type - Bar Chart - Doughnut Chart - Line Chart - Pie Chart - Polar Chart ## Templates and examples **AI Agent with charts capabilities using OpenAI Structured Output and Quickchart** by Agent Studio [View template details](https://n8n.io/workflows/2400-ai-agent-with-charts-capabilities-using-openai-structured-output-and-quickchart/) **Visualize your SQL Agent queries with OpenAI and Quickchart.io** by Agent Studio [View template details](https://n8n.io/workflows/2559-visualize-your-sql-agent-queries-with-openai-and-quickchartio/) **✨📊Multi-AI Agent Chatbot for Postgres/Supabase DB and QuickCharts + Tool Router** by Joseph LePage [View template details](https://n8n.io/workflows/3090-multi-ai-agent-chatbot-for-postgressupabase-db-and-quickcharts-tool-router/) [Browse QuickChart integration templates](https://n8n.io/integrations/quickchart/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [QuickChart's API documentation](https://quickchart.io/documentation/) for more information about the service. 
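Under the hood, QuickChart renders chart images from a Chart.js-style configuration passed in the URL of a single GET request. As a rough illustration of what the node assembles for you, here's a minimal Python sketch that builds such a URL (the chart config values are made up for this example, and `quickchart_url` is a hypothetical helper, not part of n8n):

```python
import json
from urllib.parse import urlencode

def quickchart_url(chart_config: dict) -> str:
    """Build a QuickChart image URL from a Chart.js-style config dict."""
    return "https://quickchart.io/chart?" + urlencode({"c": json.dumps(chart_config)})

# Hypothetical bar chart config, mirroring the node's Bar Chart type
config = {
    "type": "bar",
    "data": {
        "labels": ["Q1", "Q2", "Q3"],
        "datasets": [{"label": "Signups", "data": [120, 95, 143]}],
    },
}
print(quickchart_url(config))
```

Opening the printed URL in a browser returns the rendered chart image; the QuickChart node takes care of building the request and handling the binary response for you.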
# RabbitMQ node Use the RabbitMQ node to automate work in RabbitMQ, and integrate RabbitMQ with other applications. n8n has built-in support for a wide range of RabbitMQ features, including accepting and forwarding messages. On this page, you'll find a list of operations the RabbitMQ node supports and links to more resources. Credentials Refer to [RabbitMQ credentials](../../credentials/rabbitmq/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Delete From Queue - Send a Message to RabbitMQ ## Templates and examples **Receive messages from a queue via RabbitMQ and send an SMS** by Harshil Agrawal [View template details](https://n8n.io/workflows/845-receive-messages-from-a-queue-via-rabbitmq-and-send-an-sms/) **Send updates about the position of the ISS every minute to a topic in RabbitMQ** by Harshil Agrawal [View template details](https://n8n.io/workflows/844-send-updates-about-the-position-of-the-iss-every-minute-to-a-topic-in-rabbitmq/) **Route client onboarding webhook events through RabbitMQ to Gmail alerts** by Venkata V [View template details](https://n8n.io/workflows/15316-route-client-onboarding-webhook-events-through-rabbitmq-to-gmail-alerts/) [Browse RabbitMQ integration templates](https://n8n.io/integrations/rabbitmq/), or [search all templates](https://n8n.io/workflows/) # Raindrop node Use the Raindrop node to automate work in Raindrop, and integrate Raindrop with other applications. n8n has built-in support for a wide range of Raindrop features, including getting users, deleting tags, and creating, updating, deleting, and getting collections and bookmarks. 
On this page, you'll find a list of operations the Raindrop node supports and links to more resources. Credentials Refer to [Raindrop credentials](../../credentials/raindrop/) for guidance on setting up authentication. ## Operations - Bookmark - Create - Delete - Get - Get All - Update - Collection - Create - Delete - Get - Get All - Update - Tag - Delete - Get All - User - Get ## Templates and examples **Fetch a YouTube playlist and send new items Raindrop** by Alejandro AR [View template details](https://n8n.io/workflows/1217-fetch-a-youtube-playlist-and-send-new-items-raindrop/) **Create a collection and create, update, and get a bookmark in Raindrop** by Harshil Agrawal [View template details](https://n8n.io/workflows/959-create-a-collection-and-create-update-and-get-a-bookmark-in-raindrop/) **Save Mastodon Bookmarks to Raindrop Automatically** by Aymeric Besset [View template details](https://n8n.io/workflows/4800-save-mastodon-bookmarks-to-raindrop-automatically/) [Browse Raindrop integration templates](https://n8n.io/integrations/raindrop/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Reddit node Use the Reddit node to automate work in Reddit, and integrate Reddit with other applications. n8n has built-in support for a wide range of Reddit features, including getting profiles and users, retrieving post comments and subreddit information, as well as submitting, getting, and deleting posts. 
On this page, you'll find a list of operations the Reddit node supports and links to more resources. Credentials Refer to [Reddit credentials](../../credentials/reddit/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Post - Submit a post to a subreddit - Delete a post from a subreddit - Get a post from a subreddit - Get all posts from a subreddit - Search posts in a subreddit or in all of Reddit - Post Comment - Create a top-level comment in a post - Retrieve all comments in a post - Remove a comment from a post - Write a reply to a comment in a post - Profile - Get - Subreddit - Retrieve background information about a subreddit - Retrieve information about subreddits from all of Reddit - User - Get ## Templates and examples **Analyze Reddit Posts with AI to Identify Business Opportunities** by Alex Huang [View template details](https://n8n.io/workflows/2978-analyze-reddit-posts-with-ai-to-identify-business-opportunities/) **Extract Trends, Auto-Generate Social Content with AI, Reddit, Google & Post** by Immanuel [View template details](https://n8n.io/workflows/3560-extract-trends-auto-generate-social-content-with-ai-reddit-google-and-post/) **Reddit AI digest** by n8n Team [View template details](https://n8n.io/workflows/1895-reddit-ai-digest/) [Browse Reddit integration templates](https://n8n.io/integrations/reddit/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. 
You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Redis node Use the Redis node to automate work in Redis, and integrate Redis with other applications. n8n has built-in support for a wide range of Redis features, including deleting keys, getting key values, setting key values, and publishing messages to Redis channels. On this page, you'll find a list of operations the Redis node supports and links to more resources. Credentials Refer to [Redis credentials](../../credentials/redis/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Delete a key from Redis - Get the value of a key from Redis - Get generic information about the Redis instance - Atomically increment a key by 1, creating the key if it doesn't exist - Get all keys matching a pattern - Set the value of a key in Redis - Publish a message to a Redis channel 
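The "keys matching a pattern" operation uses Redis glob-style patterns (`*`, `?`, `[...]`). As a rough local illustration (Python's `fnmatch` is close to, but not identical to, Redis's matcher, and the key names here are hypothetical), this is how a pattern like `user:*` selects keys:

```python
from fnmatch import fnmatchcase

# Hypothetical key names; in Redis, the server performs this matching itself.
keys = ["user:1", "user:2", "session:9", "user:profile:3"]
matches = [k for k in keys if fnmatchcase(k, "user:*")]
print(matches)  # ['user:1', 'user:2', 'user:profile:3']
```

Note that `*` also matches separator characters like `:`, so a broad pattern can return more keys than you intended.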
## Templates and examples **Build your own N8N Workflows MCP Server** by Jimleuk [View template details](https://n8n.io/workflows/3770-build-your-own-n8n-workflows-mcp-server/) **Conversational Interviews with AI Agents and n8n Forms** by Jimleuk [View template details](https://n8n.io/workflows/2566-conversational-interviews-with-ai-agents-and-n8n-forms/) **Advanced Telegram Bot, Ticketing System, LiveChat, User Management, Broadcasting** by Nskha [View template details](https://n8n.io/workflows/2045-advanced-telegram-bot-ticketing-system-livechat-user-management-broadcasting/) [Browse Redis integration templates](https://n8n.io/integrations/redis/), or [search all templates](https://n8n.io/workflows/) # Rocket.Chat node Use the Rocket.Chat node to automate work in Rocket.Chat, and integrate Rocket.Chat with other applications. n8n supports posting messages to channels and sending direct messages with Rocket.Chat. On this page, you'll find a list of operations the Rocket.Chat node supports and links to more resources. Credentials Refer to [Rocket.Chat credentials](../../credentials/rocketchat/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Chat - Post a message to a channel or a direct message ## Templates and examples **Post latest Twitter mentions to Slack** by Nisarag [View template details](https://n8n.io/workflows/617-post-latest-twitter-mentions-to-slack/) **Post a message to a channel in RocketChat** by tanaypant [View template details](https://n8n.io/workflows/462-post-a-message-to-a-channel-in-rocketchat/) **Render custom text over images** by tanaypant [View template details](https://n8n.io/workflows/365-render-custom-text-over-images/) [Browse Rocket.Chat integration templates](https://n8n.io/integrations/rocketchat/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Rundeck node Use the Rundeck node to automate work in Rundeck, and integrate Rundeck with other applications. n8n has built-in support for executing jobs and getting metadata. On this page, you'll find a list of operations the Rundeck node supports and links to more resources. Credentials Refer to [Rundeck credentials](../../credentials/rundeck/) for guidance on setting up authentication. 
## Operations - **Job** - Execute a job - Get metadata of a job ## Templates and examples [Browse Rundeck integration templates](https://n8n.io/integrations/rundeck/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Find the job ID 1. Access your Rundeck dashboard. 1. Open the project that contains the job you want to use with n8n. 1. In the sidebar, select **JOBS**. 1. Under **All Jobs**, select the name of the job you want to use with n8n. 1. In the top left corner, copy the string displayed in smaller font below the job name. This is your job ID. 1. Paste this job ID in the **Job Id** field in n8n. # S3 node Use the S3 node to automate work in non-AWS S3 storage and integrate S3 with other applications. n8n has built-in support for a wide range of S3 features, including creating, deleting, and getting buckets, files, and folders. For AWS S3, use [AWS S3](../n8n-nodes-base.awss3/). Use the S3 node for non-AWS S3 solutions like: - [MinIO](https://min.io/) - [Wasabi](https://wasabi.com/) - [DigitalOcean Spaces](https://www.digitalocean.com/products/spaces) On this page, you'll find a list of operations the S3 node supports and links to more resources. Credentials Refer to [S3 credentials](../../credentials/s3/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Bucket - Create a bucket - Delete a bucket - Get all buckets - Search within a bucket - File - Copy a file - Delete a file - Download a file - Get all files - Upload a file Attach file for upload To attach a file for upload, use another node to pass the file as a data property. Nodes like the [Read/Write Files from Disk](../../core-nodes/n8n-nodes-base.readwritefile/) node or the [HTTP Request](../../core-nodes/n8n-nodes-base.httprequest/) node work well. - Folder - Create a folder - Delete a folder - Get all folders ## Templates and examples **Flux AI Image Generator** by Max Tkacz [View template details](https://n8n.io/workflows/2417-flux-ai-image-generator/) **Hacker News to Video Content** by Alex Kim [View template details](https://n8n.io/workflows/2557-hacker-news-to-video-content/) **Transcribe audio files from Cloud Storage** by Lorena [View template details](https://n8n.io/workflows/1394-transcribe-audio-files-from-cloud-storage/) [Browse S3 integration templates](https://n8n.io/integrations/s3/), or [search all templates](https://n8n.io/workflows/) ## Node reference ### Setting file permissions in Wasabi When uploading files to [Wasabi](https://wasabi.com/), you must set permissions for the files using the **ACL** dropdown and not the toggles. # Salesforce node Use the Salesforce node to automate work in Salesforce, and integrate Salesforce with other applications. n8n has built-in support for a wide range of Salesforce features, including creating, updating, deleting, and getting accounts, attachments, cases, and leads, as well as uploading documents. On this page, you'll find a list of operations the Salesforce node supports and links to more resources. 
Credentials Refer to [Salesforce credentials](../../credentials/salesforce/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Account - Add note to an account - Create an account - Create a new account, or update the current one if it already exists (upsert) - Get an account - Get all accounts - Returns an overview of account's metadata - Delete an account - Update an account - Attachment - Create an attachment - Delete an attachment - Get an attachment - Get all attachments - Returns an overview of attachment's metadata - Update an attachment - Case - Add a comment to a case - Create a case - Get a case - Get all cases - Returns an overview of case's metadata - Delete a case - Update a case - Contact - Add contact to a campaign - Add note to a contact - Create a contact - Create a new contact, or update the current one if it already exists (upsert) - Delete a contact - Get a contact - Returns an overview of contact's metadata - Get all contacts - Update a contact - Custom Object - Create a custom object record - Create a new record, or update the current one if it already exists (upsert) - Get a custom object record - Get all custom object records - Delete a custom object record - Update a custom object record - Document - Upload a document - Flow - Get all flows - Invoke a flow - Lead - Add lead to a campaign - Add note to a lead - Create a lead - Create a new lead, or update the current one if it already exists (upsert) - Delete a lead - Get a lead - Get all leads - Returns an overview of lead's metadata - Update a lead - Opportunity - Add note to an opportunity - Create an opportunity - Create a new opportunity, or update the current one if 
it already exists (upsert) - Delete an opportunity - Get an opportunity - Get all opportunities - Returns an overview of opportunity's metadata - Update an opportunity - Search - Execute a SOQL query that returns all the results in a single response - Task - Create a task - Delete a task - Get a task - Get all tasks - Returns an overview of task's metadata - Update a task - User - Get a user - Get all users ## Templates and examples **Create and update lead in Salesforce** by amudhan [View template details](https://n8n.io/workflows/664-create-and-update-lead-in-salesforce/) **Create Salesforce accounts based on Google Sheets data** by Tom [View template details](https://n8n.io/workflows/1792-create-salesforce-accounts-based-on-google-sheets-data/) **Create Salesforce accounts based on Excel 365 data** by Tom [View template details](https://n8n.io/workflows/1793-create-salesforce-accounts-based-on-excel-365-data/) [Browse Salesforce integration templates](https://n8n.io/integrations/salesforce/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Working with Salesforce custom fields To add custom fields to your request: 1. Select **Additional Fields** > **Add Field**. 1. In the dropdown, select **Custom Fields**. You can then find and add your custom fields. # Salesmate node Use the Salesmate node to automate work in Salesmate, and integrate Salesmate with other applications. 
n8n has built-in support for a wide range of Salesmate features, including creating, updating, deleting, and getting activities, companies, and deals. On this page, you'll find a list of operations the Salesmate node supports and links to more resources. Credentials Refer to [Salesmate credentials](../../credentials/salesmate/) for guidance on setting up authentication. ## Operations - Activity - Create an activity - Delete an activity - Get an activity - Get all activities - Update an activity - Company - Create a company - Delete a company - Get a company - Get all companies - Update a company - Deal - Create a deal - Delete a deal - Get a deal - Get all deals - Update a deal ## Templates and examples [Browse Salesmate integration templates](https://n8n.io/integrations/salesmate/), or [search all templates](https://n8n.io/workflows/) # SeaTable node Use the SeaTable node to automate work in SeaTable, and integrate SeaTable with other applications. n8n has built-in support for a wide range of SeaTable features, including creating, updating, deleting, and getting rows. On this page, you'll find a list of operations the SeaTable node supports and links to more resources. Credentials Refer to [SeaTable credentials](../../credentials/seatable/) for guidance on setting up authentication. ## Operations - Row - Create - Delete - Get - Get All - Update ## Templates and examples [Browse SeaTable integration templates](https://n8n.io/integrations/seatable/), or [search all templates](https://n8n.io/workflows/) # SecurityScorecard node Use the SecurityScorecard node to automate work in SecurityScorecard, and integrate SecurityScorecard with other applications. n8n has built-in support for a wide range of SecurityScorecard features, including creating, updating, deleting, and getting portfolios, as well as getting a company's data. On this page, you'll find a list of operations the SecurityScorecard node supports and links to more resources. 
Credentials Refer to [SecurityScorecard credentials](../../credentials/securityscorecard/) for guidance on setting up authentication. ## Operations - Company - Get company factor scores and issue counts - Get company's historical factor scores - Get company's historical scores - Get company information and summary of their scorecard - Get company's score improvement plan - Industry - Get Factor Scores - Get Historical Factor Scores - Get Score - Invite - Create an invite for a company/user - Portfolio - Create a portfolio - Delete a portfolio - Get all portfolios - Update a portfolio - Portfolio Company - Add a company to portfolio - Get all companies in a portfolio - Remove a company from portfolio - Report - Download a generated report - Generate a report - Get a list of recently generated reports ## Templates and examples [Browse SecurityScorecard integration templates](https://n8n.io/integrations/securityscorecard/), or [search all templates](https://n8n.io/workflows/) # Segment node Use the Segment node to automate work in Segment, and integrate Segment with other applications. n8n has built-in support for a wide range of Segment features, including adding users to groups, creating identities, and tracking activities. On this page, you'll find a list of operations the Segment node supports and links to more resources. Credentials Refer to [Segment credentials](../../credentials/segment/) for guidance on setting up authentication. ## Operations - Group - Add a user to a group - Identify - Create an identity - Track - Record the actions your users perform. Every action triggers an event, which can also have associated properties. - Record page views on your website, along with optional extra information about the page being viewed. 
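A Segment Track call boils down to a small JSON payload naming who did what, plus optional event properties. The sketch below shows the general shape of such a payload (the field values are hypothetical; refer to Segment's HTTP API documentation for the full schema the node populates):

```python
import json

# Minimal "track"-style payload: who did what, with optional properties.
payload = {
    "userId": "user-123",  # hypothetical user identifier
    "event": "Order Completed",
    "properties": {"total": 49.99, "currency": "USD"},
}
print(json.dumps(payload, indent=2))
```

In the node, these fields map to the Track operation's parameters, so you rarely need to build the JSON yourself.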
## Templates and examples **Auto-Scrape TikTok User Data via Dumpling AI and Segment in Airtable** by Yang [View template details](https://n8n.io/workflows/4326-auto-scrape-tiktok-user-data-via-dumpling-ai-and-segment-in-airtable/) **Weekly Google Search Console SEO Pulse: Catch Top Movers Across Keyword Segments** by MattF [View template details](https://n8n.io/workflows/6006-weekly-google-search-console-seo-pulse-catch-top-movers-across-keyword-segments/) **Create a customer and add them to a segment in Customer.io** by Harshil Agrawal [View template details](https://n8n.io/workflows/646-create-a-customer-and-add-them-to-a-segment-in-customerio/) [Browse Segment integration templates](https://n8n.io/integrations/segment/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # SendGrid node Use the SendGrid node to automate work in SendGrid, and integrate SendGrid with other applications. n8n has built-in support for a wide range of SendGrid features, including creating, updating, deleting, and getting contacts and lists, as well as sending emails. On this page, you'll find a list of operations the SendGrid node supports and links to more resources. Credentials Refer to [SendGrid credentials](../../credentials/sendgrid/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Contact - Create/update a contact - Delete a contact - Get a contact by ID - Get all contacts - List - Create a list - Delete a list - Get a list - Get all lists - Update a list - Mail - Send an email ## Templates and examples **Track investments using Baserow and n8n** by Tom [View template details](https://n8n.io/workflows/1785-track-investments-using-baserow-and-n8n/) **Automated Email Optin Form with n8n and Hunter io for verification** by Keith Rumjahn [View template details](https://n8n.io/workflows/2709-automated-email-optin-form-with-n8n-and-hunter-io-for-verification/) **Add contacts to SendGrid automatically** by Harshil Agrawal [View template details](https://n8n.io/workflows/901-add-contacts-to-sendgrid-automatically/) [Browse SendGrid integration templates](https://n8n.io/integrations/sendgrid/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Sendy node Use the Sendy node to automate work in Sendy, and integrate Sendy with other applications. n8n has built-in support for a wide range of Sendy features, including creating campaigns and adding, counting, deleting, and getting subscribers. 
On this page, you'll find a list of operations the Sendy node supports and links to more resources. Credentials Refer to [Sendy credentials](../../credentials/sendy/) for guidance on setting up authentication. ## Operations - Campaign - Create a campaign - Subscriber - Add a subscriber to a list - Count subscribers - Delete a subscriber from a list - Unsubscribe a user from a list - Get the status of a subscriber ## Templates and examples **Send automated campaigns in Sendy** by Harshil Agrawal [View template details](https://n8n.io/workflows/727-send-automated-campaigns-in-sendy/) **Enviar Miembros del CMS Ghost hacia Newsletter Sendy** by The { AI } rtist [View template details](https://n8n.io/workflows/629-enviar-miembros-del-cms-ghost-hacia-newsletter-sendy/) **🛠️ Sendy Tool MCP Server 💪 6 operations** by David Ashby [View template details](https://n8n.io/workflows/5347-sendy-tool-mcp-server-6-operations/) [Browse Sendy integration templates](https://n8n.io/integrations/sendy/), or [search all templates](https://n8n.io/workflows/) # Sentry.io node Use the Sentry.io node to automate work in Sentry.io, and integrate Sentry.io with other applications. n8n has built-in support for a wide range of Sentry.io features, including creating, updating, deleting, and getting issues, projects, and releases, as well as getting all events. On this page, you'll find a list of operations the Sentry.io node supports and links to more resources. Credentials Refer to [Sentry.io credentials](../../credentials/sentryio/) for guidance on setting up authentication.
## Operations - Event - Get event by ID - Get all events - Issue - Delete an issue - Get issue by ID - Get all issues - Update an issue - Project - Create a new project - Delete a project - Get project by ID - Get all projects - Update a project - Release - Create a release - Delete a release - Get release by version identifier - Get all releases - Update a release - Organization - Create an organization - Get organization by slug - Get all organizations - Update an organization - Team - Create a new team - Delete a team - Get team by slug - Get all teams - Update a team ## Templates and examples [Browse Sentry.io integration templates](https://n8n.io/integrations/sentryio/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Sentry.io's documentation](https://docs.sentry.io/api/) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # ServiceNow node Use the ServiceNow node to automate work in ServiceNow, and integrate ServiceNow with other applications. n8n has built-in support for a wide range of ServiceNow features, including getting business services, departments, configuration items, and dictionary as well as creating, updating, and deleting incidents, users, and table records. On this page, you'll find a list of operations the ServiceNow node supports and links to more resources. Credentials Refer to [ServiceNow credentials](../../credentials/servicenow/) for guidance on setting up authentication. 
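The ServiceNow node's record operations generally map to ServiceNow's REST Table API. As a hedged sketch (not the node's actual implementation; the instance name, table, and query parameter values below are placeholders), a "Get All" style request is built like this:

```python
# Sketch of a ServiceNow Table API request URL, the kind of call the
# node's Table Record operations wrap. Instance/table names are placeholders.
from urllib.parse import urlencode

def table_api_url(instance: str, table: str, **params) -> str:
    """Build a ServiceNow Table API URL; a GET returns matching records as JSON."""
    base = f"https://{instance}.service-now.com/api/now/table/{table}"
    return f"{base}?{urlencode(params)}" if params else base

url = table_api_url("dev12345", "incident", sysparm_limit=10)
# "https://dev12345.service-now.com/api/now/table/incident?sysparm_limit=10"
```

A URL of this shape is also what you would enter in an HTTP Request node if you need a ServiceNow operation this node doesn't cover.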
## Operations - Business Service - Get All - Configuration Items - Get All - Department - Get All - Dictionary - Get All - Incident - Create - Delete - Get - Get All - Update - Table Record - Create - Delete - Get - Get All - Update - User - Create - Delete - Get - Get All - Update - User Group - Get All - User Role - Get All ## Templates and examples **ServiceNow Incident Notifications to Slack Workflow** by Angel Menendez [View template details](https://n8n.io/workflows/2704-servicenow-incident-notifications-to-slack-workflow/) **List recent ServiceNow Incidents in Slack Using Pop Up Modal** by Angel Menendez [View template details](https://n8n.io/workflows/2728-list-recent-servicenow-incidents-in-slack-using-pop-up-modal/) **Display ServiceNow Incident Details in Slack using Slash Commands** by Angel Menendez [View template details](https://n8n.io/workflows/2727-display-servicenow-incident-details-in-slack-using-slash-commands/) [Browse ServiceNow integration templates](https://n8n.io/integrations/servicenow/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Shopify node Use the Shopify node to automate work in Shopify, and integrate Shopify with other applications. n8n has built-in support for a wide range of Shopify features, including creating, updating, deleting, and getting orders and products. On this page, you'll find a list of operations the Shopify node supports and links to more resources. 
Credentials Refer to [Shopify credentials](../../credentials/shopify/) for guidance on setting up authentication. ## Operations - Order - Create an order - Delete an order - Get an order - Get all orders - Update an order - Product - Create a product - Delete a product - Get a product - Get all products - Update a product ## Templates and examples **Promote new Shopify products on Twitter and Telegram** by Lorena [View template details](https://n8n.io/workflows/1205-promote-new-shopify-products-on-twitter-and-telegram/) **Run weekly inventories on Shopify sales** by Lorena [View template details](https://n8n.io/workflows/1207-run-weekly-inventories-on-shopify-sales/) **Process Shopify new orders with Zoho CRM and Harvest** by Lorena [View template details](https://n8n.io/workflows/1206-process-shopify-new-orders-with-zoho-crm-and-harvest/) [Browse Shopify integration templates](https://n8n.io/integrations/shopify/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # SIGNL4 node Use the SIGNL4 node to automate work in SIGNL4, and integrate SIGNL4 with other applications. n8n supports sending and resolving alerts with SIGNL4. On this page, you'll find a list of operations the SIGNL4 node supports and links to more resources. Credentials Refer to [SIGNL4 credentials](../../credentials/signl4/) for guidance on setting up authentication. 
## Operations - Alert - Send an alert - Resolve an alert ## Templates and examples **Monitor a file for changes and send an alert** by Ron [View template details](https://n8n.io/workflows/967-monitor-a-file-for-changes-and-send-an-alert/) **Send weather alerts to your mobile phone with OpenWeatherMap and SIGNL4** by Ron [View template details](https://n8n.io/workflows/966-send-weather-alerts-to-your-mobile-phone-with-openweathermap-and-signl4/) **Send TheHive Alerts Using SIGNL4** by Ron [View template details](https://n8n.io/workflows/1630-send-thehive-alerts-using-signl4/) [Browse SIGNL4 integration templates](https://n8n.io/integrations/signl4/), or [search all templates](https://n8n.io/workflows/) # Slack node Use the Slack node to automate work in Slack, and integrate Slack with other applications. n8n has built-in support for a wide range of Slack features, including creating, archiving, and closing channels, getting users and files, as well as deleting messages. On this page, you'll find a list of operations the Slack node supports and links to more resources. Credentials Refer to [Slack credentials](../../credentials/slack/) for guidance on setting up authentication. Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../advanced-ai/human-in-the-loop-tools/). ## Operations - **Channel** - **Archive** a channel. - **Close** a direct message or multi-person direct message. - **Create** a public or private channel-based conversation. - **Get** information about a channel. - **Get Many**: Get a list of channels in Slack. - **History**: Get a channel's history of messages and events. - **Invite** a user to a channel. - **Join** an existing channel. - **Kick**: Remove a user from a channel. 
- **Leave** a channel. - **Member**: List the members of a channel. - **Open** or resume a direct message or multi-person direct message. - **Rename** a channel. - **Replies**: Get a thread of messages posted to a channel. - **Set Purpose**: Set the purpose of a channel. - **Set Topic**: Set the topic of a channel. - **Unarchive** a channel. - **File** - **Get** a file. - **Get Many**: Get and filter team files. - **Upload**: Create or upload an existing file. - **Message** - **Delete** a message. - **Get Permalink**: Get a message's permalink. - **Search** for messages. - **Send** a message. - **Send and Wait for Response**: Send a message and wait for a response from the recipient before continuing. - **Update** a message. - **Reaction** - **Add** a reaction to a message. - **Get** a message's reactions. - **Remove** a reaction from a message. - **Star** - **Add** a star to an item. - **Delete** a star from an item. - **Get Many**: Get a list of an authenticated user's stars. - **User** - **Get** information about a user. - **Get Many**: Get a list of users. - **Get User's Profile**. - **Get User's Status**. - **Update User's Profile**. - **User Group** - **Create** a user group. - **Disable** a user group. - **Enable** a user group. - **Get Many**: Get a list of user groups. - **Update** a user group.
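For orientation, the Message "Send" operation corresponds to Slack's `chat.postMessage` method (see the scopes table on this page). A minimal sketch of the JSON body such a call posts — the channel ID, text, and thread timestamp below are placeholder values, not anything the node requires:

```python
# Sketch of a minimal chat.postMessage body (Slack Web API).
import json
from typing import Optional

def post_message_body(channel: str, text: str, thread_ts: Optional[str] = None) -> str:
    """Serialize a minimal chat.postMessage payload."""
    body = {"channel": channel, "text": text}
    if thread_ts is not None:
        body["thread_ts"] = thread_ts  # reply inside a thread instead of the channel
    return json.dumps(body)

payload = post_message_body("C0123456789", "Deploy finished")
```

The same body shape applies whether the node sends it for you or you post it yourself from an HTTP Request node.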
## Templates and examples **Back Up Your n8n Workflows To Github** by Jonathan [View template details](https://n8n.io/workflows/1534-back-up-your-n8n-workflows-to-github/) **Slack chatbot powered by AI** by n8n Team [View template details](https://n8n.io/workflows/1961-slack-chatbot-powered-by-ai/) **IT Ops AI SlackBot Workflow - Chat with your knowledge base** by Angel Menendez [View template details](https://n8n.io/workflows/2397-it-ops-ai-slackbot-workflow-chat-with-your-knowledge-base/) [Browse Slack integration templates](https://n8n.io/integrations/slack/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Slack's documentation](https://api.slack.com/) for more information about the service. ## Required scopes Once you create a Slack app for your [Slack credentials](../../credentials/slack/), you must add the appropriate scopes to your Slack app for this node to work. Start with the scopes listed in the [Scopes | Slack credentials](../../credentials/slack/#scopes) page. If those aren't enough, use the table below to look up the resource and operation you want to use, then follow the link to Slack's API documentation to find the correct scopes. 
| **Resource** | **Operation** | **Slack API method** | | ------------ | -------------------------- | ---------------------------------------------------------------------------------- | | Channel | Archive | [conversations.archive](https://api.slack.com/methods/conversations.archive) | | Channel | Close | [conversations.close](https://api.slack.com/methods/conversations.close) | | Channel | Create | [conversations.create](https://api.slack.com/methods/conversations.create) | | Channel | Get | [conversations.info](https://api.slack.com/methods/conversations.info) | | Channel | Get Many | [conversations.list](https://api.slack.com/methods/conversations.list) | | Channel | History | [conversations.history](https://api.slack.com/methods/conversations.history) | | Channel | Invite | [conversations.invite](https://api.slack.com/methods/conversations.invite) | | Channel | Join | [conversations.join](https://api.slack.com/methods/conversations.join) | | Channel | Kick | [conversations.kick](https://api.slack.com/methods/conversations.kick) | | Channel | Leave | [conversations.leave](https://api.slack.com/methods/conversations.leave) | | Channel | Member | [conversations.members](https://api.slack.com/methods/conversations.members) | | Channel | Open | [conversations.open](https://api.slack.com/methods/conversations.open) | | Channel | Rename | [conversations.rename](https://api.slack.com/methods/conversations.rename) | | Channel | Replies | [conversations.replies](https://api.slack.com/methods/conversations.replies) | | Channel | Set Purpose | [conversations.setPurpose](https://api.slack.com/methods/conversations.setPurpose) | | Channel | Set Topic | [conversations.setTopic](https://api.slack.com/methods/conversations.setTopic) | | Channel | Unarchive | [conversations.unarchive](https://api.slack.com/methods/conversations.unarchive) | | File | Get | [files.info](https://api.slack.com/methods/files.info) | | File | Get Many | 
[files.list](https://api.slack.com/methods/files.list) | | File | Upload | [files.upload](https://api.slack.com/methods/files.upload) | | Message | Delete | [chat.delete](https://api.slack.com/methods/chat.delete) | | Message | Get Permalink | [chat.getPermalink](https://api.slack.com/methods/chat.getPermalink) | | Message | Search | [search.messages](https://api.slack.com/methods/search.messages) | | Message | Send | [chat.postMessage](https://api.slack.com/methods/chat.postMessage) | | Message | Send and Wait for Response | [chat.postMessage](https://api.slack.com/methods/chat.postMessage) | | Message | Update | [chat.update](https://api.slack.com/methods/chat.update) | | Reaction | Add | [reactions.add](https://api.slack.com/methods/reactions.add) | | Reaction | Get | [reactions.get](https://api.slack.com/methods/reactions.get) | | Reaction | Remove | [reactions.remove](https://api.slack.com/methods/reactions.remove) | | Star | Add | [stars.add](https://api.slack.com/methods/stars.add) | | Star | Delete | [stars.remove](https://api.slack.com/methods/stars.remove) | | Star | Get Many | [stars.list](https://api.slack.com/methods/stars.list) | | User | Get | [users.info](https://api.slack.com/methods/users.info) | | User | Get Many | [users.list](https://api.slack.com/methods/users.list) | | User | Get User's Profile | [users.profile.get](https://api.slack.com/methods/users.profile.get) | | User | Get User's Status | [users.getPresence](https://api.slack.com/methods/users.getPresence) | | User | Update User's Profile | [users.profile.set](https://api.slack.com/methods/users.profile.set) | | User Group | Create | [usergroups.create](https://api.slack.com/methods/usergroups.create) | | User Group | Disable | [usergroups.disable](https://api.slack.com/methods/usergroups.disable) | | User Group | Enable | [usergroups.enable](https://api.slack.com/methods/usergroups.enable) | | User Group | Get Many | [usergroups.list](https://api.slack.com/methods/usergroups.list) | | 
User Group | Update | [usergroups.update](https://api.slack.com/methods/usergroups.update) | ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # seven node Use the seven node to automate work in seven, and integrate seven with other applications. n8n has built-in support for a wide range of seven features, including sending SMS and converting text to voice. On this page, you'll find a list of operations the seven node supports and links to more resources. Credentials Refer to [seven credentials](../../credentials/sms77/) for guidance on setting up authentication. ## Operations - SMS - Send SMS - Voice Call - Convert text to voice and call a given number ## Templates and examples **Automate WhatsApp Booking System with GPT-4 Assistant, Cal.com and SMS Reminders** by Dr. Firas [View template details](https://n8n.io/workflows/4949-automate-whatsapp-booking-system-with-gpt-4-assistant-calcom-and-sms-reminders/) **Sending an SMS using sms77** by tanaypant [View template details](https://n8n.io/workflows/469-sending-an-sms-using-sms77/) **🛠️ seven Tool MCP Server with both available operations** by David Ashby [View template details](https://n8n.io/workflows/5088-seven-tool-mcp-server-with-both-available-operations/) [Browse seven integration templates](https://n8n.io/integrations/sms77/), or [search all templates](https://n8n.io/workflows/) # Snowflake node Use the Snowflake node to automate work in Snowflake, and integrate Snowflake with other applications.
n8n has built-in support for a wide range of Snowflake features, including executing SQL queries and inserting rows in a database. On this page, you'll find a list of operations the Snowflake node supports and links to more resources. Credentials Refer to [Snowflake credentials](../../credentials/snowflake/) for guidance on setting up authentication. ## Operations - Execute an SQL query. - Insert rows in a database. - Update rows in a database. ## Templates and examples **Load data into Snowflake** by n8n Team [View template details](https://n8n.io/workflows/1918-load-data-into-snowflake/) **Create a table, and insert and update data in the table in Snowflake** by Harshil Agrawal [View template details](https://n8n.io/workflows/824-create-a-table-and-insert-and-update-data-in-the-table-in-snowflake/) **Import Productboard Notes, Companies and Features into Snowflake** by Romain Jouhannet [View template details](https://n8n.io/workflows/2576-import-productboard-notes-companies-and-features-into-snowflake/) [Browse Snowflake integration templates](https://n8n.io/integrations/snowflake/), or [search all templates](https://n8n.io/workflows/) # Splunk node Use the Splunk node to automate work in Splunk, and integrate Splunk with other applications. n8n has built-in support for a wide range of Splunk features, including getting fired alert reports, as well as deleting and getting search configurations. On this page, you'll find a list of operations the Splunk node supports and links to more resources. Credentials Refer to [Splunk credentials](../../credentials/splunk/) for guidance on setting up authentication.
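Splunk's Search Job operations talk to Splunk's REST search-jobs endpoint (`POST /services/search/jobs`), which accepts a form-encoded SPL query. A hedged sketch of building that form body — the query string is a placeholder, and this is not the node's actual code:

```python
# Sketch of the form body for creating a Splunk search job over REST.
from urllib.parse import urlencode

def search_job_form(spl: str) -> str:
    """Form body for POST /services/search/jobs; SPL must start with a command."""
    if not spl.lstrip().startswith("search"):
        spl = "search " + spl  # Splunk's REST API requires an explicit leading command
    return urlencode({"search": spl, "output_mode": "json"})

body = search_job_form("index=main error")
# "search=search+index%3Dmain+error&output_mode=json"
```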
## Operations - Fired Alert - Get a fired alerts report - Search Configuration - Delete a search configuration - Get a search configuration - Get many search configurations - Search Job - Create a search job - Delete a search job - Get a search job - Get many search jobs - Search Result - Get many search results - User - Create a user - Delete a user - Get a user - Get many users - Update a user ## Templates and examples **Create Unique Jira tickets from Splunk alerts** by n8n Team [View template details](https://n8n.io/workflows/1970-create-unique-jira-tickets-from-splunk-alerts/) **🛠️ Splunk Tool MCP Server 💪 all 16 operations** by David Ashby [View template details](https://n8n.io/workflows/5359-splunk-tool-mcp-server-all-16-operations/) **IP Reputation Check & SOC Alerts with Splunk, VirusTotal and AlienVault** by Rajneesh Gupta [View template details](https://n8n.io/workflows/6037-ip-reputation-check-and-soc-alerts-with-splunk-virustotal-and-alienvault/) [Browse Splunk integration templates](https://n8n.io/integrations/splunk/), or [search all templates](https://n8n.io/workflows/) # Spotify node Use the Spotify node to automate work in Spotify, and integrate Spotify with other applications. n8n has built-in support for a wide range of Spotify features, including getting album and artist information. On this page, you'll find a list of operations the Spotify node supports and links to more resources. Credentials Refer to [Spotify credentials](../../credentials/spotify/) for guidance on setting up authentication. ## Operations - Album - Get an album by URI or ID. - Get a list of new album releases. - Get an album's tracks by URI or ID. - Search albums by keyword. - Artist - Get an artist by URI or ID. - Get an artist's albums by URI or ID. - Get an artist's related artists by URI or ID. - Get an artist's top tracks by URI or ID. - Search artists by keyword. - Library - Get the user's liked tracks. - My Data - Get your followed artists. 
- Player - Add a song to your queue. - Get your currently playing track. - Skip to your next track. - Pause your music. - Skip to your previous song. - Get your recently played tracks. - Resume playback on the current active device. - Set volume on the current active device. - Start playing a playlist, artist, or album. - Playlist - Add tracks to a playlist by track and playlist URI or ID. - Create a new playlist. - Get a playlist by URI or ID. - Get a playlist's tracks by URI or ID. - Get a user's playlists. - Remove tracks from a playlist by track and playlist URI or ID. - Search playlists by keyword. - Track - Get a track by its URI or ID. - Get audio features for a track by URI or ID. - Search tracks by keyword. ## Templates and examples **Add liked songs to a Spotify monthly playlist** by Lucas [View template details](https://n8n.io/workflows/1074-add-liked-songs-to-a-spotify-monthly-playlist/) **IOT Button Remote / Spotify Control Integration with MQTT** by Hubschrauber [View template details](https://n8n.io/workflows/2383-iot-button-remote-spotify-control-integration-with-mqtt/) **Download recently liked songs automatically with Spotify** by Mario [View template details](https://n8n.io/workflows/2285-download-recently-liked-songs-automatically-with-spotify/) [Browse Spotify integration templates](https://n8n.io/integrations/spotify/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information.
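As a concrete example of that HTTP Request fallback, the Spotify Web API's `/v1/search` endpoint backs keyword searches. A sketch of the URL such a request would use — the query values here are placeholders:

```python
# Sketch of a Spotify Web API search URL (GET /v1/search).
from urllib.parse import urlencode

API_BASE = "https://api.spotify.com/v1"

def search_url(query: str, kind: str, limit: int = 20) -> str:
    """Build a /v1/search URL; kind is e.g. 'album', 'artist', 'playlist', or 'track'."""
    return f"{API_BASE}/search?{urlencode({'q': query, 'type': kind, 'limit': limit})}"

url = search_url("OK Computer", "album", limit=5)
# "https://api.spotify.com/v1/search?q=OK+Computer&type=album&limit=5"
```

In the HTTP Request node you'd use a URL like this with your Spotify credential attached as a predefined credential type.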
# Stackby node Use the Stackby node to automate work in Stackby, and integrate Stackby with other applications. n8n has built-in support for a wide range of Stackby features, including appending, deleting, listing, and reading records. On this page, you'll find a list of operations the Stackby node supports and links to more resources. Credentials Refer to [Stackby credentials](../../credentials/stackby/) for guidance on setting up authentication. ## Operations - Append - Delete - List - Read ## Templates and examples [Browse Stackby integration templates](https://n8n.io/integrations/stackby/), or [search all templates](https://n8n.io/workflows/) # Storyblok node Use the Storyblok node to automate work in Storyblok, and integrate Storyblok with other applications. n8n has built-in support for a wide range of Storyblok features, including getting, deleting, and publishing stories. On this page, you'll find a list of operations the Storyblok node supports and links to more resources. Credentials Refer to [Storyblok credentials](../../credentials/storyblok/) for guidance on setting up authentication. ## Operations ### Content API - **Story** - Get a story - Get all stories ### Management API - **Story** - Delete a story - Get a story - Get all stories - Publish a story - Unpublish a story ## Templates and examples [Browse Storyblok integration templates](https://n8n.io/integrations/storyblok/), or [search all templates](https://n8n.io/workflows/) # Strapi node Use the Strapi node to automate work in Strapi, and integrate Strapi with other applications. n8n has built-in support for a wide range of Strapi features, including creating and deleting entries. On this page, you'll find a list of operations the Strapi node supports and links to more resources. Credentials Refer to [Strapi credentials](../../credentials/strapi/) for guidance on setting up authentication.
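The Strapi Entry operations map to Strapi's REST API; in Strapi v4, writable fields must be wrapped in a top-level `data` object. A hedged sketch of a create-entry body (the content-type fields `title` and `slug` are placeholders, not anything Strapi or the node prescribes):

```python
# Sketch of a Strapi v4 REST create-entry request body.
import json

def create_entry_body(**fields) -> str:
    """Strapi v4 wraps writable fields in a top-level 'data' object."""
    return json.dumps({"data": fields})

body = create_entry_body(title="Hello", slug="hello")
# '{"data": {"title": "Hello", "slug": "hello"}}'
```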
## Operations - Entry - Create - Delete - Get - Get Many - Update ## Templates and examples **Enrich FAQ sections on your website pages at scale with AI** by Polina Medvedieva [View template details](https://n8n.io/workflows/2434-enrich-faq-sections-on-your-website-pages-at-scale-with-ai/) **Create, update, and get an entry in Strapi** by Harshil Agrawal [View template details](https://n8n.io/workflows/779-create-update-and-get-an-entry-in-strapi/) **Automate testimonials in Strapi with n8n** by Tom [View template details](https://n8n.io/workflows/1535-automate-testimonials-in-strapi-with-n8n/) [Browse Strapi integration templates](https://n8n.io/integrations/strapi/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Strapi's documentation](https://docs.strapi.io/dev-docs/api/rest) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Strava node Use the Strava node to automate work in Strava, and integrate Strava with other applications. n8n has built-in support for a wide range of Strava features, including creating new activities, and getting activity information. On this page, you'll find a list of operations the Strava node supports and links to more resources. Credentials Refer to [Strava credentials](../../credentials/strava/) for guidance on setting up authentication. 
## Operations - Activity - Create a new activity - Get an activity - Get all activities - Get all activity comments - Get all activity kudos - Get all activity laps - Get all activity zones - Update an activity ## Templates and examples **AI Fitness Coach Strava Data Analysis and Personalized Training Insights** by Amjid Ali [View template details](https://n8n.io/workflows/2790-ai-fitness-coach-strava-data-analysis-and-personalized-training-insights/) **Export all Strava Activity Data to Google Sheets** by Sherlockes [View template details](https://n8n.io/workflows/2678-export-all-strava-activity-data-to-google-sheets/) **Receive updates when a new activity gets created and tweet about it** by Harshil Agrawal [View template details](https://n8n.io/workflows/745-receive-updates-when-a-new-activity-gets-created-and-tweet-about-it/) [Browse Strava integration templates](https://n8n.io/integrations/strava/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Stripe node Use the Stripe node to automate work in Stripe, and integrate Stripe with other applications. n8n has built-in support for a wide range of Stripe features, including getting your balance, creating charges and meter events, and deleting customers. On this page, you'll find a list of operations the Stripe node supports and links to more resources. Credentials Refer to [Stripe credentials](../../credentials/stripe/) for guidance on setting up authentication.
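For context, Stripe's charge-creation endpoint (`POST /v1/charges`) takes a form-encoded body with the amount expressed as an integer in the currency's smallest unit. A minimal sketch of that body — the values are placeholders, and `tok_visa` is Stripe's standard test token, not a live source:

```python
# Sketch of a form-encoded Stripe create-charge body.
from urllib.parse import urlencode

def charge_form(amount_cents: int, currency: str, source: str) -> str:
    """Body for POST https://api.stripe.com/v1/charges (amount in smallest unit)."""
    return urlencode({"amount": amount_cents, "currency": currency, "source": source})

body = charge_form(1999, "usd", "tok_visa")  # $19.99
# "amount=1999&currency=usd&source=tok_visa"
```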
## Operations - Balance - Get a balance - Charge - Create a charge - Get a charge - Get all charges - Update a charge - Coupon - Create a coupon - Get all coupons - Customer - Create a customer - Delete a customer - Get a customer - Get all customers - Update a customer - Customer Card - Add a customer card - Get a customer card - Remove a customer card - Meter Event - Create a meter event - Source - Create a source - Delete a source - Get a source - Token - Create a token ## Templates and examples **Update HubSpot when a new invoice is registered in Stripe** by Jonathan [View template details](https://n8n.io/workflows/1468-update-hubspot-when-a-new-invoice-is-registered-in-stripe/) **Simplest way to create a Stripe Payment Link** by Emmanuel Bernard - n8n Expert Lausanne [View template details](https://n8n.io/workflows/2195-simplest-way-to-create-a-stripe-payment-link/) **Streamline Your Zoom Meetings with Secure, Automated Stripe Payments** by Emmanuel Bernard - n8n Expert Lausanne [View template details](https://n8n.io/workflows/2192-streamline-your-zoom-meetings-with-secure-automated-stripe-payments/) [Browse Stripe integration templates](https://n8n.io/integrations/stripe/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # SyncroMSP node Use the SyncroMSP node to automate work in SyncroMSP, and integrate SyncroMSP with other applications. 
n8n has built-in support for a wide range of SyncroMSP features, including creating and deleting customers, tickets, and contacts. On this page, you'll find a list of operations the SyncroMSP node supports and links to more resources. Credentials Refer to [SyncroMSP credentials](../../credentials/syncromsp/) for guidance on setting up authentication. ## Operations - Contact - Create new contact - Delete contact - Retrieve contact - Retrieve all contacts - Update contact - Customer - Create new customer - Delete customer - Retrieve customer - Retrieve all customers - Update customer - RMM - Create new RMM Alert - Delete RMM Alert - Retrieve RMM Alert - Retrieve all RMM Alerts - Mute RMM Alert - Ticket - Create new ticket - Delete ticket - Retrieve ticket - Retrieve all tickets - Update ticket ## Templates and examples [Browse SyncroMSP integration templates](https://n8n.io/integrations/syncromsp/), or [search all templates](https://n8n.io/workflows/) # Taiga node Use the Taiga node to automate work in Taiga, and integrate Taiga with other applications. n8n has built-in support for a wide range of Taiga features, including creating, updating, deleting, and getting issues. On this page, you'll find a list of operations the Taiga node supports and links to more resources. Credentials Refer to [Taiga credentials](../../credentials/taiga/) for guidance on setting up authentication.
## Operations - **Issue** - Create an issue - Delete an issue - Get an issue - Get all issues - Update an issue ## Templates and examples **Create, update, and get an issue on Taiga** by Harshil Agrawal [View template details](https://n8n.io/workflows/685-create-update-and-get-an-issue-on-taiga/) **Receive updates when an event occurs in Taiga** by Harshil Agrawal [View template details](https://n8n.io/workflows/686-receive-updates-when-an-event-occurs-in-taiga/) **Automate Service Ticket Triage with GPT-4o & Taiga** by Eric Mooney [View template details](https://n8n.io/workflows/4665-automate-service-ticket-triage-with-gpt-4o-and-taiga/) [Browse Taiga integration templates](https://n8n.io/integrations/taiga/), or [search all templates](https://n8n.io/workflows/) # Tapfiliate node Use the Tapfiliate node to automate work in Tapfiliate, and integrate Tapfiliate with other applications. n8n has built-in support for a wide range of Tapfiliate features, including creating and deleting affiliates, and adding affiliate metadata. On this page, you'll find a list of operations the Tapfiliate node supports and links to more resources. Credentials Refer to [Tapfiliate credentials](../../credentials/tapfiliate/) for guidance on setting up authentication. ## Operations - Affiliate - Create an affiliate - Delete an affiliate - Get an affiliate by ID - Get all affiliates - Affiliate Metadata - Add metadata to affiliate - Remove metadata from affiliate - Update affiliate's metadata - Program Affiliate - Add affiliate to program - Approve an affiliate for a program - Disapprove an affiliate - Get an affiliate in a program - Get all affiliates in program ## Templates and examples [Browse Tapfiliate integration templates](https://n8n.io/integrations/tapfiliate/), or [search all templates](https://n8n.io/workflows/) # TheHive node Use the TheHive node to automate work in TheHive, and integrate TheHive with other applications. 
n8n has built-in support for a wide range of TheHive features, including creating alerts, counting task logs, cases, and observables. On this page, you'll find a list of operations the TheHive node supports and links to more resources. TheHive and TheHive 5 n8n provides two nodes for TheHive. Use this node (TheHive) if you want to use TheHive's version 3 or 4 API. If you want to use version 5, use [TheHive 5](../n8n-nodes-base.thehive5/). Credentials Refer to [TheHive credentials](../../credentials/thehive/) for guidance on setting up authentication. ## Operations The available operations depend on your API version. To see the operations list, create your credentials, including selecting your API version. Then return to the node, select the resource you want to use, and n8n displays the available operations for your API version. - Alert - Case - Log - Observable - Task ## Templates and examples **Analyze emails with S1EM** by v1d1an [View template details](https://n8n.io/workflows/1602-analyze-emails-with-s1em/) **Weekly Shodan Query - Report Accidents** by n8n Team [View template details](https://n8n.io/workflows/1977-weekly-shodan-query-report-accidents/) **Create, update and get a case in TheHive** by Harshil Agrawal [View template details](https://n8n.io/workflows/808-create-update-and-get-a-case-in-thehive/) [Browse TheHive integration templates](https://n8n.io/integrations/thehive/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. 
Refer to [Custom API operations](../../../custom-operations/) for more information. ## Related resources n8n provides a trigger node for TheHive. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.thehivetrigger/). Refer to TheHive's documentation for more information about the service: - [Version 3](https://docs.thehive-project.org/thehive/legacy/thehive3/api/) - [Version 4](https://docs.thehive-project.org/cortex/api/api-guide/) # TheHive 5 node Use the TheHive 5 node to automate work in TheHive, and integrate TheHive with other applications. n8n has built-in support for a wide range of TheHive features, including creating alerts, counting task logs, cases, and observables. On this page, you'll find a list of operations the TheHive 5 node supports and links to more resources. TheHive and TheHive 5 n8n provides two nodes for TheHive. Use this node (TheHive 5) if you want to use TheHive's version 5 API. If you want to use version 3 or 4, use [TheHive](../n8n-nodes-base.thehive/). Credentials Refer to [TheHive credentials](../../credentials/thehive5/) for guidance on setting up authentication. 
## Operations - Alert - Create - Delete - Execute Responder - Get - Merge Into Case - Promote to Case - Search - Update - Update Status - Case - Add Attachment - Create - Delete Attachment - Delete Case - Execute Responder - Get - Get Attachment - Get Timeline - Search - Update - Comment - Create - Delete - Search - Update - Observable - Create - Delete - Execute Analyzer - Execute Responder - Get - Search - Update - Page - Create - Delete - Search - Update - Query - Execute Query - Task - Create - Delete - Execute Responder - Get - Search - Update - Task Log - Add Attachment - Create - Delete - Delete Attachment - Execute Responder - Get - Search ## Templates and examples [Browse TheHive 5 integration templates](https://n8n.io/integrations/thehive-5/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for TheHive. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.thehive5trigger/). Refer to TheHive's [documentation](https://docs.strangebee.com/) for more information about the service. # TimescaleDB node Use the TimescaleDB node to automate work in TimescaleDB, and integrate TimescaleDB with other applications. n8n has built-in support for a wide range of TimescaleDB features, including executing an SQL query, as well as inserting and updating rows in a database. On this page, you'll find a list of operations the TimescaleDB node supports and links to more resources. Credentials Refer to [TimescaleDB credentials](../../credentials/timescaledb/) for guidance on setting up authentication. 
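The **Specify a column's data type** section later on this page describes the `name:type` syntax the **Columns** field uses for the insert and update operations. As a quick illustration of that format (`parse_columns` is a hypothetical helper for this sketch, not part of the n8n node):

```python
# Sketch of the "name:type" Columns syntax, e.g. "id:int,name:text".
# parse_columns is a hypothetical helper, for illustration only.

def parse_columns(spec: str) -> list:
    """Split a Columns value like 'id:int,name:text' into (name, type) pairs."""
    pairs = []
    for part in spec.split(","):
        name, _, col_type = part.strip().partition(":")
        pairs.append((name, col_type or None))  # no ':type' means no explicit type
    return pairs

print(parse_columns("id:int,name:text"))
# [('id', 'int'), ('name', 'text')]
```

So `id:int,name:text` names two columns, **id** typed `int` and **name** typed `text`; a bare column name carries no explicit type.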
## Operations - Execute an SQL query - Insert rows in database - Update rows in database ## Templates and examples [Browse TimescaleDB integration templates](https://n8n.io/integrations/timescaledb/), or [search all templates](https://n8n.io/workflows/) ## Specify a column's data type To specify a column's data type, append the column name with `:type`, where `type` is the data type you want for the column. For example, if you want to specify the type `int` for the column **id** and type `text` for the column **name**, you can use the following snippet in the **Columns** field: `id:int,name:text`. # Todoist node Use the Todoist node to automate work in Todoist, and integrate Todoist with other applications. n8n has built-in support for a wide range of Todoist features, including creating, updating, deleting, and getting tasks. On this page, you'll find a list of operations the Todoist node supports and links to more resources. Credentials Refer to [Todoist credentials](../../credentials/todoist/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Task - Create a new task - Close a task - Delete a task - Get a task - Get all tasks - Reopen a task - Update a task ## Templates and examples **Realtime Notion Todoist 2-way sync with Redis** by Mario [View template details](https://n8n.io/workflows/2772-realtime-notion-todoist-2-way-sync-with-redis/) **Sync tasks automatically from Todoist to Notion** by n8n Team [View template details](https://n8n.io/workflows/1778-sync-tasks-automatically-from-todoist-to-notion/) **Effortless Task Management: Create Todoist Tasks Directly from Telegram with AI** by Onur [View template details](https://n8n.io/workflows/3052-effortless-task-management-create-todoist-tasks-directly-from-telegram-with-ai/) [Browse Todoist integration templates](https://n8n.io/integrations/todoist/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Travis CI node Use the Travis CI node to automate work in Travis CI, and integrate Travis CI with other applications. n8n has built-in support for a wide range of Travis CI features, including cancelling and getting builds. On this page, you'll find a list of operations the Travis CI node supports and links to more resources. Credentials Refer to [Travis CI credentials](../../credentials/travisci/) for guidance on setting up authentication. 
## Operations - Build - Cancel a build - Get a build - Get all builds - Restart a build - Trigger a build ## Templates and examples [Browse Travis CI integration templates](https://n8n.io/integrations/travisci/), or [search all templates](https://n8n.io/workflows/) # Trello node Use the Trello node to automate work in Trello, and integrate Trello with other applications. n8n has built-in support for a wide range of Trello features, including creating and updating cards, and adding and removing members. On this page, you'll find a list of operations the Trello node supports and links to more resources. Credentials Refer to [Trello credentials](../../credentials/trello/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Attachment - Create a new attachment for a card - Delete an attachment - Get the data of an attachment - Returns all attachments for the card - Board - Create a new board - Delete a board - Get the data of a board - Update a board - Board Member - Add - Get All - Invite - Remove - Card - Create a new card - Delete a card - Get the data of a card - Update a card - Card Comment - Create a comment on a card - Delete a comment from a card - Update a comment on a card - Checklist - Create a checklist item - Create a new checklist - Delete a checklist - Delete a checklist item - Get the data of a checklist - Returns all checklists for the card - Get a specific checklist on a card - Get the completed checklist items on a card - Update an item in a checklist on a card - Label - Add a label to a card. - Create a new label - Delete a label - Get the data of a label - Returns all labels for the board - Remove a label from a card. 
- Update a label. - List - Archive/Unarchive a list - Create a new list - Get the data of a list - Get all the lists - Get all the cards in a list - Update a list ## Templates and examples **RSS Feed News Processing and Distribution Workflow** by PollupAI [View template details](https://n8n.io/workflows/2785-rss-feed-news-processing-and-distribution-workflow/) **Process Shopify new orders with Zoho CRM and Harvest** by Lorena [View template details](https://n8n.io/workflows/1206-process-shopify-new-orders-with-zoho-crm-and-harvest/) **Sync Google Calendar tasks to Trello every day** by Angel Menendez [View template details](https://n8n.io/workflows/1118-sync-google-calendar-tasks-to-trello-every-day/) [Browse Trello integration templates](https://n8n.io/integrations/trello/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Find the List ID 1. Open the Trello board that contains the list. 1. If the list doesn't have any cards, add a card to the list. 1. Open the card, add `.json` at the end of the URL, and press Enter. 1. In the JSON file, you will see a field called `idList`. 1. Copy the contents of the `idList` field and paste it in the **List ID** field in n8n. # Twake node Use the Twake node to automate work in Twake, and integrate Twake with other applications. n8n supports sending messages with Twake. On this page, you'll find a list of operations the Twake node supports and links to more resources. 
Credentials Refer to [Twake credentials](../../credentials/twake/) for guidance on setting up authentication. ## Operations - Message - Send a message ## Templates and examples [Browse Twake integration templates](https://n8n.io/integrations/twake/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Twilio node Use the Twilio node to automate work in Twilio, and integrate Twilio with other applications. n8n supports sending MMS/SMS and WhatsApp messages with Twilio. On this page, you'll find a list of operations the Twilio node supports and links to more resources. Credentials Refer to [Twilio credentials](../../credentials/twilio/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - SMS - Send SMS/MMS/WhatsApp message - Call - Make a phone call using text-to-speech to say a message ## Templates and examples **Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI** by Jimleuk [View template details](https://n8n.io/workflows/2342-handling-appointment-leads-and-follow-up-with-twilio-calcom-and-ai/) **Automate Lead Qualification with RetellAI Phone Agent, OpenAI GPT & Google Sheet** by Dr. 
Firas [View template details](https://n8n.io/workflows/3912-automate-lead-qualification-with-retellai-phone-agent-openai-gpt-and-google-sheet/) **Enhance Customer Chat by Buffering Messages with Twilio and Redis** by Jimleuk [View template details](https://n8n.io/workflows/2346-enhance-customer-chat-by-buffering-messages-with-twilio-and-redis/) [Browse Twilio integration templates](https://n8n.io/integrations/twilio/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Twilio's documentation](https://www.twilio.com/docs/usage/api) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Twist node Use the Twist node to automate work in Twist, and integrate Twist with other applications. n8n has built-in support for a wide range of Twist features, including creating conversations in a channel, as well as creating and deleting comments on a thread. On this page, you'll find a list of operations the Twist node supports and links to more resources. Credentials Refer to [Twist credentials](../../credentials/twist/) for guidance on setting up authentication. 
## Operations - Channel - Archive a channel - Initiates a public or private channel-based conversation - Delete a channel - Get information about a channel - Get all channels - Unarchive a channel - Update a channel - Comment - Create a new comment to a thread - Delete a comment - Get information about a comment - Get all comments - Update a comment - Message Conversation - Create a message in a conversation - Delete a message in a conversation - Get a message in a conversation - Get all messages in a conversation - Update a message in a conversation - Thread - Create a new thread in a channel - Delete a thread - Get information about a thread - Get all threads - Update a thread ## Templates and examples [Browse Twist integration templates](https://n8n.io/integrations/twist/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Get the User ID To get the User ID for a user: 1. Open the **Team** tab. 1. Select a user's avatar. 1. Copy the string of characters located after `/u/` in your Twist URL. This string is the User ID. For example, if the URL is `https://twist.com/a/4qw45/people/u/475370` the User ID is `475370`. # X (Formerly Twitter) node Use the X node to automate work in X and integrate X with other applications. n8n has built-in support for a wide range of X features, including creating direct messages and deleting, searching, liking, and retweeting a tweet. 
On this page, you'll find a list of operations the X node supports and links to more resources. Credentials Refer to [X credentials](../../credentials/twitter/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Direct Message - Create a direct message - Tweet - Create or reply a tweet - Delete a tweet - Search tweets - Like a tweet - Retweet a tweet - User - Get a user - List - Add a member to a list ## Templates and examples **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI-Powered Social Media Content Generator & Publisher** by Amjid Ali [View template details](https://n8n.io/workflows/2950-ai-powered-social-media-content-generator-and-publisher/) **✨🩷Automated Social Media Content Publishing Factory + System Prompt Composition** by Joseph LePage [View template details](https://n8n.io/workflows/3135-automated-social-media-content-publishing-factory-system-prompt-composition/) [Browse X (Formerly Twitter) integration templates](https://n8n.io/integrations/twitter/), or [search all templates](https://n8n.io/workflows/) # Unleashed Software node Use the Unleashed Software node to automate work in Unleashed Software, and integrate Unleashed Software with other applications. n8n has built-in support for a wide range of Unleashed Software features, including getting sales orders and stock on hand. On this page, you'll find a list of operations the Unleashed Software node supports and links to more resources. 
Credentials Refer to [Unleashed Software credentials](../../credentials/unleashedsoftware/) for guidance on setting up authentication. ## Operations - Sales Order - Get all sales orders - Stock On Hand - Get a stock on hand - Get all stocks on hand ## Templates and examples [Browse Unleashed Software integration templates](https://n8n.io/integrations/unleashed-software/), or [search all templates](https://n8n.io/workflows/) # UpLead node Use the UpLead node to automate work in UpLead, and integrate UpLead with other applications. n8n supports several UpLead operations, including getting company information. On this page, you'll find a list of operations the UpLead node supports and links to more resources. Credentials Refer to [UpLead credentials](../../credentials/uplead/) for guidance on setting up authentication. ## Operations - Company - Enrich - Person - Enrich ## Templates and examples [Browse UpLead integration templates](https://n8n.io/integrations/uplead/), or [search all templates](https://n8n.io/workflows/) # uProc node Use the uProc node to automate work in uProc, and integrate uProc with other applications. n8n has built-in support for a wide range of uProc features, including getting advanced human audio file, communication data, company, finance and product information. On this page, you'll find a list of operations the uProc node supports and links to more resources. Credentials Refer to [uProc credentials](../../credentials/uproc/) for guidance on setting up authentication. 
## Operations ### Audio - Get advanced human audio file by provided text and language - Get an audio file by provided text and language ### Communication - Discover if a domain has a social network presence - Discover if an email is valid, hard bounce, soft bounce, spam-trap, free, temporary, and recipient exists - Discover if the email recipient exists, returning email status - Check if an email domain has an SMTP server to receive emails - Discover if the email has a social network presence - Check if an email has a valid format - Check if an email domain belongs to a disposable email service - Check if email belongs to free service provider like Gmail - Check if email is catchall - Discover if an email exists in the Robinson list (only Spain) - Check if email belongs to a system or role-based account - Check if an email is a spam trap - Discover if an IMEI number has a valid format - Check if a LinkedIn profile is a first-degree contact - Discover if mobile phone number exists in network operator, with worldwide coverage - Discover if a mobile phone number has a valid format with worldwide coverage - Discover if a mobile phone number has a valid format (only Spain) - Discover if a mobile phone number has a valid prefix, with worldwide coverage - Discover if a Spanish mobile phone number has a valid prefix - Discover if a mobile number is switched on to call it later, with worldwide coverage - Discover if a mobile number can receive SMS with worldwide coverage - Discover if a phone (landline or mobile) exists in a Robinson list (only Spain) - Discover if a landline or mobile number has a valid prefix - Discover if a landline phone number is valid, with Spain coverage - Allows discovering if landline number has a good international format, depending on the country - Discover if a landline phone number prefix exists, with worldwide coverage - Clean a phone removing non allowed characters - Allows getting country code of a mobile phone number with international 
format - Allows getting a domain from an email - Discover an email by company website or domain and prospect's first-name and last-name - Check if an email is personal or generic - Get emails list found on the internet by domain or URI - Get an emails list found on the internet by non-free email - Get emails list found inside the website by domain or URI - Get three first web references of an email published on the internet - Allows you to fix the email domain of those misspelled emails - Fix the international prefix of a phone based on the ISO code of a country - Get GDPR compliant emails list by domain for your Email Marketing campaigns in Europe - Discover if mobile exist using real-time HLR query - Get personal email by social network profile - Get portability data about a landline or mobile number, only for Spain - Extract results from a LinkedIn search (employees in a company) - Get members in a LinkedIn group - Get 'Search LinkedIn Contacts' URL - Extract the last 80 connections from your LinkedIn profile - Extract the last 80 invitations sent from your LinkedIn - Get users who comment on a post on LinkedIn - Get users who like a post on LinkedIn - Extract a LinkedIn profile - Extract results from a LinkedIn search (profiles) - Extract last profiles that have published content on LinkedIn by specific keywords - Discover if mobile exist using real-time HLR query, as well as portability and roaming data - Get existence, portability, and roaming of a mobile phone using MNP query - Discover if mobile or landline prefix exists in Spain - Allows normalizing email address, removing non allowed characters - Allows normalizing a mobile phone, removing non-allowed characters - Parse phone number in multiple fields and verify format and prefix validity - Allows getting country prefix number by country code - Discover an email by company website or domain and prospect's first-name and last-name - This tool parses a social URI address and extracts any available 
indicators - Search all social networks by domain, parses all found URLs, and returns social networks data - Discover if a domain or a website has social activity and returns all social network profiles found - Discover if an email has social activity, and get all social network profiles found - Discover if a mobile phone has social activity, and get all social network profiles found - Get web references for an email published on the internet - Send a custom message invitation to a non connected LinkedIn profile - Send a custom email to a recipient - Send a custom SMS to a recipient with worldwide coverage - Send a custom invitation message if a profile is connected or a custom message otherwise - Visits a profile to show interest and get profile views in return from contact, increasing your LinkedIn network - Send a custom private message to a connected LinkedIn profile - Get an email by contact's LinkedIn profile URI - Discover an email by company's name and prospect's full name - Discover an email by company's website or domain and prospect's full name - Get email by first name, last name, and company - Get parsed and validated phone ### Company - Discover if a CIF card number is valid - Check if a company is a debtor by TaxID - Check if the ISIN number is valid - Check if the SS number is valid, only for Spain - Identify and classify a prospecting role in detecting the right area and seniority to filter later - Get a company's contact, social, and technology data by domain - Get a company's contact, social, and technology data by email - Get a company's data by CIF - Get a company's data by DUNS - Get a company's data by domain - Get a company's data by email - Get a company's data by IP address - Get a company's data by name - Get a company's data by phone number - Get a company's data by social networks URI (LinkedIn, Twitter) - Get a company's name by company domain - Get professional data of a decision-maker by company name/domain and area - Discover more 
suitable decision-maker using search engines (Bing) by company name and area - Get professional emails of decision-makers by company domain and area - Discover up to ten decision-makers using search engines (Bing) by company name and area - Get a company's domain by company name - Get employees by company name or domain, area, seniority, and country - Get a company's Facebook profile by name without manually searching on Google or Facebook - Get geocoded company data by IP address - Get a company's LinkedIn profile by name without manually searching on Google or LinkedIn - Allows normalizing a CIF number, removing non-allowed characters - Get a company's phone by company domain - Get a company's sales data by a company's DUNS number - Get a company's sales data by a company's domain name - Get a company's sales data by a company's name - Get a company's sales data by a company's tax ID (CIF) - Get a company's Twitter profile by name without manually searching on Google or Twitter - Get decision maker by search engine - Get decision makers by search engine - Get Facebook URI by company's domain - Get GitHub URI by company's domain - Get Instagram URI by company's domain - Get LinkedIn URI by company's domain - Get Pinterest URI by company's domain - Get Twitter URI by company's domain - Get YouTube URI by company's domain ### Finance - Check if crypto wallet is valid - Discover if a BIC number has a valid format - Discover if an account number has a valid format - Check if credit card number checksum is valid - Discover if an IBAN account number has a valid format - Discover if an ISO currency code is valid - Check if a TIN exists in Europe - Convert amount between supported currencies and an exchange date - Get credit card type - Get multiple ISO currency codes by a country name - Get all ISO currency by an IP address - Get multiple ISO currency codes by a country ISO code - Get ISO currency code by IP address - Get ISO currency code by a currency ISO code - Get 
ISO currency code by an ISO country code - Get ISO currency code by a country name - Get related European TIN in Europe - Get IBAN by account number of the country - Get to search data bank information by IBAN account number - Get country VAT by address - Get country VAT by coordinates - Get Swift code lookup - Get VAT by IP address - Get VAT value by country ISO code - Get VAT by phone number, with worldwide coverage - Get VAT by zip code ### Geographical - Check if a country's ISO code exists - Discover if the distance between two coordinates is equal to another - Discover if the distance (kilometers) between two coordinates is greater than the given input - Discover if the distance (kilometers) between two coordinates is greater or equal to the given input - Discover if the distance(kilometers) between two coordinates is lower than the given input - Check if an address exists by a partial address search - Check if a house number exists by a partial address search - Check if coordinates have a valid format - Discover if a zip code number prefix exists (only for Spain) - Discover if a zip code number has a valid format (only for Spain) - Get cartesian coordinates(X, Y, Z/WGS84) by Latitude and Longitude - Get location by parameters - Get multiple cities by phone prefix (only for Spain) - Get multiple cities by partial initial text (only for Spain) - Get multiple cities by zip code prefix (only for Spain) - Get a city from IP - City search by partial name (only for Spain) - Discover the city name by a local phone number (only for Spain) - Discover the city name by the zip code (only for Spain) - Discover the community name from a zip code (only for Spain) - Discover latitude and longitude coordinates of an IP address - Discover latitude and longitude coordinates of a postal address - Get multiple country names by currency ISO code - Get multiple countries by ISO code - Get multiple country names by initial name - Get country name by currency ISO code - Get country 
name by IP address - Get country name by its ISO code - Get country by a prefix - Get country name by phone number, with worldwide coverage - Get Alpha-2 code by a country prefix or a name - Get decimal coordinates (degrees, minutes, and seconds) by latitude and longitude - Return straight-line distance (kilometers) between two addresses - Return straight-line distance (kilometers) between two GPS coordinates (latitude and longitude) - Return straight-line distance (kilometers) between two IP addresses - Return straight-line distance (kilometers) between two landline phones, using city and province of every phone - Return straight-line distance (kilometers) between two zip codes, using city and province of every zip code - Get an exact address by a partial address search - Discover geographical, company, timezone, and reputation data by IPv4 address - Discover the city name, zip code, province, country, latitude, and longitude from an IPv4 or IPv6 address and geocode it - Parse postal address into separated fields, getting an improved resolution - Discover locale data (currency, language) by IPv4 or IPv6 address - Discover the city name, zip code, province, or country by latitude and longitude - Discover the city name, zip code, province, country, latitude, and longitude from an IPv4 or IPv6 address - Discover the city and the province from a landline phone number (only for Spain) - Discover location data by name - Discover the city and the province from a zip code number (only for Spain) - Get the most relevant locations by name - Get the most relevant locations by name, category, location, and radius - Get multiple personal names by a prefix - Discover network data by IPv4 or IPv6 address - Normalize an address by removing non-allowed characters - Normalize a city by removing non-allowed characters - Normalize a country by removing non-allowed characters - Normalize a province by removing non-allowed characters - Normalize a
zip code by removing non-allowed characters - Get normalized country - Parse postal address into separated fields, getting a basic resolution - Discover the province name from an IP address - Get the first province by a name prefix (only for Spain) - Discover the province name from a landline phone number (only for Spain) - Discover the province name from a zip code number (only for Spain) - Get a province list by a name prefix (only for Spain) - Get a province list by a phone prefix (only for Spain) - Get a province list by a zip code prefix (only for Spain) - Discover reputation by IPv4 or IPv6 address - Return driving routing time, distance, fuel consumption, and cost between two addresses - Return driving routing time, distance, fuel consumption, and cost between two GPS coordinates - Return driving routing time, distance, fuel consumption, and cost between two IP addresses - Return driving routing time, distance, fuel consumption, and cost between two landline phones, using city and province of every phone (only for Spain) - Return driving routing time, distance, fuel consumption, and cost between two zip codes, using city and province of every zip code - Discover date-time data by IPv4 or IPv6 address - Get USNG coordinates by latitude and longitude - Get UTM coordinates by latitude and longitude - Discover the zip code from an IP address - Get the first zip code by prefix (only for Spain) - Get multiple zip codes by prefix, with worldwide coverage - Get time data by coordinates - Get time data by postal address ### Image - Get QR code decoded content by an image URL - Discover all geographical and technical EXIF metadata present in a photographic JPEG image - Get an encoded barcode by number and a required standard - Get QR code encoded by a text - Generate a new image by URL and text - Discover the logo (favicon) used in a domain - Generate a screenshot from a URL using the Chrome browser - Get OCR text from an image ### Internet - Check
if a domain exists - Check if a domain has a DNS record - Check if a domain has the given IP address assigned - Check if a domain has an MX record - Check if a domain has a valid SSL certificate - Check if a domain has a valid format - Check if a domain accepts all emails, existing or not - Check if a domain is a free service domain provider - Check if a domain is temporary or not - Discover if a computer is switched on - Discover if a service on a port is available - Check if a URL contains a string or regular expression - Check if a URL exists - Check that a URL has a valid format - Get full SSL certificate data by a domain (or website) and monitor your certificate status - Get feed entries by domain - Get last feed entry by domain - Get text data from web, PDF, or image, with the option to filter elements by regular expressions or field names - Decode URL to recover the original - Get the valid, existing, and default URL when accessing a domain using a web browser - Get the long version of a shortened URL - Discover device features by a user agent - Get the network name of an IP address - Get the domain record by its type - Encode URL to avoid problems - Copy a file from one URL to another URL - Fix an IP address to the right format - Get the IPv4 address linked with a domain - Convert a number to an IP address - Get the known ISP name of an email domain name - Convert an IP address to numeric notation - Scan a host and return the most commonly open ports - Obtain a list with multiple results from a website - Obtain the content of a website - Decode URL into multiple fields - Generate a PDF file from a URL using the Chrome browser - Get the root domain of any web address, removing unneeded characters - Generate shareable URIs to use on social networks and email using a content URI and a text - Get data from an existing table in an HTML page or a PDF file - Discover client and server technologies used in a domain - Discover client and server technologies used in web pages - 
Analyze a URL's health status regarding SSL, broken links, conflicting HTTP links with SSL, and more - Get website visits and rank of any domain - Get a domain's WHOIS data by fields - Get WHOIS data fields by a provided IP address ### Personal - Check if age is between two numbers - Check if a date returns an age between 20 and 29 - Check if a date returns an age between 40 and 49 - Check if age is greater than another - Check if a birth date returns an age greater than 64 - Check if a birth date belongs to an adult (18 years for Spain) - Check if age is lower than another - Check if age is lower than or equal to another - Check if ages are equal - Discover if a date is between two dates - Discover if a date is greater - Discover if a date is greater or equal - Discover if a date belongs to a leap year - Discover if a date is lower - Discover if a date is lower or equal - Discover if a date has a valid format - Discover if a gender value is valid - Discover if an NIE card number is valid - Discover if a NIF card number is valid - Check if a personal name exists in the INE data source (only for Spain) - Check if a name contains accepted characters - Discover if a NIF exists in the Robinson list (only for Spain) - Check if a surname contains accepted characters - Check if a personal surname appears in the INE data source (only for Spain) - Discover if a DNI card number is valid - Discover the age of a birth date - Discover the age range of a person by birth date - Get the difference between two dates - Discover the gender of a person by email - Discover the gender of a person or company by name - Get LinkedIn employee profile URI by business email - Get LinkedIn employee profile URI by first name, last name, and company - Discover the letter of a DNI card number - Get the first personal name matching by prefix and gender from the INE data source (only for Spain) - Get LinkedIn URI by email - Get LinkedIn URI by phone - Normalize a DNI number by removing non-allowed characters - 
Normalize an NIE number by removing non-allowed characters - Normalize a name by removing non-allowed characters - Normalize a surname - Get parsed date-time - Normalize a full name, fixing abbreviations, sorting if necessary, and returning first name, last name, and gender - Get a prospect's contact data and the company's location and social data by email - Get contact, location, and social data by email and company name and location - Get personal and social data by social profile - Get personal data by email - Get personal data by first name, last name, company, and location - Get personal data by mobile - Get personal data by social network profile - Generate random fake data - Get the first personal surname matching by prefix from the INE data source (only for Spain) - Get personal surname matching by prefix from the INE data source (only for Spain) - Get Twitter profile by first name, last name, and company - Get XING profile by first name, last name, and company - Add a contact email to a person list ### Product - Check if an ASIN code exists on the Amazon Marketplace - Check if an ASIN code has a valid format - Check if an EAN code exists on the Amazon Marketplace - Check if an EAN barcode has a valid format - Check if an EAN barcode of 13 digits has a valid format - Check if an EAN barcode of 14 digits has a valid format - Check if an EAN barcode of 18 digits has a valid format - Check if an EAN barcode of 8 digits has a valid format - Check if a GTIN barcode has a valid format - Check if a GTIN barcode of 13 digits has a valid format - Check if a GTIN barcode of 14 digits has a valid format - Check if a GTIN barcode of 8 digits has a valid format - Check if a VIN number is valid - Check if an ISBN book exists - Check if an ISBN10/13 code has a valid format - Check if an ISBN10 code has a valid format - Check if an ISBN13 code has a valid format - Check if a UPC exists - Check if a UPC has a valid format - Get ASIN by EAN - Get a book
by author's surname - Get all publications by category - Get book data by an editor's name - Get book or publication data by a 10- or 13-digit ISBN code - Get book data by title - Get books by author's surname - Get all books by category - Get all books by editor - Get all books by title - Get EAN code by ASIN code - Get product data by UPC on the Amazon Marketplace - Get ISBN10 code by ISBN13 code - Get ISBN13 code by ISBN10 code - Get data by VIN number ### Security - Check if a Luhn number is valid - Check if a password is strong - Check if a UUID number is valid - Get blacklists for a domain - Get blacklists for an IP address ### Text - Check if a string contains only alphabetic characters - Check if a string is alphanumeric - Check if a string is boolean - Check if the largest item in a list matches the provided item - Check if an IPv4 or IPv6 address has a valid format - Check if an IPv4 address has a valid format - Check if an IPv6 address has a valid format - Check if the length of a list is between two quantities - Check if the length of a list equals a specified quantity - Check if the length of a list is greater than or equal to a certain amount - Check if the length of a list is lower than a certain amount - Check if the list contains a specific item - Check if the list ends with a specific element - Check if a list is sorted in ascending order - Check if the list starts with a specific element - Check if the smallest element in a list matches the provided element - Check if a string contains only numbers - Check if a string contains a character - Check if a string ends with a character - Check if a string has no content - Check if a string contains random characters - Check if a string contains a value that matches a regular expression - Check if the length of a string is between two numbers - Check if the length of a string is equal to a number - Check if the length of a string is greater than a number - Check if the length of a string is greater or equal to a number 
- Check if the length of a string is lower than a number - Check if the length of a string is lower or equal to a number - Check if a string starts with a character - Check if a string contains only lowercase characters - Check if a string contains only uppercase characters - Check if a list consists of unique elements - Check if the supplied values form a valid list of elements - Check if the number of words in a sentence is between two determined quantities - Check if the number of words in a sentence equals a certain amount - Check if the number of words in a sentence is greater than a certain amount - Check if the number of words in a sentence is greater than or equal to a certain amount - Check if the number of words in a sentence is lower than a certain amount - Check if the number of words present in a sentence is less than or equal to a quantity - Convert a string to a Base64 encoded value - Discover banned English words in an email body or subject - Get field names by analyzing the field value provided - Get HTML code from Markdown - Get Markdown text from HTML - Get text without HTML - Get spin string - Format a string using a format pattern - Generate a random string using a regular expression as a pattern - Return the largest item in a list - Return the smallest item in a list - Convert a string to lowercase - Convert a string to an MD5 encoded value - Merge two strings - Normalize a string depending on the field name - Analyze a string and return all emails, phones, zip codes, and links - Convert a string to an SHA encoded value - Analyze an English text with emojis and detect sentiment - Return a list sorted in ascending order - Split a value into two parts and join them using a separator from the original string - Split a value into two parts using a separator from the original string - Get the length of a string - Look up a string among multiple values by fuzzy logic and regex patterns - Clean abuse words from a string - Replace the first value found in a string with another - Replace all values found in a string with another - Translate a
text into any language - Return a single list with no repeating elements - Convert all letters to uppercase - Count total words in a text ## Templates and examples **Scrape and store data from multiple website pages** by Miquel Colomer [View template details](https://n8n.io/workflows/1073-scrape-and-store-data-from-multiple-website-pages/) **Create a website screenshot and send via Telegram Channel** by Harshil Agrawal [View template details](https://n8n.io/workflows/858-create-a-website-screenshot-and-send-via-telegram-channel/) **Monitor SSL certificate of any domain with uProc** by Miquel Colomer [View template details](https://n8n.io/workflows/861-monitor-ssl-certificate-of-any-domain-with-uproc/) [Browse uProc integration templates](https://n8n.io/integrations/uproc/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # UptimeRobot node Use the UptimeRobot node to automate work in UptimeRobot, and integrate UptimeRobot with other applications. n8n has built-in support for a wide range of UptimeRobot features, including creating and deleting alerts, as well as getting account details. On this page, you'll find a list of operations the UptimeRobot node supports and links to more resources. Credentials Refer to [UptimeRobot credentials](../../credentials/uptimerobot/) for guidance on setting up authentication. 
## Operations - Account - Get account details - Alert Contact - Create an alert contact - Delete an alert contact - Get an alert contact - Get all alert contacts - Update an alert contact - Maintenance Window - Create a maintenance window - Delete a maintenance window - Get a maintenance window - Get all maintenance windows - Update a maintenance window - Monitor - Create a monitor - Delete a monitor - Get a monitor - Get all monitors - Reset a monitor - Update a monitor - Public Status Page - Create a public status page - Delete a public status page - Get a public status page - Get all public status pages ## Templates and examples **Create, update, and get a monitor using UptimeRobot** by Harshil Agrawal [View template details](https://n8n.io/workflows/1112-create-update-and-get-a-monitor-using-uptimerobot/) **Website Downtime Alert via LINE + Supabase Log** by sayamol thiramonpaphakul [View template details](https://n8n.io/workflows/4379-website-downtime-alert-via-line-supabase-log/) **Create, Update Alerts 🛠️ UptimeRobot Tool MCP Server 💪 all 21 operations** by David Ashby [View template details](https://n8n.io/workflows/5075-create-update-alerts-uptimerobot-tool-mcp-server-all-21-operations/) [Browse UptimeRobot integration templates](https://n8n.io/integrations/uptimerobot/), or [search all templates](https://n8n.io/workflows/) # urlscan.io node Use the urlscan.io node to automate work in urlscan.io, and integrate urlscan.io with other applications. n8n has built-in support for a wide range of urlscan.io features, including getting and performing scans. On this page, you'll find a list of operations the urlscan.io node supports and links to more resources. Credentials Refer to [urlscan.io credentials](../../credentials/urlscanio/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Scan - Get - Get All - Perform ## Templates and examples **Phishing Analysis - URLScan.io and VirusTotal** by n8n Team [View template details](https://n8n.io/workflows/1992-phishing-analysis-urlscanio-and-virustotal/) **Scan URLs with urlscan.io and Send Results via Gmail** by Calistus Christian [View template details](https://n8n.io/workflows/6946-scan-urls-with-urlscanio-and-send-results-via-gmail/) **Perform, Get Scans 🛠️ urlscan.io Tool MCP Server 💪 all 3 operations** by David Ashby [View template details](https://n8n.io/workflows/5065-perform-get-scans-urlscanio-tool-mcp-server-all-3-operations/) [Browse urlscan.io integration templates](https://n8n.io/integrations/urlscanio/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Venafi TLS Protect Cloud node Use the Venafi TLS Protect Cloud node to automate work in Venafi TLS Protect Cloud, and integrate Venafi TLS Protect Cloud with other applications. n8n has built-in support for a wide range of Venafi TLS Protect Cloud features, including deleting and downloading certificates, as well as creating certificate requests. On this page, you'll find a list of operations the Venafi TLS Protect Cloud node supports and links to more resources. 
Credentials Refer to [Venafi TLS Protect Cloud credentials](../../credentials/venafitlsprotectcloud/) for guidance on setting up authentication. ## Operations - Certificate - Delete - Download - Get - Get Many - Renew - Certificate Request - Create - Get - Get Many ## Templates and examples [Browse Venafi TLS Protect Cloud integration templates](https://n8n.io/integrations/venafi-tls-protect-cloud/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Venafi's REST API documentation](https://docs.venafi.cloud/api/vaas-rest-api/) for more information on this service. n8n also provides: - A [trigger node](../../trigger-nodes/n8n-nodes-base.venafitlsprotectcloudtrigger/) for Venafi TLS Protect Cloud. - A [node](../n8n-nodes-base.venafitlsprotectdatacenter/) for Venafi TLS Protect Datacenter. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Venafi TLS Protect Datacenter node Use the Venafi TLS Protect Datacenter node to automate work in Venafi TLS Protect Datacenter, and integrate Venafi TLS Protect Datacenter with other applications. n8n has built-in support for a wide range of Venafi TLS Protect Datacenter features, including creating, deleting, and getting certificates. On this page, you'll find a list of operations the Venafi TLS Protect Datacenter node supports and links to more resources. Credentials Refer to [Venafi TLS Protect Datacenter credentials](../../credentials/venafitlsprotectdatacenter/) for guidance on setting up authentication. 
## Operations - Certificate - Create - Delete - Download - Get - Get Many - Renew - Policy - Get ## Templates and examples [Browse Venafi TLS Protect Datacenter integration templates](https://n8n.io/integrations/venafi-tls-protect-datacenter/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n also provides: - A [node](../n8n-nodes-base.venafitlsprotectcloud/) and [trigger](../../trigger-nodes/n8n-nodes-base.venafitlsprotectcloudtrigger/) node for Venafi TLS Protect Cloud. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Vero node Use the Vero node to automate work in Vero, and integrate Vero with other applications. n8n has built-in support for a wide range of Vero features, including creating and deleting users. On this page, you'll find a list of operations the Vero node supports and links to more resources. Credentials Refer to [Vero credentials](../../credentials/vero/) for guidance on setting up authentication. ## Operations - User - Create or update a user profile - Change a user's identifier - Unsubscribe a user - Resubscribe a user - Delete a user - Add a tag to a user's profile - Remove a tag from a user's profile - Event - Track an event for a specific customer ## Templates and examples [Browse Vero integration templates](https://n8n.io/integrations/vero/), or [search all templates](https://n8n.io/workflows/) # Vonage node Use the Vonage node to automate work in Vonage, and integrate Vonage with other applications. 
n8n supports sending SMS with Vonage. On this page, you'll find a list of operations the Vonage node supports and links to more resources. Credentials Refer to [Vonage credentials](../../credentials/vonage/) for guidance on setting up authentication. ## Operations - SMS - Send ## Templates and examples **Receive messages from a topic via Kafka and send an SMS** by Harshil Agrawal [View template details](https://n8n.io/workflows/814-receive-messages-from-a-topic-via-kafka-and-send-an-sms/) **Receive messages from a queue via RabbitMQ and send an SMS** by Harshil Agrawal [View template details](https://n8n.io/workflows/845-receive-messages-from-a-queue-via-rabbitmq-and-send-an-sms/) **Get data from Hacker News and send to Airtable or via SMS** by isa024787bel [View template details](https://n8n.io/workflows/888-get-data-from-hacker-news-and-send-to-airtable-or-via-sms/) [Browse Vonage integration templates](https://n8n.io/integrations/vonage/), or [search all templates](https://n8n.io/workflows/) # Webflow node Use the Webflow node to automate work in Webflow, and integrate Webflow with other applications. n8n has built-in support for a wide range of Webflow features, including creating, updating, deleting, and getting items. On this page, you'll find a list of operations the Webflow node supports and links to more resources. Credentials Refer to [Webflow credentials](../../credentials/webflow/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
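When a node such as Webflow runs as an AI tool, you can leave an individual parameter for the agent to fill by using an n8n expression with the `$fromAI` function. A minimal sketch (the key and description here are hypothetical, chosen only for illustration):

```
{{ $fromAI('itemName', 'The name of the item the user wants to create', 'string') }}
```

The agent then supplies that value at run time based on the conversation, while the rest of the node's parameters stay fixed.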
## Operations - Item - Create - Delete - Get - Get All - Update ## Templates and examples **Enrich FAQ sections on your website pages at scale with AI** by Polina Medvedieva [View template details](https://n8n.io/workflows/2434-enrich-faq-sections-on-your-website-pages-at-scale-with-ai/) **Sync blog posts from Notion to Webflow** by Giovanni Ruggieri [View template details](https://n8n.io/workflows/2293-sync-blog-posts-from-notion-to-webflow/) **Real-time lead routing in Webflow** by Lucas Perret [View template details](https://n8n.io/workflows/2033-real-time-lead-routing-in-webflow/) [Browse Webflow integration templates](https://n8n.io/integrations/webflow/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Wekan node Use the Wekan node to automate work in Wekan, and integrate Wekan with other applications. n8n has built-in support for a wide range of Wekan features, including creating, updating, deleting, and getting boards and cards. On this page, you'll find a list of operations the Wekan node supports and links to more resources. Credentials Refer to [Wekan credentials](../../credentials/wekan/) for guidance on setting up authentication. 
## Operations - Board - Create a new board - Delete a board - Get the data of a board - Get all user boards - Card - Create a new card - Delete a card - Get a card - Get all cards - Update a card - Card Comment - Create a comment on a card - Delete a comment from a card - Get a card comment - Get all card comments - Checklist - Create a new checklist - Delete a checklist - Get the data of a checklist - Get all checklists for a card - Checklist Item - Delete a checklist item - Get a checklist item - Update a checklist item - List - Create a new list - Delete a list - Get the data of a list - Get all board lists ## Templates and examples [Browse Wekan integration templates](https://n8n.io/integrations/wekan/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Load all the parameters for the node To load all the parameters (for example, Author ID), you need to give the user admin permissions. Refer to the [Wekan documentation](https://github.com/wekan/wekan/wiki/Features#members-click-member-initials-or-avatar--permissions-adminnormalcomment-only) to learn how to change permissions. # Wise node Use the Wise node to automate work in Wise, and integrate Wise with other applications. n8n has built-in support for a wide range of Wise features, including getting profiles, exchange rates, and recipients. On this page, you'll find a list of operations the Wise node supports and links to more resources. 
Credentials Refer to [Wise credentials](../../credentials/wise/) for guidance on setting up authentication. ## Operations - Account - Retrieve balances for all account currencies of this user - Retrieve currencies in the borderless account of this user - Retrieve the statement for the borderless account of this user - Exchange Rate - Get - Profile - Get - Get All - Recipient - Get All - Quote - Create - Get - Transfer - Create - Delete - Execute - Get - Get All ## Templates and examples [Browse Wise integration templates](https://n8n.io/integrations/wise/), or [search all templates](https://n8n.io/workflows/) # WooCommerce node Use the WooCommerce node to automate work in WooCommerce, and integrate WooCommerce with other applications. n8n has built-in support for a wide range of WooCommerce features, including creating and deleting customers, orders, and products. On this page, you'll find a list of operations the WooCommerce node supports and links to more resources. Credentials Refer to [WooCommerce credentials](../../credentials/woocommerce/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Customer - Create a customer - Delete a customer - Retrieve a customer - Retrieve all customers - Update a customer - Order - Create an order - Delete an order - Get an order - Get all orders - Update an order - Product - Create a product - Delete a product - Get a product - Get all products - Update a product ## Templates and examples **AI-powered WooCommerce Support-Agent** by Jan Oberhauser [View template details](https://n8n.io/workflows/2161-ai-powered-woocommerce-support-agent/) **Personal Shopper Chatbot for WooCommerce with RAG using Google Drive and openAI** by Davide Boizza [View template details](https://n8n.io/workflows/2784-personal-shopper-chatbot-for-woocommerce-with-rag-using-google-drive-and-openai/) **Create, update and get a product from WooCommerce** by Harshil Agrawal [View template details](https://n8n.io/workflows/847-create-update-and-get-a-product-from-woocommerce/) [Browse WooCommerce integration templates](https://n8n.io/integrations/woocommerce/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # WordPress node Use the WordPress node to automate work in WordPress, and integrate WordPress with other applications. n8n has built-in support for a wide range of WordPress features, including creating, updating, and getting posts and users. On this page, you'll find a list of operations the WordPress node supports and links to more resources. 
Credentials Refer to [WordPress credentials](../../credentials/wordpress/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Post - Create a post - Get a post - Get all posts - Update a post - Page - Create a page - Get a page - Get all pages - Update a page - User - Create a user - Get a user - Get all users - Update a user ## Templates and examples **Write a WordPress post with AI (starting from a few keywords)** by Giulio [View template details](https://n8n.io/workflows/2187-write-a-wordpress-post-with-ai-starting-from-a-few-keywords/) **🔍🛠️Generate SEO-Optimized WordPress Content with AI Powered Perplexity Research** by Joseph LePage [View template details](https://n8n.io/workflows/3291-generate-seo-optimized-wordpress-content-with-ai-powered-perplexity-research/) **Automate Content Generator for WordPress with DeepSeek R1** by Davide Boizza [View template details](https://n8n.io/workflows/2813-automate-content-generator-for-wordpress-with-deepseek-r1/) [Browse WordPress integration templates](https://n8n.io/integrations/wordpress/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
# Xero node Use the Xero node to automate work in Xero, and integrate Xero with other applications. n8n has built-in support for a wide range of Xero features, including creating, updating, and getting contacts and invoices. On this page, you'll find a list of operations the Xero node supports and links to more resources. Credentials Refer to [Xero credentials](../../credentials/xero/) for guidance on setting up authentication. ## Operations - Contact - Create a contact - Get a contact - Get all contacts - Update a contact - Invoice - Create an invoice - Get an invoice - Get all invoices - Update an invoice ## Templates and examples **Get invoices from Xero** by amudhan [View template details](https://n8n.io/workflows/543-get-invoices-from-xero/) **Integrate Xero with FileMaker using Webhooks** by Stathis Askaridis [View template details](https://n8n.io/workflows/2499-integrate-xero-with-filemaker-using-webhooks/) **Automate Invoice Processing with Gmail, OCR.space, Slack & Xero** by Abi Odedeyi [View template details](https://n8n.io/workflows/9905-automate-invoice-processing-with-gmail-ocrspace-slack-and-xero/) [Browse Xero integration templates](https://n8n.io/integrations/xero/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Xero's API documentation](https://developer.xero.com/documentation/api/accounting/overview) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
# Yourls node Use the Yourls node to automate work in Yourls, and integrate Yourls with other applications. n8n has built-in support for a wide range of Yourls features, including expanding and shortening URLs. On this page, you'll find a list of operations the Yourls node supports and links to more resources. Credentials Refer to [Yourls credentials](../../credentials/yourls/) for guidance on setting up authentication. ## Operations - URL - Expand a URL - Shorten a URL - Get stats about one short URL ## Templates and examples [Browse Yourls integration templates](https://n8n.io/integrations/yourls/), or [search all templates](https://n8n.io/workflows/) # YouTube node Use the YouTube node to automate work in YouTube, and integrate YouTube with other applications. n8n has built-in support for a wide range of YouTube features, including retrieving and updating channels, as well as creating and deleting playlists. On this page, you'll find a list of operations the YouTube node supports and links to more resources. Credentials Refer to [YouTube credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - Channel - Retrieve a channel - Retrieve all channels - Update a channel - Upload a channel banner - Playlist - Create a playlist - Delete a playlist - Get a playlist - Retrieve all playlists - Update a playlist - Playlist Item - Add an item to a playlist - Delete an item from a playlist - Get a playlist's item - Retrieve all playlist items - Video - Delete a video - Get a video - Retrieve all videos - Rate a video - Update a video - Upload a video - Video Category - Retrieve all video categories ## Templates and examples **Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram** by Dr. Firas [View template details](https://n8n.io/workflows/5338-generate-ai-viral-videos-with-seedance-and-upload-to-tiktok-youtube-and-instagram/) **Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube** by Davide Boizza [View template details](https://n8n.io/workflows/4846-generate-ai-videos-with-google-veo3-save-to-google-drive-and-upload-to-youtube/) **⚡AI-Powered YouTube Video Summarization & Analysis** by Joseph LePage [View template details](https://n8n.io/workflows/2679-ai-powered-youtube-video-summarization-and-analysis/) [Browse YouTube integration templates](https://n8n.io/integrations/youtube/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Zammad node Use the Zammad node to automate work in Zammad, and integrate Zammad with other applications. 
n8n has built-in support for a wide range of Zammad features, including creating, retrieving, and deleting groups and organizations. On this page, you'll find a list of operations the Zammad node supports and links to more resources. Credentials Refer to [Zammad credentials](../../credentials/zammad/) for guidance on setting up authentication. ## Operations - Group - Create - Delete - Get - Get many - Update - Organization - Create - Delete - Get - Get many - Update - Ticket - Create - Delete - Get - Get many - Update - User - Create - Delete - Get - Get many - Get self - Update ## Templates and examples **Update people through Zulip about open tickets in Zammad** by Ghazi Triki [View template details](https://n8n.io/workflows/1575-update-people-through-zulip-about-open-tickets-in-zammad/) **Export Zammad Objects (Users, Roles, Groups, Organizations) to Excel** by Sirhexalot [View template details](https://n8n.io/workflows/2596-export-zammad-objects-users-roles-groups-organizations-to-excel/) **Sync Entra User to Zammad User** by Sirhexalot [View template details](https://n8n.io/workflows/2587-sync-entra-user-to-zammad-user/) [Browse Zammad integration templates](https://n8n.io/integrations/zammad/), or [search all templates](https://n8n.io/workflows/) # Zendesk node Use the Zendesk node to automate work in Zendesk, and integrate Zendesk with other applications. n8n has built-in support for a wide range of Zendesk features, including creating and deleting tickets, users, and organizations. On this page, you'll find a list of operations the Zendesk node supports and links to more resources. Credentials Refer to [Zendesk credentials](../../credentials/zendesk/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Ticket - Create a ticket - Delete a ticket - Get a ticket - Get all tickets - Recover a suspended ticket - Update a ticket - Ticket Field - Get a ticket field - Get all system and custom ticket fields - User - Create a user - Delete a user - Get a user - Get all users - Get a user's organizations - Get data related to the user - Search users - Update a user - Organization - Create an organization - Delete an organization - Count organizations - Get an organization - Get all organizations - Get data related to the organization - Update an organization Tag Replacement Behavior When using the Zendesk node's "Update Ticket" operation and specifying the `Tag Names or IDs` field, the entire list of tags on the ticket **will be replaced**. Any tags not included in the update will be removed from the ticket due to how the Zendesk API processes tag updates by default. **To avoid accidental tag removal:** - First retrieve the ticket's tags and merge them with your new tags before updating. - Alternatively, use the HTTP Request node with Zendesk's `additional_tags` property to add tags without removing existing ones. - You can also call the ticket's `/tags` endpoint to add tags without replacing existing ones ([Zendesk tags endpoint documentation](https://developer.zendesk.com/api-reference/ticketing/ticket-management/tags/)). See the official documentation for details: [Adding tags to tickets without overwriting existing tags](https://developer.zendesk.com/documentation/ticketing/managing-tickets/adding-tags-to-tickets-without-overwriting-existing-tags/). 
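The retrieve-and-merge approach in the first bullet amounts to combining the ticket's current tags with the new ones before sending the update. A minimal sketch of that merge step, for example in a Code node using Python (the tag values here are hypothetical):

```python
def merge_tags(existing_tags, new_tags):
    """Combine a ticket's current tags with tags to add, preserving
    order and dropping duplicates, so the Update Ticket operation
    replaces the tag list with a superset instead of losing tags."""
    seen = set()
    merged = []
    for tag in list(existing_tags) + list(new_tags):
        if tag not in seen:
            seen.add(tag)
            merged.append(tag)
    return merged

# Tags retrieved from the ticket before updating (hypothetical values):
current = ["billing", "vip"]
# Tags you want to add in the update:
to_add = ["escalated", "vip"]

print(merge_tags(current, to_add))  # ['billing', 'vip', 'escalated']
```

Pass the merged list to the `Tag Names or IDs` field in the Update Ticket operation, so the replacement performed by the Zendesk API leaves the existing tags in place.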
## Templates and examples **Automate SIEM Alert Enrichment with MITRE ATT&CK, Qdrant & Zendesk in n8n** by Angel Menendez [View template details](https://n8n.io/workflows/2840-automate-siem-alert-enrichment-with-mitre-attandck-qdrant-and-zendesk-in-n8n/) **Sync Zendesk tickets with subsequent comments to Jira issues** by n8n Team [View template details](https://n8n.io/workflows/1833-sync-zendesk-tickets-with-subsequent-comments-to-jira-issues/) **Sync Zendesk tickets to Slack thread** by n8n Team [View template details](https://n8n.io/workflows/1820-sync-zendesk-tickets-to-slack-thread/) [Browse Zendesk integration templates](https://n8n.io/integrations/zendesk/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Zoho CRM node Use the Zoho CRM node to automate work in Zoho CRM, and integrate Zoho CRM with other applications. n8n has built-in support for a wide range of Zoho CRM features, including creating and deleting accounts, contacts, and deals. On this page, you'll find a list of operations the Zoho CRM node supports and links to more resources. Credentials Refer to [Zoho CRM credentials](../../credentials/zoho/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Account - Create an account - Create a new record, or update the current one if it already exists (upsert) - Delete an account - Get an account - Get all accounts - Update an account - Contact - Create a contact - Create a new record, or update the current one if it already exists (upsert) - Delete a contact - Get a contact - Get all contacts - Update a contact - Deal - Create a deal - Create a new record, or update the current one if it already exists (upsert) - Delete a deal - Get a deal - Get all deals - Update a deal - Invoice - Create an invoice - Create a new record, or update the current one if it already exists (upsert) - Delete an invoice - Get an invoice - Get all invoices - Update an invoice - Lead - Create a lead - Create a new record, or update the current one if it already exists (upsert) - Delete a lead - Get a lead - Get all leads - Get lead fields - Update a lead - Product - Create a product - Create a new record, or update the current one if it already exists (upsert) - Delete a product - Get a product - Get all products - Update a product - Purchase Order - Create a purchase order - Create a new record, or update the current one if it already exists (upsert) - Delete a purchase order - Get a purchase order - Get all purchase orders - Update a purchase order - Quote - Create a quote - Create a new record, or update the current one if it already exists (upsert) - Delete a quote - Get a quote - Get all quotes - Update a quote - Sales Order - Create a sales order - Create a new record, or update the current one if it already exists (upsert) - Delete a sales order - Get a sales order - Get all sales orders - Update a sales order - Vendor - Create a vendor - Create a new record, or update the current one if 
it already exists (upsert) - Delete a vendor - Get a vendor - Get all vendors - Update a vendor ## Templates and examples **Process Shopify new orders with Zoho CRM and Harvest** by Lorena [View template details](https://n8n.io/workflows/1206-process-shopify-new-orders-with-zoho-crm-and-harvest/) **Get all leads from Zoho CRM** by amudhan [View template details](https://n8n.io/workflows/552-get-all-leads-from-zoho-crm/) **Sync contacts two-way between Zoho CRM and KlickTipp** by KlickTipp [View template details](https://n8n.io/workflows/12679-sync-contacts-two-way-between-zoho-crm-and-klicktipp/) [Browse Zoho CRM integration templates](https://n8n.io/integrations/zoho-crm/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Zoom node Use the Zoom node to automate work in Zoom, and integrate Zoom with other applications. n8n has built-in support for a wide range of Zoom features, including creating, retrieving, deleting, and updating meetings. On this page, you'll find a list of operations the Zoom node supports and links to more resources. Credentials Refer to [Zoom credentials](../../credentials/zoom/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. 
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Meeting - Create a meeting - Delete a meeting - Retrieve a meeting - Retrieve all meetings - Update a meeting ## Templates and examples **Zoom AI Meeting Assistant creates mail summary, ClickUp tasks and follow-up call** by Friedemann Schuetz [View template details](https://n8n.io/workflows/2800-zoom-ai-meeting-assistant-creates-mail-summary-clickup-tasks-and-follow-up-call/) **Streamline Your Zoom Meetings with Secure, Automated Stripe Payments** by Emmanuel Bernard - n8n Expert Lausanne [View template details](https://n8n.io/workflows/2192-streamline-your-zoom-meetings-with-secure-automated-stripe-payments/) **Create Zoom meeting link from Google Calendar invite** by Jason Foster [View template details](https://n8n.io/workflows/1340-create-zoom-meeting-link-from-google-calendar-invite/) [Browse Zoom integration templates](https://n8n.io/integrations/zoom/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Zulip node Use the Zulip node to automate work in Zulip, and integrate Zulip with other applications. n8n has built-in support for a wide range of Zulip features, including creating, deleting, and getting users and streams, as well as sending messages. 
On this page, you'll find a list of operations the Zulip node supports and links to more resources. Credentials Refer to [Zulip credentials](../../credentials/zulip/) for guidance on setting up authentication. ## Operations - Message - Delete a message - Get a message - Send a private message - Send a message to stream - Update a message - Upload a file - Stream - Create a stream. - Delete a stream. - Get all streams. - Get subscribed streams. - Update a stream. - User - Create a user. - Deactivate a user. - Get a user. - Get all users. - Update a user. ## Templates and examples [Browse Zulip integration templates](https://n8n.io/integrations/zulip/), or [search all templates](https://n8n.io/workflows/) # Alibaba Cloud Model Studio node The Alibaba Cloud Model Studio node lets you call Alibaba Cloud Qwen models (text, vision, and media models) from n8n. Use it to generate completions, analyze or create images, and produce short videos from text or images. Credentials You can find authentication information for this node [here](../../credentials/alibaba/). ## Resources and operations - **Text**: Message a model to create text completions and agent-like responses. - **Image**: Analyze images with vision-language models or generate images from prompts. - **Video**: Generate short videos from text or from one or more images. ### Message a model Create a completion with a Qwen model. **Parameters** - **Model** (type: options, field: `modelId`): The model to use for generation (for example, Qwen3.5 Flash, Qwen3 Max). - **Messages** (type: fixedCollection, field: `messages`): One or more messages forming the conversation. - Message values: - **Content** (type: string, field: `content`): The content of the message. - **Role** (type: options, field: `role`): The role of the message sender (User or Assistant). - **Simplify Output** (type: boolean, field: `simplify`): Return a simplified version of the response instead of the full raw API output. 
**Options** - **Enable Search** (type: boolean, field: `enableSearch`): Enable web search for up-to-date information. - **Max Tokens** (type: number, field: `maxTokens`): Maximum number of tokens to generate. - **Max Tools Iterations** (type: number, field: `maxToolsIterations`): Maximum number of tool-calling iterations before stopping. Set to zero for unlimited. - **Repetition Penalty** (type: number, field: `repetitionPenalty`): Penalty for token repetition. Higher values reduce repetition. - **Seed** (type: number, field: `seed`): Random seed for reproducible outputs. - **Stop Sequences** (type: string, field: `stop`): Comma-separated list of sequences where the API will stop generating. - **System Message** (type: string, field: `system`): System instruction for the model. - **Temperature** (type: number, field: `temperature`): Controls randomness. Lower = more deterministic. - **Top K** (type: number, field: `topK`): Limits sampling pool to top K tokens. - **Top P** (type: number, field: `topP`): Nucleus sampling parameter. ### Analyze image Take images as input and ask vision-language questions about them. **Parameters** - **Model** (type: options, field: `modelId`): Vision-language model to use (for example, Qwen-VL Flash). - **Input Type** (type: options, field: `inputType`): How to provide the image (URL or binary data). - **Image URL** (type: string, field: `imageUrl`): The URL of the image to analyze (required when using URL input). - **Input Data Field Name** (type: string, field: `binaryPropertyName`): Binary field name to read the image from when using binary input. - **Question** (type: string, field: `question`): The question or instruction about the image. - **Simplify Output** (type: boolean, field: `simplify`): Return a simplified version of the response. **Options** - **Temperature** (type: number, field: `temperature`): Controls randomness for the vision model. 
- **Max Tokens** (type: number, field: `maxTokens`): Maximum number of tokens for the vision model output. ### Generate an image Create an image from a text prompt. **Parameters** - **Model** (type: options, field: `modelId`): Image-generation model to use (for example, Z-Image Turbo). - **Prompt** (type: string, field: `prompt`): The text prompt describing the image to generate. - **Download Image** (type: boolean, field: `downloadImage`): When true, download the generated image as binary data; otherwise only the URL is returned. **Options** - **Size** (type: options, field: `size`): The size of the generated image (for example, 1024*1024, 1664*928). - **Prompt Extend** (type: boolean, field: `promptExtend`): Automatically extend and enhance the prompt. ### Generate video from text Generate a short video from a text prompt. **Parameters** - **Model** (type: options, field: `modelId`): Text-to-video model to use (for example, Wan 2.6 Text-to-Video). - **Prompt** (type: string, field: `prompt`): The text prompt to generate the video from. - **Resolution** (type: options, field: `resolution`): Resolution tier (720P or 1080P). - **Duration (Seconds)** (type: number, field: `duration`): Duration of the generated video in seconds (2–15). - **Shot Type** (type: options, field: `shotType`): Single or Multi (multi-shot narrative). - **Download Video** (type: boolean, field: `downloadVideo`): When true, download the generated video as binary data; otherwise only the URL is returned. - **Simplify Output** (type: boolean, field: `simplify`): Return a simplified response. **Options** - **Prompt Extend** (type: boolean, field: `promptExtend`): Automatically extend and enhance the prompt. - **Audio** (type: boolean, field: `audio`): Whether to generate audio for the video. - **Audio Input Type** (type: options, field: `audioInputType`): Must be specified when the **Audio** option is activated. Defines how to provide the audio: via an audio URL or a binary file. 
- **Audio URL** (type: string, field: `audioUrl`): Must be specified when **Audio Input Type** is set to URL. Defines the URL of the audio file to use. - **Audio Data Field Name** (type: string, field: `audioBinaryPropertyName`): Must be specified when **Audio Input Type** is set to **Binary File**. Defines the binary field name for audio input. ### Generate video from image Generate a video from one or more images using Wan models. **Parameters** - **Model** (type: options, field: `modelId`): Image-to-video model to use (for example, Wan 2.6 Image-to-Video Flash). - **Input Type** (type: options, field: `inputType`): Defines how to provide the image: via an image URL or a binary file. - **Image URL** (type: string, field: `imgUrl`): URL of the first-frame image to generate video from. - **Input Data Field Name** (type: string, field: `binaryPropertyName`): Binary field name to read the image from when using binary input. - **Prompt** (type: string, field: `prompt`): Optional text describing desired content and visual characteristics. - **Resolution** (type: options, field: `resolution`): Resolution tier (720P or 1080P). - **Duration (Seconds)** (type: number, field: `duration`): Duration in seconds (2–15). - **Shot Type** (type: options, field: `shotType`): Single or multi-shot narrative. - **Download Video** (type: boolean, field: `downloadVideo`): When true, download the generated video as binary data; otherwise only the URL is returned. - **Simplify Output** (type: boolean, field: `simplify`): Return a simplified response. **Options** - **Prompt Extend** (type: boolean, field: `promptExtend`): Automatically extend and enhance the prompt. - **Audio** (type: boolean, field: `audio`): Whether to generate audio for the video. - **Audio Input Type** (type: options, field: `audioInputType`): Defines how to provide the audio: via an audio URL or a binary file. 
- **Audio URL** (type: string, field: `audioUrl`): URL of the audio file to use, when **Audio Input Type** is set to URL. - **Audio Data Field Name** (type: string, field: `audioBinaryPropertyName`): Binary field name for audio input, when **Audio Input Type** is set to binary data. ## Templates and examples **Transform Product Photos into Studio-Quality Visuals with Nano Banana & Telegram** by SuS [View template details](https://n8n.io/workflows/8843-transform-product-photos-into-studio-quality-visuals-with-nano-banana-and-telegram/) **Generate Google Sheets test script from Pega Agile Studio user stories with AI** by Richard Nijsten [View template details](https://n8n.io/workflows/13093-generate-google-sheets-test-script-from-pega-agile-studio-user-stories-with-ai/) **Create sprint goals from Google Sheets with Pega Agile Studio and Google Gemini** by Richard Nijsten [View template details](https://n8n.io/workflows/13602-create-sprint-goals-from-google-sheets-with-pega-agile-studio-and-google-gemini/) [Browse Alibaba Cloud Model Studio integration templates](https://n8n.io/integrations/alibaba-cloud-model-studio/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Alibaba Cloud Model Studio documentation](https://www.alibabacloud.com/product/qwen) for more information about available models and API behavior. # Anthropic node Use the Anthropic node to automate work in Anthropic and integrate Anthropic with other applications. n8n has built-in support for a wide range of Anthropic features, including analyzing, uploading, getting, and deleting documents, files, and images, and generating, improving, or templatizing prompts. On this page, you'll find a list of operations the Anthropic node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/anthropic/). ## Operations - Document: - Analyze Document: Take in documents and answer questions about them. 
- File: - Upload File: Upload a file to the Anthropic API for later use. - Get File Metadata: Get metadata for a file from the Anthropic API. - List Files: List files from the Anthropic API. - Delete File: Delete a file from the Anthropic API. - Image: - Analyze Image: Take in images and answer questions about them. - Prompt: - Generate Prompt: Generate a prompt for a model. - Improve Prompt: Improve a prompt for a model. - Templatize Prompt: Templatize a prompt for a model. - Text: - Message a Model: Create a completion with an Anthropic model. ## Templates and examples **Notion AI Assistant Generator** by Max Tkacz [View template details](https://n8n.io/workflows/2415-notion-ai-assistant-generator/) **Gmail AI Email Manager** by Max Mitcham [View template details](https://n8n.io/workflows/4722-gmail-ai-email-manager/) **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) [Browse Anthropic integration templates](https://n8n.io/integrations/anthropic/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Anthropic's documentation](https://docs.anthropic.com/en/api/overview) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Gemini node Use the Google Gemini node to automate work in Google Gemini and integrate Google Gemini with other applications. 
n8n has built-in support for a wide range of Google Gemini features, including working with audio, videos, images, documents, and files to analyze, generate, and transcribe. On this page, you'll find a list of operations the Google Gemini node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/googleai/). ## Operations - Audio: - Analyze Audio: Take in audio and answer questions about it. - Transcribe a Recording: Transcribes audio into text. - Document: - Analyze Document: Take in documents and answer questions about them. - File Search: - Create File Search Store: Create a new File Search store for RAG (Retrieval Augmented Generation). - Delete File Search Store: Delete a File Search store. - List File Search Stores: List all File Search stores owned by the user. - Upload to File Search Store: Upload a file to a File Search store for RAG (Retrieval Augmented Generation). - Image: - Analyze Image: Take in images and answer questions about them. - Generate an Image: Creates an image from a text prompt. - Edit Image: Upload one or more images and apply edits based on a prompt. - Media File: - Upload Media File: Upload a file to the Google Gemini API for later use. - Text: - Message a Model: Create a completion with a Google Gemini model. - Video: - Analyze Video: Take in videos and answer questions about them. - Generate a Video: Creates a video from a text prompt. - Download Video: Download a generated video from the Google Gemini API using a URL. 
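Several of these nodes expose the same pattern as the Download Video operation above: the API returns a URL to generated media, and the node optionally fetches that URL and attaches the bytes as binary data. Outside of n8n, that fetch step is a plain HTTP download. A minimal sketch, assuming nothing beyond a reachable URL (the `fetch_bytes` helper name is illustrative; a `data:` URL keeps the example self-contained):

```python
from urllib.request import urlopen

def fetch_bytes(url: str) -> bytes:
    """Download the resource at `url` and return its raw bytes,
    the way a download-as-binary toggle resolves a media URL."""
    with urlopen(url) as response:
        return response.read()

# Stand-in for a generated-media URL returned by an API.
payload = fetch_bytes("data:text/plain,generated-video-bytes")
print(payload)  # b'generated-video-bytes'
```

In a real workflow you would pass the URL from the node's output and store or forward the resulting bytes; leaving the download toggle disabled and handling the URL yourself trades memory use for an extra request later.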
## Templates and examples **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI-Powered Social Media Content Generator & Publisher** by Amjid Ali [View template details](https://n8n.io/workflows/2950-ai-powered-social-media-content-generator-and-publisher/) **Build Your First AI Agent** by Lucas Peyrin [View template details](https://n8n.io/workflows/6270-build-your-first-ai-agent/) [Browse Google Gemini integration templates](https://n8n.io/integrations/google-gemini/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Google Gemini's documentation](https://ai.google.dev/gemini-api/docs) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # MiniMax node The MiniMax node connects n8n workflows to MiniMax AI models. Use it to generate images from text prompts, synthesize speech, create videos from text or images, and send messages to MiniMax language models. Credentials You can find authentication information for this node [here](../../credentials/minimax/). ## Resources and operations - **Audio**: Convert text to speech using MiniMax speech synthesis models. - **Image**: Generate images from a text prompt. - **Text**: Send messages to a MiniMax language model and receive responses. - **Video**: Generate videos from a text prompt or a first-frame image. 
### Text to speech Convert text to speech using a MiniMax speech synthesis model. **Parameters** - **Model** (type: options, field: `modelId`): The speech synthesis model to use. Default: `speech-2.8-hd` - **Text** (type: string, field: `text`): The text to convert to speech. Maximum 10,000 characters. Required. - **Voice ID** (type: string, field: `voiceId`): The voice to use for synthesis. Browse available voices in the [MiniMax documentation](https://platform.minimax.io/docs/faq/system-voice-id). Default: `English_Graceful_Lady`. Required. - **Download Audio** (type: boolean, field: `downloadAudio`): Whether to download the generated audio as binary data. When disabled, the node returns only the audio URL. Default: `true` **Options** - **Audio Format** (type: options, field: `audioFormat`): Output audio format. Options: MP3, PCM, FLAC, WAV. The node supports WAV in non-streaming mode only. Default: `mp3` - **Emotion** (type: options, field: `emotion`): Emotion to apply to the synthesized speech. The model automatically selects the most natural emotion. Default: `calm` - **Language Boost** (type: options, field: `languageBoost`): Enhance recognition accuracy for a specific language. Default: `auto` - **Pitch** (type: number, field: `pitch`): Pitch adjustment for the speech, from –12 to 12. `0` keeps the original pitch. Default: `0` - **Speed** (type: number, field: `speed`): Speech speed from 0.5 to 2. Higher values produce faster speech. Default: `1` - **Volume** (type: number, field: `volume`): Speech volume from 0.1 to 10. Higher values produce louder speech. Default: `1` ### Generate an image Create an image from a text prompt using a MiniMax image generation model. **Parameters** - **Model** (type: options, field: `modelId`): The image generation model to use. Default: `image-01` - **Prompt** (type: string, field: `prompt`): Text description of the image to generate. Maximum 1500 characters. Required. 
- **Aspect Ratio** (type: options, field: `aspectRatio`): The aspect ratio of the generated image. Options include 1:1, 16:9, 9:16, 4:3, 3:4, 3:2, 2:3, and 21:9. Default: `1:1` - **Number of Images** (type: number, field: `numberOfImages`): Number of images to generate per request, from one to nine. Default: `1` - **Download Image** (type: boolean, field: `downloadImage`): Whether to download the generated image as binary data. When disabled, the node returns only the image URL. Default: `true` **Options** - **Prompt Optimizer** (type: boolean, field: `promptOptimizer`): Whether to automatically optimize the prompt for better results. Default: `false` - **Seed** (type: number, field: `seed`): Random seed for reproducible outputs. Using the same seed with the same parameters produces the same image. Default: `0` ### Message a model Send one or more messages to a MiniMax language model and receive its response. **Parameters** - **Model** (type: options, field: `modelId`): The language model to use. Default: `MiniMax-M2.7` - **Messages** (type: fixedCollection, field: `messages`): One or more messages forming the conversation. Each message has: - **Prompt** (type: string, field: `content`): The text content of the message. - **Role** (type: options, field: `role`): The role of the message sender. Use `user` to send a message and receive a response, or `assistant` to set the model's tone or persona. - **Simplify Output** (type: boolean, field: `simplify`): When enabled, the node returns a simplified response rather than the full raw API output. Default: `true` **Options** - **Hide Thinking** (type: boolean, field: `hideThinking`): Strip chain-of-thought reasoning from the response, returning only the final answer. Default: `true` - **Maximum Number of Tokens** (type: number, field: `maxTokens`): Maximum number of tokens to generate. 
Default: `1024` - **Max Tool Calls Iterations** (type: number, field: `maxToolsIterations`): Maximum number of tool-iteration cycles the model runs before stopping. One iteration can include multiple tool calls. Set to `0` for no limit. Default: `15` - **Output Randomness (Temperature)** (type: number, field: `temperature`): Controls the randomness of the output. Lower values make the output more deterministic and repetitive. Range: 0–1. Default: `0.7` - **Output Randomness (Top P)** (type: number, field: `topP`): Maximum cumulative probability of tokens to consider when sampling. Range: 0–1. Default: `0.95` - **System Message** (type: string, field: `system`): A system-level instruction that guides the model's behavior and tone. ### Generate video from text Generate a video from a text prompt using a MiniMax video generation model. **Parameters** - **Model** (type: options, field: `modelId`): The video generation model to use. Default: `MiniMax-Hailuo-2.3` - **Prompt** (type: string, field: `prompt`): Text description of the video. Maximum 2000 characters. You can control camera movements using `[command]` syntax, for example `[Push in]` or `[Pan left]`. Required. - **Duration (Seconds)** (type: options, field: `duration`): Duration of the generated video. Options: 6 seconds or 10 seconds. Default: `6` - **Resolution** (type: options, field: `resolution`): Resolution of the generated video. Available options depend on the model. Options: 720P, 768P, 1080P. Default: `768P` - **Download Video** (type: boolean, field: `downloadVideo`): Whether to download the generated video as binary data. When disabled, the node returns only the video URL. Default: `true` **Options** - **Prompt Optimizer** (type: boolean, field: `promptOptimizer`): Whether to automatically optimize the prompt for better results. Default: `true` ### Generate video from image Generate a video using an image as the first frame. 
**Parameters** - **Model** (type: options, field: `modelId`): The video generation model to use. Default: `MiniMax-Hailuo-2.3` - **Image Input Type** (type: options, field: `imageInputType`): How to provide the first frame image. Options: URL or Binary File. Default: `url` - **Image URL** (type: string, field: `imageUrl`): Public URL of the first frame image. Supports JPG, JPEG, PNG, and WebP files up to 20MB. Displayed when **Image Input Type** is `URL`. Required. - **Input Data Field Name** (type: string, field: `binaryPropertyName`): The name of the input field containing the binary image data. Displayed when **Image Input Type** is `Binary File`. Default: `data`. Required. - **Prompt** (type: string, field: `prompt`): Optional text description of the video. Maximum 2000 characters. You can control camera movements using `[command]` syntax, for example `[Zoom in]`. - **Duration (Seconds)** (type: options, field: `duration`): Duration of the generated video. Options: 6 seconds or 10 seconds. Default: `6` - **Resolution** (type: options, field: `resolution`): Resolution of the generated video. Available options depend on the model. Options: 512P, 720P, 768P, 1080P. Default: `768P` - **Download Video** (type: boolean, field: `downloadVideo`): Whether to download the generated video as binary data. When disabled, the node returns only the video URL. Default: `true` **Options** - **Prompt Optimizer** (type: boolean, field: `promptOptimizer`): Whether to automatically optimize the prompt. Default: `true` - **Last Frame Image Input Type** (type: options, field: `lastFrameInputType`): Provide a last frame image to generate a first-and-last-frame video. Only MiniMax-Hailuo-2.3 and MiniMax-Hailuo-02 support this option. Default: `none` - **Last Frame Image URL** (type: string, field: `lastFrameImageUrl`): Public URL of the last frame image. Displayed when **Last Frame Image Input Type** is `URL`. 
- **Last Frame Data Field Name** (type: string, field: `lastFrameBinaryPropertyName`): The binary field name containing the last frame image. Displayed when **Last Frame Image Input Type** is `Binary File`. Default: `lastFrame` - **Subject Reference Input Type** (type: options, field: `subjectReferenceInputType`): Provide a face photo to maintain facial consistency in the generated video. Only MiniMax-Hailuo-2.3 supports this option. Default: `none` - **Subject Reference Image URL** (type: string, field: `subjectReferenceImageUrl`): Public URL of the reference face image. Displayed when **Subject Reference Input Type** is `URL`. - **Subject Reference Data Field Name** (type: string, field: `subjectReferenceBinaryPropertyName`): The binary field name containing the reference face image. Displayed when **Subject Reference Input Type** is `Binary File`. Default: `subjectReference` ## Templates and examples **Transform Images into AI Videos with MiniMax Hailuo 02. Upload to YouTube/TikTok** by Davide Boizza [View template details](https://n8n.io/workflows/5633-transform-images-into-ai-videos-with-minimax-hailuo-02-upload-to-youtubetiktok/) **Generate Song Lyrics and Music from Text Prompts using OpenAI and Fal.ai Minimax** by Daniel Nkencho [View template details](https://n8n.io/workflows/10005-generate-song-lyrics-and-music-from-text-prompts-using-openai-and-falai-minimax/) **Ebook to Audiobook converter using MiniMax and FFmpeg** by Jay Emp0 [View template details](https://n8n.io/workflows/9944-ebook-to-audiobook-converter-using-minimax-and-ffmpeg/) [Browse MiniMax integration templates](https://n8n.io/integrations/minimax/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to the [MiniMax documentation](https://platform.minimax.io/docs) for more information about the service. # Moonshot Kimi node The Moonshot Kimi node connects n8n workflows to Moonshot Kimi AI models. 
Use it to send prompts and receive model responses, attach images to messages, or analyze images with an image-analysis model. Credentials You can find authentication information for this node [here](../../credentials/moonshot/). ## Resources and operations - **Analyze image**: Analyze images and answer questions about them. - **Message a model**: Send text-based messages to a Moonshot Kimi model and receive responses (supports attachments, system messages, and advanced options like thinking mode and web search). ### Analyze image Analyze an image and answer questions about it. **Parameters** - **Model** (type: resourceLocator, field: `modelId`): Select the Moonshot Kimi model to use for analysis. - **Text Input** (type: string, field: `text`): A prompt or question to send along with the image. Default: `What's in this image?` - **Input Data Field Name(s)** (type: string, field: `binaryPropertyName`): Name of the binary field(s) that contain the image(s). When you provide multiple fields, separate them with commas. Default: `data` - **Simplify Output** (type: boolean, field: `simplify`): When enabled, the node returns a simplified version of the response rather than the full raw API response. Default: `true` **Options** - **Maximum Number of Tokens** (type: number, field: `maxTokens`): Maximum number of tokens to generate. Fewer tokens produce shorter, less detailed image descriptions. Default: `1024` ### Message a model Send one or more messages to a Moonshot Kimi model and receive its response. Supports role-based messages (user/assistant), attachments, system messages, and advanced generation options. **Parameters** - **Model** (type: resourceLocator, field: `modelId`): Select the Moonshot Kimi model to message. - **Messages** (type: fixedCollection, field: `messages`): One or more messages forming the conversation prompt. - content (type: string): The text content of the message. 
(Display name: Prompt) - role (type: options): Role of the message, for example, `user` or `assistant`, that guides how the model should respond. - **Add Attachments** (type: boolean, field: `addAttachments`): Whether to attach images to the message. Default: `false` - **Attachment Input Data Field Name(s)** (type: string, field: `binaryPropertyName`): Name of the binary field(s) containing images to attach. Separate multiple fields with commas. Default: `data` - **Simplify Output** (type: boolean, field: `simplify`): When you enable it, the node returns a simplified version of the response instead of the raw API output. Default: `true` **Options** - **Frequency Penalty** (type: number, field: `frequencyPenalty`): Positive values penalize tokens already present in the text, reducing repetition. Default: `0` - **Include Merged Response** (type: boolean, field: `includeMergedResponse`): Include a single output string that merges all text parts of the model's response. Default: `false` - **Maximum Number of Tokens** (type: number, field: `maxTokens`): Maximum tokens to generate for the completion. Default: `1024` - **Max Tool Calls Iterations** (type: number, field: `maxToolsIterations`): Maximum number of tool-iteration cycles the LLM will run before stopping. One iteration may include multiple tool calls. Set to `0` for no limit. Default: `15` - **Output Randomness (Temperature)** (type: number, field: `temperature`): Controls randomness of the output. Lower values make output more deterministic. Default: `0.7` - **Output Randomness (Top P)** (type: number, field: `topP`): Maximum cumulative probability of tokens to consider when sampling. Default: `1` - **Presence Penalty** (type: number, field: `presencePenalty`): Positive values penalize tokens based on whether they already appear in the text so far, encouraging new topics. Default: `0` - **Response Format** (type: options, field: `responseFormat`): Format of the returned response, for example, `text`. 
- **System Message** (type: string, field: `system`): A system-level instruction that guides the model's overall behavior and tone. - **Thinking Mode** (type: boolean, field: `thinkingMode`): When you enable it, the model includes reasoning steps in a chain-of-thought style. You can't use it together with **Web Search**. Default: `false` - **Web Search** (type: boolean, field: `webSearch`): When you enable it, the model performs built-in web searches for up-to-date information. You can't use it together with **Thinking Mode**. Default: `false` ## Templates and examples [Browse Moonshot Kimi integration templates](https://n8n.io/integrations/moonshot-kimi/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to the [Moonshot Kimi documentation](https://platform.kimi.ai/docs/overview) for more information about the service. # Perplexity node Use the Perplexity node to automate work in Perplexity and integrate Perplexity with other applications. n8n has built-in support for messaging a model. On this page, you'll find a list of operations the Perplexity node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../credentials/perplexity/). ## Operations - **Message a Model**: Create one or more completions for a given text. ## Templates and examples **Clone Viral TikToks with AI Avatars & Auto-Post to 9 Platforms using Perplexity & Blotato** by Dr. 
Firas [View template details](https://n8n.io/workflows/4110-clone-viral-tiktoks-with-ai-avatars-and-auto-post-to-9-platforms-using-perplexity-and-blotato/) **🔍🛠️Generate SEO-Optimized WordPress Content with AI Powered Perplexity Research** by Joseph LePage [View template details](https://n8n.io/workflows/3291-generate-seo-optimized-wordpress-content-with-ai-powered-perplexity-research/) **AI-Powered Multi-Social Media Post Automation: Google Trends & Perplexity AI** by Gerald Denor [View template details](https://n8n.io/workflows/4352-ai-powered-multi-social-media-post-automation-google-trends-and-perplexity-ai/) [Browse Perplexity integration templates](https://n8n.io/integrations/perplexity/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Perplexity's documentation](https://docs.perplexity.ai/home) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Airtable node Use the Airtable node to automate work in Airtable, and integrate Airtable with other applications. n8n has built-in support for a wide range of Airtable features, including creating, reading, listing, updating and deleting tables. On this page, you'll find a list of operations the Airtable node supports and links to more resources. Credentials Refer to [Airtable credentials](../../credentials/airtable/) for guidance on setting up authentication. 
## Operations - Append the data to a table - Delete data from a table - List data from a table - Read data from a table - Update data in a table ## Templates and examples **Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI** by Jimleuk [View template details](https://n8n.io/workflows/2342-handling-appointment-leads-and-follow-up-with-twilio-calcom-and-ai/) **Website Content Scraper & SEO Keyword Extractor with GPT-5-mini and Airtable** by Abhishek Patoliya [View template details](https://n8n.io/workflows/5657-website-content-scraper-and-seo-keyword-extractor-with-gpt-5-mini-and-airtable/) **AI-Powered Social Media Amplifier** by Mudit Juneja [View template details](https://n8n.io/workflows/2681-ai-powered-social-media-amplifier/) [Browse Airtable integration templates](https://n8n.io/integrations/airtable/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for Airtable. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.airtabletrigger/). Refer to [Airtable's documentation](https://airtable.com/developers/web/api/introduction) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Node reference ### Get the Record ID To fetch data for a particular record, you need the Record ID. There are two ways to get the Record ID. 
### Create a Record ID column in Airtable To create a `Record ID` column in your table, refer to this [article](https://support.airtable.com/docs/finding-airtable-ids). You can then use this Record ID in your Airtable node. ### Use the List operation To get the Record ID of your record, you can use the **List** operation of the Airtable node. This operation will return the Record ID along with the fields. You can then use this Record ID in your Airtable node. ### Filter records when using the List operation To filter records from your Airtable base, use the **Filter By Formula** option. For example, if you want to return all the users that belong to the organization `n8n`, follow the steps mentioned below: 1. Select 'List' from the **Operation** dropdown list. 1. Enter the base ID and the table name in the **Base ID** and **Table** field, respectively. 1. Click on **Add Option** and select 'Filter By Formula' from the dropdown list. 1. Enter the following formula in the **Filter By Formula** field: `{Organization}='n8n'`. Similarly, if you want to return all the users that don't belong to the organization `n8n`, use the following formula: `NOT({Organization}='n8n')`. Refer to the Airtable [documentation](https://support.airtable.com/hc/en-us/articles/203255215-Formula-Field-Reference) to learn more about the formulas. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). # Airtable node common issues Here are some common errors and issues with the [Airtable node](../) and steps to resolve or troubleshoot them. ## Forbidden - perhaps check your credentials This error displays when trying to perform actions not permitted by your current level of access. The full text looks something like this:
```
There was a problem loading the parameter options from server: "Forbidden - perhaps check your credentials?"
```
The error most often displays when the credential you're using doesn't have the scopes it requires on the resources you're attempting to manage. Refer to the [Airtable credentials](../../../credentials/airtable/) and [Airtable scopes documentation](https://airtable.com/developers/web/api/scopes) for more information. ## Service is receiving too many requests from you Airtable has a hard API limit on the number of requests generated using personal access tokens. If you send more than five requests per second per base, you will receive a 429 error, indicating that you have sent too many requests. You will have to wait 30 seconds before resuming requests. The same limit applies across all bases: more than 50 requests per second with a single access token also triggers a 429. You can find out more in [Airtable's rate limits documentation](https://airtable.com/developers/web/api/rate-limits). If you find yourself running into rate limits with the Airtable node, consider implementing one of the suggestions on the [handling rate limits](../../../rate-limits/) page. # Discord node Use the Discord node to automate work in Discord, and integrate Discord with other applications. n8n has built-in support for a wide range of Discord features, including sending messages in a Discord channel and managing channels. On this page, you'll find a list of operations the Discord node supports and links to more resources. Credentials Refer to [Discord credentials](../../credentials/discord/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. 
When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../advanced-ai/human-in-the-loop-tools/). ## Operations - Channel - Create - Delete - Get - Get Many - Update - Message - Delete - Get - Get Many - React with Emoji - Send - Send and Wait for Response - Member - Get Many - Role Add - Role Remove ## Waiting for a response By choosing the **Send and Wait for Response** operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information. ### Response Type You can choose between the following types of waiting and approval actions: - **Approval**: Users can approve or disapprove from within the message. - **Free Text**: Users can submit a response with a form. - **Custom Form**: Users can submit a response with a custom form. You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types: - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. - **Append n8n Attribution**: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off). ### Approval response customization When using the Approval response type, you can choose whether to present only an approval button or both approval *and* disapproval buttons. You can also customize the button labels for the buttons you include. ### Free Text response customization When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label. ### Custom Form response customization When using the Custom Form response type, you build a form using the fields and options you want. 
You can customize each form element with the settings outlined in the [n8n Form trigger's form elements](../../core-nodes/n8n-nodes-base.formtrigger/#form-elements). To add more fields, select the **Add Form Element** button. You'll also be able to customize the message button label, the form title and description, and the response button label. ## Templates and examples **Fully Automated AI Video Generation & Multi-Platform Publishing** by Juan Carlos Cavero Gracia [View template details](https://n8n.io/workflows/3442-fully-automated-ai-video-generation-and-multi-platform-publishing/) **AI-Powered Short-Form Video Generator with OpenAI, Flux, Kling, and ElevenLabs** by Cameron Wills [View template details](https://n8n.io/workflows/3121-ai-powered-short-form-video-generator-with-openai-flux-kling-and-elevenlabs/) **Discord AI-powered bot** by Eduard [View template details](https://n8n.io/workflows/1938-discord-ai-powered-bot/) [Browse Discord integration templates](https://n8n.io/integrations/discord/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Discord's documentation](https://discord.com/developers/docs/intro) for more information about the service. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). 
# Discord node common issues Here are some common errors and issues with the [Discord node](../) and steps to resolve or troubleshoot them. ## Add extra fields to embeds Discord messages can optionally include embeds, a rich preview component that can include a title, description, image, link, and more. The Discord node supports embeds when using the **Send** operation on the **Message** resource. Select **Add Embeds** to set extra fields including Description, Author, Title, URL, and URL Image. To add fields that aren't included by default, set **Input Method** to **Raw JSON**. From here, add a JSON object to the **Value** parameter defining the [field names](https://discord.com/developers/docs/resources/message#embed-object) and values you want to include. For example, to include `footer` and `fields`, neither of which are available using the **Enter Fields** Input Method, you could use a JSON object like this:
```
{
  "author": { "name": "My Name" },
  "url": "https://discord.js.org",
  "fields": [
    {
      "name": "Regular field title",
      "value": "Some value here"
    }
  ],
  "footer": {
    "text": "Some footer text here",
    "icon_url": "https://i.imgur.com/AfFp7pu.png"
  }
}
```
You can learn more about embeds in [Using Webhooks and Embeds | Discord](https://discord.com/safety/using-webhooks-and-embeds). 
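If you'd rather assemble the Raw JSON value upstream, for example in a Code node feeding the Discord node, a small helper keeps the embed structure straight. This is an illustrative sketch, not an official n8n helper; the field names follow Discord's embed object:

```javascript
// Sketch: assemble a Discord embed object like the Raw JSON example above.
// Field names (author, url, fields, footer, icon_url) follow Discord's embed
// object; the helper itself is illustrative.
function buildEmbed({ authorName, url, fields, footerText, footerIconUrl }) {
  return {
    author: { name: authorName },
    url,
    fields, // array of { name, value } objects
    footer: { text: footerText, icon_url: footerIconUrl },
  };
}

const embed = buildEmbed({
  authorName: 'My Name',
  url: 'https://discord.js.org',
  fields: [{ name: 'Regular field title', value: 'Some value here' }],
  footerText: 'Some footer text here',
  footerIconUrl: 'https://i.imgur.com/AfFp7pu.png',
});
console.log(JSON.stringify(embed));
```

You could then paste the logged JSON into the node's **Value** parameter, or reference the field with an expression.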
If you experience issues when working with embeds with the Discord node, you can use the [HTTP Request node](../../../core-nodes/n8n-nodes-base.httprequest/) with your existing Discord credentials to `POST` to the following URL:
```
https://discord.com/api/v10/channels//messages
```
In the body, include your embed information in the message content like this:
```
{
  "content": "Test",
  "embeds": [
    {
      "author": { "name": "My Name" },
      "url": "https://discord.js.org",
      "fields": [
        {
          "name": "Regular field title",
          "value": "Some value here"
        }
      ],
      "footer": {
        "text": "Some footer text here",
        "icon_url": "https://i.imgur.com/AfFp7pu.png"
      }
    }
  ]
}
```
## Mention users and channels To mention users and channels in Discord messages, you need to format your message according to [Discord's message formatting guidelines](https://discord.com/developers/docs/reference#message-formatting). To mention a user, you need to know the Discord user's user ID. Keep in mind that the user ID is different from the user's display name. Similarly, you need a channel ID to link to a specific channel. You can learn how to enable developer mode and copy the user or channel IDs in [Discord's documentation on finding User/Server/Message IDs](https://support.discord.com/hc/en-us/articles/206346498-Where-can-I-find-my-User-Server-Message-ID). Once you have the user or channel ID, you can format your message with the following syntax: - **User**: `<@USER_ID>` - **Channel**: `<#CHANNEL_ID>` - **Role**: `<@&ROLE_ID>` # Gmail node Use the Gmail node to automate work in Gmail, and integrate Gmail with other applications. n8n has built-in support for a wide range of Gmail features, including creating, updating, deleting, and getting drafts, messages, labels, and threads. On this page, you'll find a list of operations the Gmail node supports and links to more resources. Credentials Refer to [Google credentials](../../credentials/google/) for guidance on setting up authentication. 
This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - **Draft** - [**Create**](draft-operations/#create-a-draft) a draft - [**Delete**](draft-operations/#delete-a-draft) a draft - [**Get**](draft-operations/#get-a-draft) a draft - [**Get Many**](draft-operations/#get-many-drafts) drafts - **Label** - [**Create**](label-operations/#create-a-label) a label - [**Delete**](label-operations/#delete-a-label) a label - [**Get**](label-operations/#get-a-label) a label - [**Get Many**](label-operations/#get-many-labels) labels - **Message** - [**Add Label**](message-operations/#add-label-to-a-message) to a message - [**Delete**](message-operations/#delete-a-message) a message - [**Get**](message-operations/#get-a-message) a message - [**Get Many**](message-operations/#get-many-messages) messages - [**Mark as Read**](message-operations/#mark-as-read) - [**Mark as Unread**](message-operations/#mark-as-unread) - [**Remove Label**](message-operations/#remove-label-from-a-message) from a message - [**Reply**](message-operations/#reply-to-a-message) to a message - [**Send**](message-operations/#send-a-message) a message - **Thread** - [**Add Label**](thread-operations/#add-label-to-a-thread) to a thread - [**Delete**](thread-operations/#delete-a-thread) a thread - [**Get**](thread-operations/#get-a-thread) a thread - [**Get Many**](thread-operations/#get-many-threads) threads - [**Remove Label**](thread-operations/#remove-label-from-a-thread) from a thread - [**Reply**](thread-operations/#reply-to-a-message) to a message - [**Trash**](thread-operations/#trash-a-thread) a thread - [**Untrash**](thread-operations/#untrash-a-thread) a thread ## Templates and examples **✨🤖Automate Multi-Platform Social 
Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel** by Mihai Farcas [View template details](https://n8n.io/workflows/2275-automated-web-scraping-email-a-csv-save-to-google-sheets-and-microsoft-excel/) **Suggest meeting slots using AI** by n8n Team [View template details](https://n8n.io/workflows/1953-suggest-meeting-slots-using-ai/) [Browse Gmail integration templates](https://n8n.io/integrations/gmail/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to Google's [Gmail API documentation](https://developers.google.com/gmail/api) for detailed information about the API that this node integrates with. n8n provides a trigger node for Gmail. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.gmailtrigger/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). # Gmail node common issues Here are some common errors and issues with the [Gmail node](../) and steps to resolve or troubleshoot them. 
## Remove the n8n attribution from sent messages If you're using the node to [send a message](../message-operations/#send-a-message) or [reply to a message](../message-operations/#reply-to-a-message), the node appends this statement to the end of the email: > This email was sent automatically with n8n To remove this attribution: 1. In the node's **Options** section, select **Add option**. 1. Select **Append n8n attribution**. 1. Turn the toggle off. Refer to [Send options](../message-operations/#send-options) and [Reply options](../message-operations/#reply-options) for more information. ## Forbidden - perhaps check your credentials This error displays next to certain dropdowns in the node, like the **Label Names or IDs** dropdown. The full text looks something like this: ``` There was a problem loading the parameter options from server: "Forbidden - perhaps check your credentials?" ``` The error most often displays when you're using a Google Service Account as the credential and the credential doesn't have **Impersonate a User** turned on. Refer to [Google Service Account: Finish your n8n credential](../../../credentials/google/service-account/#finish-your-n8n-credential) for more information. ## 401 unauthorized error The full text of the error looks like this: ``` 401 - {"error":"unauthorized_client","error_description":"Client is unauthorized to retrieve access tokens using this method, or client not authorized for any of the scopes requested."} ``` This error occurs when there's an issue with the credential you're using and its scopes or permissions. To resolve: 1. For [OAuth2](../../../credentials/google/oauth-single-service/) credentials, make sure you've enabled the Gmail API in **APIs & Services > Library**. Refer to [Google OAuth2 Single Service - Enable APIs](../../../credentials/google/oauth-single-service/#enable-apis) for more information. 1. For [Service Account](../../../credentials/google/service-account/) credentials: 1. 
[Enable domain-wide delegation](../../../credentials/google/service-account/#enable-domain-wide-delegation). 1. Make sure you add the Gmail API as part of the domain-wide delegation configuration. ## Bad request - please check your parameters This error most often occurs if you enter a Message ID, Thread ID, or Label ID that doesn't exist. Try a **Get** operation with the ID to confirm it exists. # Gmail node Draft Operations Use the Draft operations to create, delete, or get a draft or list drafts in Gmail. Refer to the [Gmail node](../) for more information on the Gmail node itself. ## Create a draft Use this operation to create a new draft. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Draft**. - **Operation**: Select **Create**. - **Subject**: Enter the subject line. - Select the **Email Type**. Choose from **Text** or **HTML**. - **Message**: Enter the email message body. ### Create draft options Use these options to further refine the node's behavior: - **Attachments**: Select **Add Attachment** to add an attachment. Enter the **Attachment Field Name (in Input)** to identify which field from the input node contains the attachment. - For multiple properties, enter a comma-separated list. - **BCC**: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **CC**: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **From Alias Name or ID**: Select an alias to send the draft from. This field populates based on the credential you selected in the parameters. - **Send Replies To**: Enter an email address to set as the reply to address. - **Thread ID**: If you want this draft attached to a thread, enter the ID for that thread. - **To Email**: Enter one or more email addresses for recipients. 
Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. Refer to the [Gmail API Method: users.drafts.create](https://developers.google.com/gmail/api/reference/rest/v1/users.drafts/create) documentation for more information. ## Delete a draft Use this operation to delete a draft. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Draft**. - **Operation**: Select **Delete**. - **Draft ID**: Enter the ID of the draft you wish to delete. Refer to the [Gmail API Method: users.drafts.delete](https://developers.google.com/gmail/api/reference/rest/v1/users.drafts/delete) documentation for more information. ## Get a draft Use this operation to get a single draft. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Draft**. - **Operation**: Select **Get**. - **Draft ID**: Enter the ID of the draft you wish to get information about. ### Get draft options Use these options to further refine the node's behavior: - **Attachment Prefix**: Enter a prefix for the name of the binary property the node should write any attachments to. n8n adds an index starting with `0` to the prefix. For example, if you enter `attachment_` as the prefix, the first attachment saves to `attachment_0`. - **Download Attachments**: Select whether the node should download the draft's attachments (turned on) or not (turned off). Refer to the [Gmail API Method: users.drafts.get](https://developers.google.com/gmail/api/reference/rest/v1/users.drafts/get) documentation for more information. ## Get Many drafts Use this operation to get two or more drafts. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Draft**. - **Operation**: Select **Get Many**. - **Return All**: Choose whether the node returns all drafts (turned on) or only up to a set limit (turned off). 
- **Limit**: Enter the maximum number of drafts to return. Only used if you've turned off **Return All**. ### Get Many drafts options Use these options to further refine the node's behavior: - **Attachment Prefix**: Enter a prefix for the name of the binary property the node should write any attachments to. n8n adds an index starting with `0` to the prefix. For example, if you enter `attachment_` as the prefix, the first attachment saves to `attachment_0`. - **Download Attachments**: Select whether the node should download the draft's attachments (turned on) or not (turned off). - **Include Spam and Trash**: Select whether the node should get drafts in the Spam and Trash folders (turned on) or not (turned off). Refer to the [Gmail API Method: users.drafts.list](https://developers.google.com/gmail/api/reference/rest/v1/users.drafts/list) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # Gmail node Label Operations Use the Label operations to create, delete, or get a label or list labels in Gmail. Refer to the [Gmail node](../) for more information on the Gmail node itself. ## Create a label Use this operation to create a new label. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Label**. - **Operation**: Select **Create**. - **Name**: Enter a display name for the label. ### Create label options Use these options to further refine the node's behavior: - **Label List Visibility**: Sets the visibility of the label in the label list in the Gmail web interface. Choose from: - **Hide**: Don't show the label in the label list. - **Show** (default): Show the label in the label list. - **Show if Unread**: Show the label if there are any unread messages with that label. - **Message List Visibility**: Sets the visibility of messages with this label in the message list in the Gmail web interface. 
Choose whether to **Show** or **Hide** messages with this label. Refer to the [Gmail API Method: users.labels.create](https://developers.google.com/gmail/api/reference/rest/v1/users.labels/create) documentation for more information. ## Delete a label Use this operation to delete an existing label. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Label**. - **Operation**: Select **Delete**. - **Label ID**: Enter the ID of the label you want to delete. Refer to the [Gmail API Method: users.labels.delete](https://developers.google.com/gmail/api/reference/rest/v1/users.labels/delete) documentation for more information. ## Get a label Use this operation to get an existing label. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Label**. - **Operation**: Select **Get**. - **Label ID**: Enter the ID of the label you want to get. Refer to the [Gmail API Method: users.labels.get](https://developers.google.com/gmail/api/reference/rest/v1/users.labels/get) documentation for more information. ## Get Many labels Use this operation to get two or more labels. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Label**. - **Operation**: Select **Get Many**. - **Return All**: Choose whether the node returns all labels (turned on) or only up to a set limit (turned off). - **Limit**: Enter the maximum number of labels to return. Only used if you've turned off **Return All**. Refer to the [Gmail API Method: users.labels.list](https://developers.google.com/gmail/api/reference/rest/v1/users.labels/list) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). 
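Under the hood, the Create label operation posts a JSON body to `users.labels.create`. As a rough sketch of how the node's visibility options might map onto that body (the field names and enum values come from the Gmail API reference; the helper itself is illustrative, not n8n's actual code):

```python
# Map the node's UI choices onto the Gmail API's enum values
# (labelListVisibility and messageListVisibility per the API reference).
LABEL_LIST_VISIBILITY = {
    "Show": "labelShow",
    "Hide": "labelHide",
    "Show if Unread": "labelShowIfUnread",
}

def build_label_body(name, label_list_visibility="Show", message_list_visibility="Show"):
    """Build a users.labels.create JSON body from the node's options."""
    return {
        "name": name,
        "labelListVisibility": LABEL_LIST_VISIBILITY[label_list_visibility],
        "messageListVisibility": message_list_visibility.lower(),  # "show" or "hide"
    }

body = build_label_body("Invoices", label_list_visibility="Show if Unread")
print(body["labelListVisibility"])  # labelShowIfUnread
```

The node sends the same structure for you; the sketch only shows why the three **Label List Visibility** choices exist.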
# Gmail node Message Operations Use the Message operations to send, reply to, delete, mark as read or unread, add a label to, remove a label from, or get one or more messages in Gmail. Refer to the [Gmail node](../) for more information on the Gmail node itself. Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../../advanced-ai/human-in-the-loop-tools/). ## Add Label to a message Use this operation to add one or more labels to a message. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Add Label**. - **Message ID**: Enter the ID of the message you want to add the label to. - **Label Names or IDs**: Select the Label names you want to add or enter an expression to specify IDs. The dropdown populates based on the **Credential** you selected. Refer to the [Gmail API Method: users.messages.modify](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/modify) documentation for more information. ## Delete a message Use this operation to immediately and permanently delete a message. Permanent deletion This operation can't be undone. For recoverable deletions, use the [Thread Trash operation](../thread-operations/#trash-a-thread) instead. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Delete**. - **Message ID**: Enter the ID of the message you want to delete. Refer to the [Gmail API Method: users.messages.delete](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/delete) documentation for more information. 
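The Add Label operation, along with the Remove Label and Mark as Read/Unread operations later on this page, all wrap the same `users.messages.modify` call. A minimal sketch of the request body that call accepts (field names from the Gmail API reference; the helper is illustrative, not n8n's implementation):

```python
# users.messages.modify takes two optional arrays of label IDs.
def build_modify_body(add_label_ids=None, remove_label_ids=None):
    """Build a users.messages.modify JSON body, omitting empty fields."""
    body = {}
    if add_label_ids:
        body["addLabelIds"] = list(add_label_ids)
    if remove_label_ids:
        body["removeLabelIds"] = list(remove_label_ids)
    return body

# Mark as Read is the same call with the UNREAD system label removed:
print(build_modify_body(remove_label_ids=["UNREAD"]))
```

This is why the read/unread and label operations all point at the same API method in their reference links.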
## Get a message Use this operation to get a single message. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Get**. - **Message ID**: Enter the ID of the message you wish to retrieve. - **Simplify**: Choose whether to return a simplified version of the response (turned on) or the raw data (turned off). Default is on. - This is the same as setting the `format` for the API call to `metadata`, which returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject. Refer to the [Gmail API Method: users.messages.get](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/get) documentation for more information. ## Get Many messages Use this operation to get two or more messages. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Get Many**. - **Return All**: Choose whether the node returns all messages (turned on) or only up to a set limit (turned off). - **Limit**: Enter the maximum number of messages to return. Only used if you've turned off **Return All**. - **Simplify**: Choose whether to return a simplified version of the response (turned on) or the raw data (turned off). Default is on. - This is the same as setting the `format` for the API call to `metadata`, which returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject. ### Get Many messages filters Use these filters to further refine the node's behavior: - **Include Spam and Trash**: Select whether the node should get messages in the Spam and Trash folders (turned on) or not (turned off). - **Label Names or IDs**: Only return messages with the selected labels added to them. Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the **Credential** you selected. 
- **Search**: Enter Gmail search refine filters, like `from:`, to filter the messages returned. Refer to [Refine searches in Gmail](https://support.google.com/mail/answer/7190?hl=en) for more information. - **Read Status**: Choose whether to receive **Unread and read emails**, **Unread emails only** (default), or **Read emails only**. - **Received After**: Return only those emails received after the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) for more information on formatting the string. - **Received Before**: Return only those emails received before the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) for more information on formatting the string. - **Sender**: Enter an email or a part of a sender name to return messages from only that sender. Refer to the [Gmail API Method: users.messages.list](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/list) documentation for more information. ## Mark as Read Use this operation to mark a message as read. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Mark as Read**. - **Message ID**: Enter the ID of the message you wish to mark as read. Refer to the [Gmail API Method: users.messages.modify](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/modify) documentation for more information. ## Mark as Unread Use this operation to mark a message as unread. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Mark as Unread**. 
- **Message ID**: Enter the ID of the message you wish to mark as unread. Refer to the [Gmail API Method: users.messages.modify](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/modify) documentation for more information. ## Remove Label from a message Use this operation to remove one or more labels from a message. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Remove Label**. - **Message ID**: Enter the ID of the message you want to remove the label from. - **Label Names or IDs**: Select the Label names you want to remove or enter an expression to specify IDs. The dropdown populates based on the **Credential** you selected. Refer to the [Gmail API Method: users.messages.modify](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/modify) documentation for more information. ## Reply to a message Use this operation to send a message as a reply to an existing message. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Reply**. - **Message ID**: Enter the ID of the message you want to reply to. - Select the **Email Type**. Choose from **Text** or **HTML**. - **Message**: Enter the email message body. ### Reply options Use these options to further refine the node's behavior: - **Append n8n attribution**: By default, the node appends the statement `This email was sent automatically with n8n` to the end of the email. To remove this statement, turn this option off. - **Attachments**: Select **Add Attachment** to add an attachment. Enter the **Attachment Field Name (in Input)** to identify which field from the input node contains the attachment. - For multiple properties, enter a comma-separated list. - **BCC**: Enter one or more email addresses for blind copy recipients. 
Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **CC**: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **Sender Name**: Enter the name you want displayed in your recipients' email as the sender. - **Reply to Sender Only**: Choose whether to reply all (turned off) or reply to the sender only (turned on). Refer to the [Gmail API Method: users.messages.send](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/send) documentation for more information. ## Send a message Use this operation to send a message. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Send**. - **To**: Enter the email address you want the email sent to. - **Subject**: Enter the subject line. - Select the **Email Type**. Choose from **Text** or **HTML**. - **Message**: Enter the email message body. ### Send options Use these options to further refine the node's behavior: - **Append n8n attribution**: By default, the node appends the statement `This email was sent automatically with n8n` to the end of the email. To remove this statement, turn this option off. - **Attachments**: Select **Add Attachment** to add an attachment. Enter the **Attachment Field Name (in Input)** to identify which field from the input node contains the attachment. - For multiple properties, enter a comma-separated list. - **BCC**: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **CC**: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **Sender Name**: Enter the name you want displayed in your recipients' email as the sender. 
- **Send Replies To**: Enter an email address to set as the reply to address. - **Reply to Sender Only**: Choose whether to reply all (turned off) or reply to the sender only (turned on). Refer to the [Gmail API Method: users.messages.send](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/send) documentation for more information. ## Send a message and wait for approval Use this operation to send a message and wait for approval from the recipient before continuing the workflow execution. Use Wait for complex approvals The **Send and Wait for Approval** operation is well-suited for simple approval processes. For more complex approvals, consider using the [Wait node](../../../core-nodes/n8n-nodes-base.wait/). Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Message**. - **Operation**: Select **Send and Wait for Approval**. - **To**: Enter the email address you want the email sent to. - **Subject**: Enter the subject line. - **Message**: Enter the email message body. ### Send and wait for approval options Use these options to further refine the node's behavior: - **Type of Approval**: Choose **Approve Only** (default) to include only an approval button or **Approve and Disapprove** to also include a disapproval option. - **Approve Button Label**: The label to use for the approval button (**Approve** by default). - **Approve Button Style**: Whether to style the approval button as a **Primary** (default) or **Secondary** button. - **Disapprove Button Label**: The label to use for the disapproval button (**Decline** by default). Only visible when you set **Type of Approval** to **Approve and Disapprove**. - **Disapprove Button Style**: Whether to style the disapproval button as a **Primary** or **Secondary** (default) button. Only visible when you set **Type of Approval** to **Approve and Disapprove**. 
Refer to the [Gmail API Method: users.messages.send](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/send) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # Gmail node Thread Operations Use the Thread operations to delete, reply to, trash, untrash, add/remove labels, get one, or list threads. Refer to the [Gmail node](../) for more information on the Gmail node itself. ## Add Label to a thread Use this operation to add one or more labels to a thread. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Add Label**. - **Thread ID**: Enter the ID of the thread you want to add the label to. - **Label Names or IDs**: Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the **Credential** you selected. Refer to the [Gmail API Method: users.threads.modify](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/modify) documentation for more information. ## Delete a thread Use this operation to immediately and permanently delete a thread and all its messages. Permanent deletion This operation can't be undone. For recoverable deletions, use the [Trash operation](#trash-a-thread) instead. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Delete**. - **Thread ID**: Enter the ID of the thread you want to delete. Refer to the [Gmail API Method: users.threads.delete](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/delete) documentation for more information. ## Get a thread Use this operation to get a single thread. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Get**. 
- **Thread ID**: Enter the ID of the thread you wish to retrieve. - **Simplify**: Choose whether to return a simplified version of the response (turned on) or the raw data (turned off). Default is on. - This is the same as setting the `format` for the API call to `metadata`, which returns email message IDs, labels, and email headers, including: From, To, CC, BCC, and Subject. ### Get thread options Use these options to further refine the node's behavior: - **Return Only Messages**: Choose whether to return only thread messages (turned on). Refer to the [Gmail API Method: users.threads.get](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/get) documentation for more information. ## Get Many threads Use this operation to get two or more threads. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Get Many**. - **Return All**: Choose whether the node returns all threads (turned on) or only up to a set limit (turned off). - **Limit**: Enter the maximum number of threads to return. Only used if you've turned off **Return All**. ### Get Many threads filters Use these filters to further refine the node's behavior: - **Include Spam and Trash**: Select whether the node should get threads in the Spam and Trash folders (turned on) or not (turned off). - **Label Names or IDs**: Only return threads with the selected labels added to them. Select the Label names you want to apply or enter an expression to specify IDs. The dropdown populates based on the **Credential** you selected. - **Search**: Enter Gmail search refine filters, like `from:`, to filter the threads returned. Refer to [Refine searches in Gmail](https://support.google.com/mail/answer/7190?hl=en) for more information. - **Read Status**: Choose whether to receive **Unread and read emails**, **Unread emails only** (default), or **Read emails only**. 
- **Received After**: Return only those emails received after the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) for more information on formatting the string. - **Received Before**: Return only those emails received before the specified date and time. Use the date picker to select the day and time or enter an expression to set a date as a string in ISO format or a timestamp in milliseconds. Refer to [ISO 8601](https://en.wikipedia.org/wiki/ISO_8601) for more information on formatting the string. Refer to the [Gmail API Method: users.threads.list](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/list) documentation for more information. ## Remove label from a thread Use this operation to remove a label from a thread. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Remove Label**. - **Thread ID**: Enter the ID of the thread you want to remove the label from. - **Label Names or IDs**: Select the Label names you want to remove or enter an expression to specify their IDs. The dropdown populates based on the **Credential** you selected. Refer to the [Gmail API Method: users.threads.modify](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/modify) documentation for more information. ## Reply to a message Use this operation to reply to a message. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Reply**. - **Thread ID**: Enter the ID of the thread you want to reply to. - **Message Snippet or ID**: Select the Message you want to reply to or enter an expression to specify its ID. The dropdown populates based on the **Credential** you selected. 
- Select the **Email Type**. Choose from **Text** or **HTML**. - **Message**: Enter the email message body. ### Reply options Use these options to further refine the node's behavior: - **Attachments**: Select **Add Attachment** to add an attachment. Enter the **Attachment Field Name (in Input)** to identify which field from the input node contains the attachment. - For multiple properties, enter a comma-separated list. - **BCC**: Enter one or more email addresses for blind copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **CC**: Enter one or more email addresses for carbon copy recipients. Separate multiple email addresses with a comma, for example `jay@gatsby.com, jon@smith.com`. - **Sender Name**: Enter the name you want displayed in your recipients' email as the sender. - **Reply to Sender Only**: Choose whether to reply all (turned off) or reply to the sender only (turned on). Refer to the [Gmail API Method: users.messages.send](https://developers.google.com/gmail/api/reference/rest/v1/users.messages/send) documentation for more information. ## Trash a thread Use this operation to move a thread and all its messages to the trash. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Trash**. - **Thread ID**: Enter the ID of the thread you want to move to the trash. Refer to the [Gmail API Method: users.threads.trash](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/trash) documentation for more information. ## Untrash a thread Use this operation to recover a thread and all its messages from the trash. Enter these parameters: - Select the **Credential to connect with** or create a new one. - **Resource**: Select **Thread**. - **Operation**: Select **Untrash**. - **Thread ID**: Enter the ID of the thread you want to recover from the trash. 
Refer to the [Gmail API Method: users.threads.untrash](https://developers.google.com/gmail/api/reference/rest/v1/users.threads/untrash) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # Google Calendar node Use the Google Calendar node to automate work in Google Calendar, and integrate Google Calendar with other applications. n8n has built-in support for a wide range of Google Calendar features, including adding, retrieving, deleting and updating calendar events. On this page, you'll find a list of operations the Google Calendar node supports and links to more resources. Credentials Refer to [Google Calendar credentials](../../credentials/google/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
## Operations - **Calendar** - [**Availability**](calendar-operations/#availability): Check if a time-slot is available in a calendar - **Event** - [**Create**](event-operations/#create): Add an event to a calendar - [**Delete**](event-operations/#delete): Delete an event - [**Get**](event-operations/#get): Retrieve an event - [**Get Many**](event-operations/#get-many): Retrieve all events from a calendar - [**Update**](event-operations/#update): Update an event ## Templates and examples **AI Agent : Google calendar assistant using OpenAI** by Dataki [View template details](https://n8n.io/workflows/2703-ai-agent-google-calendar-assistant-using-openai/) **Build an MCP Server with Google Calendar and Custom Functions** by Solomon [View template details](https://n8n.io/workflows/3514-build-an-mcp-server-with-google-calendar-and-custom-functions/) **Actioning Your Meeting Next Steps using Transcripts and AI** by Jimleuk [View template details](https://n8n.io/workflows/2328-actioning-your-meeting-next-steps-using-transcripts-and-ai/) [Browse Google Calendar integration templates](https://n8n.io/integrations/google-calendar/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for Google Calendar. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.googlecalendartrigger/). Refer to [Google Calendar's documentation](https://developers.google.com/calendar/api/v3/reference) for more information about the service. View [example workflows and related content](https://n8n.io/integrations/google-calendar/) on n8n's website. # Google Calendar Calendar operations Use this operation to check availability in a calendar in Google Calendar. Refer to [Google Calendar](../) for more information on the Google Calendar node itself. ## Availability Use this operation to check if a time-slot is available in a calendar. 
Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Calendar credentials](../../../credentials/google/). - **Resource**: Select **Calendar**. - **Operation**: Select **Availability**. - **Calendar**: Choose a calendar you want to check against. Select **From list** to choose the title from the dropdown list or **By ID** to enter a calendar ID. - **Start Time**: The start time for the time-slot you want to check. By default, uses an expression evaluating to the current time (`{{ $now }}`). - **End Time**: The end time for the time-slot you want to check. By default, uses an expression evaluating to an hour from now (`{{ $now.plus(1, 'hour') }}`). ### Options - **Output Format**: Select the format for the availability information: - **Availability**: Returns whether any existing events overlap with the given time slot. - **Booked Slots**: Returns the booked slots. - **RAW**: Returns the RAW data from the API. - **Timezone**: The timezone used in the response. By default, uses the n8n timezone. Refer to the [Freebusy: query | Google Calendar](https://developers.google.com/calendar/api/v3/reference/freebusy/query) API documentation for more information. # Google Calendar Event operations Use these operations to create, delete, get, and update events in Google Calendar. Refer to [Google Calendar](../) for more information on the Google Calendar node itself. ## Create Use this operation to add an event to a Google Calendar. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Calendar credentials](../../../credentials/google/). - **Resource**: Select **Event**. - **Operation**: Select **Create**. - **Calendar**: Choose a calendar you want to add an event to. Select **From list** to choose the title from the dropdown list or **By ID** to enter a calendar ID. - **Start Time**: The start time for the event.
By default, uses an expression evaluating to the current time (`{{ $now }}`). - **End Time**: The end time for the event. By default, this uses an expression evaluating to an hour from now (`{{ $now.plus(1, 'hour') }}`). - **Use Default Reminders**: Whether to enable default reminders for the event according to the calendar configuration. ### Options - **All Day**: Whether the event is all day or not. - **Attendees**: Attendees to invite to the event. - **Color Name or ID**: The color of the event. Choose from the list or specify the ID using an expression. - **Conference Data**: Creates a conference link (Hangouts, Meet, etc.) and attaches it to the event. - **Description**: A description for the event. - **Guests Can Invite Others**: Whether attendees other than the organizer can invite others to the event. - **Guests Can Modify**: Whether attendees other than the organizer can modify the event. - **Guests Can See Other Guests**: Whether attendees other than the organizer can see who the event's attendees are. - **ID**: Opaque identifier of the event. - **Location**: Geographic location of the event as free-form text. - **Max Attendees**: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only the participant is returned. - **Repeat Frequency**: The repetition interval for recurring events. - **Repeat How Many Times?**: The number of instances to create for recurring events. - **Repeat Until**: The date at which recurring events should stop. - **RRULE**: Recurrence rule. When set, ignores the Repeat Frequency, Repeat How Many Times, and Repeat Until parameters. - **Send Updates**: Whether to send notifications about the creation of the new event. - **Show Me As**: Whether the event blocks time on the calendar. - **Summary**: The title of the event.
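The parameters above correspond to fields on the Google Calendar API's event resource (**Summary** → `summary`, **Start Time**/**End Time** → `start`/`end`, **RRULE** → `recurrence`, and so on). As a rough illustration of the JSON body that ends up in an Events: insert request — the field names follow the public API, but the helper function itself is hypothetical, not part of n8n:

```python
# Hypothetical sketch of an Events: insert request body. Field names follow
# the public Google Calendar API; build_event itself is illustrative only.
def build_event(summary, start, end, attendees=(), rrule=None, description=None):
    event = {
        "summary": summary,
        "start": {"dateTime": start},  # RFC 3339, e.g. "2024-05-01T09:00:00+02:00"
        "end": {"dateTime": end},
        "attendees": [{"email": a} for a in attendees],
    }
    if description:
        event["description"] = description
    if rrule:
        # As in the node, an explicit RRULE takes the place of the simpler
        # Repeat Frequency / Repeat Until style parameters
        event["recurrence"] = [rrule]
    return event

event = build_event(
    "Team sync",
    "2024-05-01T09:00:00+02:00",
    "2024-05-01T10:00:00+02:00",
    attendees=["alice@example.com"],
    rrule="RRULE:FREQ=WEEKLY;COUNT=10",
)
```

The node assembles an equivalent payload from the parameter values you enter in the UI.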
Refer to the [Events: insert | Google Calendar](https://developers.google.com/calendar/api/v3/reference/events/insert) API documentation for more information. ## Delete Use this operation to delete an event from a Google Calendar. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Calendar credentials](../../../credentials/google/). - **Resource**: Select **Event**. - **Operation**: Select **Delete**. - **Calendar**: Choose a calendar you want to delete an event from. Select **From list** to choose the title from the dropdown list or **By ID** to enter a calendar ID. - **Event ID**: The ID of the event to delete. ### Options - **Send Updates**: Whether to send notifications about the deletion of the event. Refer to the [Events: delete | Google Calendar](https://developers.google.com/calendar/api/v3/reference/events/delete) API documentation for more information. ## Get Use this operation to retrieve an event from a Google Calendar. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Calendar credentials](../../../credentials/google/). - **Resource**: Select **Event**. - **Operation**: Select **Get**. - **Calendar**: Choose a calendar you want to get an event from. Select **From list** to choose the title from the dropdown list or **By ID** to enter a calendar ID. - **Event ID**: The ID of the event to get. ### Options - **Max Attendees**: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only the participant is returned. - **Return Next Instance of Recurrent Event**: Whether to return the next instance of a recurring event instead of the event itself. - **Timezone**: The timezone used in the response. By default, uses the n8n timezone. Refer to the [Events: get | Google Calendar](https://developers.google.com/calendar/api/v3/reference/events/get) API documentation for more information.
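If you ever call the Events: get endpoint directly (for example from an HTTP Request node), note that the calendar ID in the path is often an email address and must be URL-encoded. A small sketch — the base URL is the public Calendar v3 endpoint, while the helper function is illustrative, not n8n code:

```python
from urllib.parse import quote

# Illustrative: build an Events: get URL. Calendar IDs are frequently email
# addresses, so characters like "@" must be percent-encoded in the path.
def event_get_url(calendar_id, event_id):
    base = "https://www.googleapis.com/calendar/v3"
    return f"{base}/calendars/{quote(calendar_id, safe='')}/events/{quote(event_id, safe='')}"

url = event_get_url("team@example.com", "abc123")
# "team@example.com" becomes "team%40example.com" in the path
```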
## Get Many Use this operation to retrieve more than one event from a Google Calendar. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Calendar credentials](../../../credentials/google/). - **Resource**: Select **Event**. - **Operation**: Select **Get Many**. - **Calendar**: Choose a calendar you want to get events from. Select **From list** to choose the title from the dropdown list or **By ID** to enter a calendar ID. - **Return All**: Whether to return all results or only up to a given limit. - **Limit**: (When "Return All" isn't selected) The maximum number of results to return. - **After**: Retrieve events that occur after this time. At least part of the event must be after this time. By default, this uses an expression evaluating to the current time (`{{ $now }}`). Switch the field to "fixed" to select a date from a date widget. - **Before**: Retrieve events that occur before this time. At least part of the event must be before this time. By default, this uses an expression evaluating to the current time plus a week (`{{ $now.plus({ week: 1 }) }}`). Switch the field to "fixed" to select a date from a date widget. ### Options - **Fields**: Specify the fields to return. By default, returns a set of commonly used fields predefined by Google. Use "\*" to return all fields. You can find out more in [Google Calendar's documentation on working with partial resources](https://developers.google.com/calendar/api/guides/performance#partial). - **iCalUID**: Specifies an event ID (in the iCalendar format) to include in the response. - **Max Attendees**: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only the participant is returned. - **Order By**: The order to use for the events in the response. - **Query**: Free text search terms to find events that match. This searches all fields except for extended properties.
- **Recurring Event Handling**: What to do for recurring events: - **All Occurrences**: Return all instances of the recurring event for the specified time range. - **First Occurrence**: Return the first event of a recurring event within the specified time range. - **Next Occurrence**: Return the next instance of a recurring event within the specified time range. - **Show Deleted**: Whether to include deleted events (with status equal to "cancelled") in the results. - **Show Hidden Invitations**: Whether to include hidden invitations in the results. - **Timezone**: The timezone used in the response. By default, uses the n8n timezone. - **Updated Min**: The lower bound for an event's last modification time (as an [RFC 3339 timestamp](https://datatracker.ietf.org/doc/html/rfc3339)). Refer to the [Events: list | Google Calendar](https://developers.google.com/calendar/api/v3/reference/events/list) API documentation for more information. ## Update Use this operation to update an event in a Google Calendar. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Calendar credentials](../../../credentials/google/). - **Resource**: Select **Event**. - **Operation**: Select **Update**. - **Calendar**: Choose the calendar containing the event you want to update. Select **From list** to choose the title from the dropdown list or **By ID** to enter a calendar ID. - **Event ID**: The ID of the event to update. - **Modify**: For recurring events, choose whether to update the recurring event or a specific instance of the recurring event. - **Use Default Reminders**: Whether to enable default reminders for the event according to the calendar configuration. - **Update Fields**: The fields of the event to update: - **All Day**: Whether the event is all day or not. - **Attendees**: Attendees to invite to the event. You can choose to either add attendees or replace the existing attendee list. - **Color Name or ID**: The color of the event.
Choose from the list or specify the ID using an expression. - **Description**: A description for the event. - **End**: The end time of the event. - **Guests Can Invite Others**: Whether attendees other than the organizer can invite others to the event. - **Guests Can Modify**: Whether attendees other than the organizer can make changes to the event. - **Guests Can See Other Guests**: Whether attendees other than the organizer can see who the event's attendees are. - **ID**: Opaque identifier of the event. - **Location**: Geographic location of the event as free-form text. - **Max Attendees**: The maximum number of attendees to include in the response. If there are more than the specified number of attendees, only the participant is returned. - **Repeat Frequency**: The repetition interval for recurring events. - **Repeat How Many Times?**: The number of instances to create for recurring events. - **Repeat Until**: The date at which recurring events should stop. - **RRULE**: Recurrence rule. When set, ignores the Repeat Frequency, Repeat How Many Times, and Repeat Until parameters. - **Send Updates**: Whether to send notifications about the update to the event. - **Show Me As**: Whether the event blocks time on the calendar. - **Start**: The start time of the event. - **Summary**: The title of the event. - **Visibility**: The visibility of the event: - **Confidential**: The event is private. This value is provided for compatibility. - **Default**: Uses the default visibility for events on the calendar. - **Public**: The event is public and the event details are visible to all readers of the calendar. - **Private**: The event is private and only event attendees may view event details. Refer to the [Events: update | Google Calendar](https://developers.google.com/calendar/api/v3/reference/events/update) API documentation for more information.
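The **Repeat Frequency**, **Repeat How Many Times?**, and **Repeat Until** parameters in the Create and Update operations are shorthand for an RFC 5545 recurrence rule, which you can also supply directly via **RRULE**. A hedged sketch of how such a rule string is assembled — the function is illustrative, not n8n code:

```python
# Illustrative: combine repeat parameters into an RFC 5545 RRULE string.
# COUNT and UNTIL are mutually exclusive in RFC 5545, so COUNT wins here.
def build_rrule(frequency, count=None, until=None):
    parts = [f"FREQ={frequency.upper()}"]
    if count is not None:
        parts.append(f"COUNT={count}")
    elif until is not None:
        # UNTIL expects a UTC timestamp such as 20241231T000000Z
        parts.append(f"UNTIL={until}")
    return "RRULE:" + ";".join(parts)

weekly_ten = build_rrule("weekly", count=10)
daily_until = build_rrule("daily", until="20241231T000000Z")
```

For example, a weekly event repeated ten times corresponds to `RRULE:FREQ=WEEKLY;COUNT=10`.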
# Google Drive node Use the Google Drive node to automate work in Google Drive, and integrate Google Drive with other applications. n8n has built-in support for a wide range of Google Drive features, including creating, updating, listing, deleting, and getting drives, files, and folders. On this page, you'll find a list of operations the Google Drive node supports and links to more resources. Credentials Refer to [Google Drive credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - **File** - [**Copy**](file-operations/#copy-a-file) a file - [**Create from text**](file-operations/#create-from-text) - [**Delete**](file-operations/#delete-a-file) a file - [**Download**](file-operations/#download-a-file) a file - [**Move**](file-operations/#move-a-file) a file - [**Share**](file-operations/#share-a-file) a file - [**Update**](file-operations/#update-a-file) a file - [**Upload**](file-operations/#upload-a-file) a file - **File/Folder** - [**Search**](file-folder-operations/#search-files-and-folders) files and folders - **Folder** - [**Create**](folder-operations/#create-a-folder) a folder - [**Delete**](folder-operations/#delete-a-folder) a folder - [**Share**](folder-operations/#share-a-folder) a folder - **Shared Drive** - [**Create**](shared-drive-operations/#create-a-shared-drive) a shared drive - [**Delete**](shared-drive-operations/#delete-a-shared-drive) a shared drive - [**Get**](shared-drive-operations/#get-a-shared-drive) a shared drive - [**Get Many**](shared-drive-operations/#get-many-shared-drives) shared drives - [**Update**](shared-drive-operations/#update-a-shared-drive) a shared drive ## Templates and examples **Fully Automated AI Video Generation & Multi-Platform Publishing** by Juan Carlos Cavero Gracia [View template details](https://n8n.io/workflows/3442-fully-automated-ai-video-generation-and-multi-platform-publishing/) **Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube** by 
Davide Boizza [View template details](https://n8n.io/workflows/4846-generate-ai-videos-with-google-veo3-save-to-google-drive-and-upload-to-youtube/) **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) [Browse Google Drive integration templates](https://n8n.io/integrations/google-drive/), or [search all templates](https://n8n.io/workflows/) ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Drive node common issues Here are some common errors and issues with the [Google Drive node](../) and steps to resolve or troubleshoot them. ## Google hasn't verified this app If using the OAuth authentication method, you might see the warning **Google hasn't verified this app**. To avoid this: - If your app **User Type** is **Internal**, create OAuth credentials from the same account you want to authenticate. - If your app **User Type** is **External**, you can add your email to the list of testers for the app: go to the [**Audience**](https://console.cloud.google.com/auth/audience) page and add the email you're signing in with to the list of **Test users**. 
If you need to use credentials generated by another account (by a developer or another third party), follow the instructions in [Google Cloud documentation | Authorization errors: Google hasn't verified this app](https://developers.google.com/nest/device-access/reference/errors/authorization#google_hasnt_verified_this_app). ## Google Cloud app becoming unauthorized For Google Cloud apps with **Publishing status** set to **Testing** and **User type** set to **External**, consent and tokens expire after seven days. Refer to [Google Cloud Platform Console Help | Setting up your OAuth consent screen](https://support.google.com/cloud/answer/10311615?hl=en#zippy=%2Ctesting) for more information. To resolve this, reconnect the app in the n8n credentials modal. ## Google Drive OAuth error If using the OAuth authentication method, you may see an error indicating that you can't sign in because the app doesn't meet Google's expectations for keeping apps secure. Most often, the actual cause of this issue is that the URLs don't match between Google's OAuth configuration and n8n. To avoid this, start by reviewing any links included in Google's error message. These contain details about the exact error that occurred. If you are self-hosting n8n, check the n8n configuration items used to construct external URLs. Verify that the [`N8N_EDITOR_BASE_URL`](../../../../../hosting/configuration/environment-variables/deployment/) and [`WEBHOOK_URL`](../../../../../hosting/configuration/configuration-examples/webhook-url/) environment variables use fully qualified domains. ## Get recent files from Google Drive To retrieve recent files from Google Drive, you need to sort files by modification time. To do this, you need to search for existing files and retrieve their modification times. Next, you can sort the files to find the most recent file and use another Google Drive node to target the file by ID. The process looks like this: 1. Add a **Google Drive** node to your canvas. 1.
Select the **File/Folder** resource and the **Search** operation. 1. Enable **Return All** to sort through all files. 1. Set the **What to Search** filter to **Files**. 1. In the **Options**, set the **Fields** to **All**. 1. Connect a **Sort** node to the output of the **Google Drive** node. 1. Choose **Simple** sort type. 1. Enter `modifiedTime` as the **Field Name** in the **Fields To Sort By** section. 1. Choose **Descending** sort order. 1. Add a **Limit** node to the output of the **Sort** node. 1. Set **Max Items** to **1** to keep the most recent file. 1. Connect another **Google Drive** node to the output of the **Limit** node. 1. Select **File** as the **Resource** and the operation of your choice. 1. In the **File** selection, choose **By ID**. 1. Select **Expression** and enter `{{ $json.id }}` as the expression. [View workflow file](/_workflows/integrations/builtin/app-nodes/n8n-nodes-base.googledrive/get-most-recent-file.json) # Google Drive File and Folder operations Use this operation to search for files and folders in Google Drive. Refer to [Google Drive](../) for more information on the Google Drive node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Search files and folders Use this operation to search for files and folders in a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File/Folder**. - **Operation**: Select **Search**. - **Search Method**: Choose how you want to search: - **Search File/Folder Name**: Fill out the **Search Query** with the name of the file or folder you want to search for. 
Returns files and folders that are partial matches for the query as well. - **Advanced Search**: Fill out the **Query String** to search for files and folders using [Google query string syntax](https://developers.google.com/drive/api/guides/search-files). - **Return All**: Choose whether to return all results or only up to a given limit. - **Limit**: The maximum number of items to return when **Return All** is disabled. - **Filter**: Choose whether to limit the scope of your search: - **Drive**: The drive you want to search in. By default, uses your personal "My Drive". Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - You can find the `driveId` by visiting the shared drive in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/driveId`. - **Folder**: The folder to search in. Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. - You can find the `folderId` by visiting the shared folder in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/folderId`. - **What to Search**: Whether to search for **Files and Folders**, **Files**, or **Folders**. - **Include Trashed Items**: Whether to also return items in the Drive's trash. ### Options - **Fields**: Select the fields to return. Can be one or more of the following: **[All]**, **explicitlyTrashed**, **exportLinks**, **hasThumbnail**, **iconLink**, **ID**, **Kind**, **mimeType**, **Name**, **Permissions**, **Shared**, **Spaces**, **Starred**, **thumbnailLink**, **Trashed**, **Version**, or **webViewLink**. Refer to the [Method: files.list | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/list) API documentation for more information. 
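Each result of this operation is a file resource carrying the fields you selected above, such as `modifiedTime`. The Search → Sort → Limit pattern described in the Google Drive common issues section (finding the most recent file) then reduces to a sort on that field — sketched here over made-up sample data:

```python
# Illustrative: emulate the Sort (descending modifiedTime) + Limit (1) steps
# applied to Search results. These file entries are made-up sample data.
files = [
    {"id": "a1", "name": "notes.txt", "modifiedTime": "2024-03-01T10:00:00.000Z"},
    {"id": "b2", "name": "report.pdf", "modifiedTime": "2024-04-15T08:30:00.000Z"},
    {"id": "c3", "name": "draft.doc", "modifiedTime": "2024-02-20T17:45:00.000Z"},
]

# RFC 3339 timestamps in the same UTC, Z-suffixed form sort correctly as
# plain strings, so no date parsing is needed for this comparison.
most_recent = sorted(files, key=lambda f: f["modifiedTime"], reverse=True)[0]
# most_recent["id"] is what the second Google Drive node would receive
# via the "By ID" selector and the {{ $json.id }} expression
```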
# Google Drive File operations Use these operations to create, delete, change, and manage files in Google Drive. Refer to [Google Drive](../) for more information on the Google Drive node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Copy a file Use this operation to copy a file to a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Copy**. - **File**: Choose a file you want to copy. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the file, or **By ID** to enter the `fileId`. - You can find the `fileId` in a shareable Google Drive file URL: `https://docs.google.com/document/d/fileId/edit#gid=0`. In your Google Drive, select **Share > Copy link** to get the shareable file URL. - **File Name**: The name to use for the new copy of the file. - **Copy In The Same Folder**: Choose whether to copy the file to the same folder. If disabled, set the following: - **Parent Drive**: Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - **Parent Folder**: Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. - You can find the `driveId` and `folderId` by visiting the shared drive or folder in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/driveId`.
### Options - **Copy Requires Writer Permissions**: Select whether to enable readers and commenters to copy, print, or download the new file. - **Description**: A short description of the file. Refer to the [Method: files.copy | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/copy) API documentation for more information. ## Create from text Use this operation to create a new file in a drive from provided text. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Create From Text**. - **File Content**: Enter the file content to use to create the new file. - **File Name**: The name to use for the new file. - **Parent Drive**: Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - **Parent Folder**: Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. You can find the `driveId` and `folderId` by visiting the shared drive or folder in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/driveId`. ### Options - **APP Properties**: A bundle of arbitrary key-value pairs which are private to the requesting app. - **Properties**: A bundle of arbitrary key-value pairs which are visible to all apps. - **Keep Revision Forever**: Choose whether to set the `keepForever` field in the new head revision. This only applies to files with binary content. You can keep a maximum of 200 revisions, after which you must delete the pinned revisions. - **OCR Language**: An [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language code to help the OCR interpret the content during import. - **Use Content As Indexable Text**: Choose whether to mark the uploaded content as indexable text.
- **Convert to Google Document**: Choose whether to create a Google Document instead of the default `.txt` format. You must enable the Google Docs API in the [Google API Console](https://console.cloud.google.com/apis/library/docs.googleapis.com) for this to work. Refer to the [Method: files.insert | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/insert) API documentation for more information. ## Delete a file Use this operation to delete a file from a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Delete**. - **File**: Choose a file you want to delete. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the file, or **By ID** to enter the `fileId`. - You can find the `fileId` in a shareable Google Drive file URL: `https://docs.google.com/document/d/fileId/edit#gid=0`. In your Google Drive, select **Share > Copy link** to get the shareable file URL. ### Options - **Delete Permanently**: Choose whether to delete the file now instead of moving it to the trash. Refer to the [Method: files.delete | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/delete) API documentation for more information. ## Download a file Use this operation to download a file from a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Download**. - **File**: Choose a file you want to download. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the file, or **By ID** to enter the `fileId`. - You can find the `fileId` in a shareable Google Drive file URL: `https://docs.google.com/document/d/fileId/edit#gid=0`. 
In your Google Drive, select **Share > Copy link** to get the shareable file URL. ### Options - **Put Output File in Field**: Choose the field name to place the binary file contents to make it available to following nodes. - **Google File Conversion**: Choose the formats to export as when downloading Google Files: - **Google Docs**: Choose the export format to use when downloading Google Docs files: **HTML**, **MS Word Document**, **Open Office Document**, **PDF**, **Rich Text (rtf)**, or **Text (txt)**. - **Google Drawings**: Choose the export format to use when downloading Google Drawing files: **JPEG**, **PDF**, **PNG**, or **SVG**. - **Google Slides**: Choose the export format to use when downloading Google Slides files: **MS PowerPoint**, **OpenOffice Presentation**, or **PDF**. - **Google Sheets**: Choose the export format to use when downloading Google Sheets files: **CSV**, **MS Excel**, **Open Office Sheet**, or **PDF**. - **File Name**: The name to use for the downloaded file. Refer to the [Method: files.get | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/get) API documentation for more information. ## Move a file Use this operation to move a file to a different location in a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Move**. - **File**: Choose a file you want to move. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the file, or **By ID** to enter the `fileId`. - You can find the `fileId` in a shareable Google Drive file URL: `https://docs.google.com/document/d/fileId/edit#gid=0`. In your Google Drive, select **Share > Copy link** to get the shareable file URL. 
- **Parent Drive**: Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - **Parent Folder**: Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. You can find the `driveId` and `folderId` by visiting the shared drive or folder in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/driveId`. Refer to the [Method: parents.insert | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/parents/insert) API documentation for more information. ## Share a file Use this operation to add sharing permissions to a file. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Share**. - **File**: Choose a file you want to share. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the file, or **By ID** to enter the `fileId`. - You can find the `fileId` in a shareable Google Drive file URL: `https://docs.google.com/document/d/fileId/edit#gid=0`. In your Google Drive, select **Share > Copy link** to get the shareable file URL. - **Permissions**: The permissions to add to the file: - **Role**: Select what users can do with the file. Can be one of **Commenter**, **File Organizer**, **Organizer**, **Owner**, **Reader**, **Writer**. - **Type**: Select the scope of the new permission: - **User**: Grant permission to a specific user, defined by entering their **Email Address**. - **Group**: Grant permission to a specific group, defined by entering its **Email Address**. - **Domain**: Grant permission to a complete domain, defined by the **Domain**. - **Anyone**: Grant permission to anyone.
Can optionally **Allow File Discovery** to make the file discoverable through search. ### Options - **Email Message**: A plain text custom message to include in the notification email. - **Move to New Owners Root**: Available when trying to transfer ownership while sharing an item not in a shared drive. When enabled, moves the file to the new owner's My Drive root folder. - **Send Notification Email**: Whether to send a notification email when sharing to users or groups. - **Transfer Ownership**: Whether to transfer ownership to the specified user and downgrade the current owner to writer permissions. - **Use Domain Admin Access**: Whether to perform the action as a domain administrator. Refer to the [REST Resources: files | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files) API documentation for more information. ## Update a file Use this operation to update a file. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Update**. - **File to Update**: Choose a file you want to update. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the file, or **By ID** to enter the `fileId`. - You can find the `fileId` in a shareable Google Drive file URL: `https://docs.google.com/document/d/fileId/edit#gid=0`. In your Google Drive, select **Share > Copy link** to get the shareable file URL. - **Change File Content**: Choose whether to send new binary data to replace the existing file content. If enabled, fill in the following: - **Input Data Field Name**: The name of the input field that contains the binary file data you wish to use. - **New Updated File Name**: A new name for the file if you want to update the filename. ### Options - **APP Properties**: A bundle of arbitrary key-value pairs which are private to the requesting app. 
- **Properties**: A bundle of arbitrary key-value pairs which are visible to all apps. - **Keep Revision Forever**: Choose whether to set the `keepForever` field in the new head revision. This only applies to files with binary content. You can keep a maximum of 200 revisions, after which you must delete the pinned revisions. - **OCR Language**: An [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language code to help the OCR interpret the content during import. - **Use Content As Indexable Text**: Choose whether to mark the uploaded content as indexable text. - **Move to Trash**: Whether to move the file to the trash. Only possible for the file owner. - **Return Fields**: Return metadata fields about the file. Can be one or more of the following: **[All]**, **explicitlyTrashed**, **exportLinks**, **hasThumbnail**, **iconLink**, **ID**, **Kind**, **mimeType**, **Name**, **Permissions**, **Shared**, **Spaces**, **Starred**, **thumbnailLink**, **Trashed**, **Version**, or **webViewLink**. Refer to the [Method: files.update | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/update) API documentation for more information. ## Upload a file Use this operation to upload a file. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **File**. - **Operation**: Select **Upload**. - **Input Data Field Name**: The name of the input field that contains the binary file data you wish to use. - **File Name**: The name to use for the new file. - **Parent Drive**: Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - **Parent Folder**: Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. 
You can find the `driveId` and `folderId` by visiting the shared drive or folder in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/driveId`. ### Options - **APP Properties**: A bundle of arbitrary key-value pairs which are private to the requesting app. - **Properties**: A bundle of arbitrary key-value pairs which are visible to all apps. - **Keep Revision Forever**: Choose whether to set the `keepForever` field in the new head revision. This only applies to files with binary content. You can keep a maximum of 200 revisions, after which you must delete the pinned revisions. - **OCR Language**: An [ISO 639-1](https://en.wikipedia.org/wiki/ISO_639-1) language code to help the OCR interpret the content during import. - **Use Content As Indexable Text**: Choose whether to mark the uploaded content as indexable text. - **Simplify Output**: Choose whether to return a simplified version of the response instead of including all fields. Refer to the [Method: files.insert | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/insert) API documentation for more information. # Google Drive Folder operations Use this operation to create, delete, and share folders in Google Drive. Refer to [Google Drive](../) for more information on the Google Drive node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Create a folder Use this operation to create a new folder in a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Folder**. - **Operation**: Select **Create**. 
- **Folder Name**: The name to use for the new folder. - **Parent Drive**: Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - **Parent Folder**: Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. You can find the `driveId` and `folderId` by visiting the shared drive or folder in your browser and copying the last URL component: `https://drive.google.com/drive/u/1/folders/driveId`. ### Options - **Simplify Output**: Choose whether to return a simplified version of the response instead of including all fields. - **Folder Color**: The color of the folder as an RGB hex string. Refer to the [Method: files.insert | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/insert) API documentation for more information. ## Delete a folder Use this operation to delete a folder from a drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Folder**. - **Operation**: Select **Delete**. - **Folder**: Choose a folder you want to delete. - Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. - You can find the `folderId` in a Google Drive folder URL: `https://drive.google.com/drive/u/0/folders/folderId`. ### Options - **Delete Permanently**: Choose whether to delete the folder now instead of moving it to the trash. Refer to the [Method: files.delete | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/delete) API documentation for more information. ## Share a folder Use this operation to add sharing permissions to a folder. 
Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Folder**. - **Operation**: Select **Share**. - **Folder**: Choose a folder you want to share. - Select **From list** to choose the folder from the dropdown list, **By URL** to enter the URL of the folder, or **By ID** to enter the `folderId`. - You can find the `folderId` in a Google Drive folder URL: `https://drive.google.com/drive/u/0/folders/folderId`. - **Permissions**: The permissions to add to the folder: - **Role**: Select what users can do with the folder. Can be one of **Commenter**, **File Organizer**, **Organizer**, **Owner**, **Reader**, **Writer**. - **Type**: Select the scope of the new permission: - **User**: Grant permission to a specific user, defined by entering their **Email Address**. - **Group**: Grant permission to a specific group, defined by entering its **Email Address**. - **Domain**: Grant permission to a complete domain, defined by the **Domain**. - **Anyone**: Grant permission to anyone. You can optionally turn on **Allow File Discovery** to make the folder discoverable through search. ### Options - **Email Message**: A plain text custom message to include in the notification email. - **Move to New Owners Root**: Available when trying to transfer ownership while sharing an item not in a shared drive. When enabled, moves the folder to the new owner's My Drive root folder. - **Send Notification Email**: Whether to send a notification email when sharing to users or groups. - **Transfer Ownership**: Whether to transfer ownership to the specified user and downgrade the current owner to writer permissions. - **Use Domain Admin Access**: Whether to perform the action as a domain administrator. Refer to the [REST Resources: files | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files) API documentation for more information. 
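The **Role** and **Type** options in the share operations above map onto the Drive API's permissions resource. As an illustration only (not an official n8n helper — the function name and validation logic are assumptions), a Python sketch of how the permission body these operations send might be assembled, for example in a Code node:

```python
# Roles as accepted by the Drive API permissions resource (camelCase).
VALID_ROLES = {"commenter", "fileOrganizer", "organizer", "owner", "reader", "writer"}

def build_permission(role, perm_type, email=None, domain=None, allow_file_discovery=None):
    """Assemble a Drive permissions body mirroring the node's Role/Type options.

    - user/group permissions require an email address
    - domain permissions require a domain
    - allowFileDiscovery only applies to domain/anyone permissions
    """
    if role not in VALID_ROLES:
        raise ValueError(f"role must be one of {sorted(VALID_ROLES)}")
    body = {"role": role, "type": perm_type}
    if perm_type in ("user", "group"):
        if not email:
            raise ValueError("user/group permissions need an emailAddress")
        body["emailAddress"] = email
    elif perm_type == "domain":
        if not domain:
            raise ValueError("domain permissions need a domain")
        body["domain"] = domain
    if perm_type in ("domain", "anyone") and allow_file_discovery is not None:
        body["allowFileDiscovery"] = allow_file_discovery
    return body
```

For example, `build_permission("reader", "anyone", allow_file_discovery=True)` produces the body for a publicly discoverable link, matching the **Anyone** type with **Allow File Discovery** turned on.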
# Google Drive Shared Drive operations Use this operation to create, delete, get, and update shared drives in Google Drive. Refer to [Google Drive](../) for more information on the Google Drive node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Create a shared drive Use this operation to create a new shared drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Shared Drive**. - **Operation**: Select **Create**. - **Name**: The name to use for the new shared drive. ### Options - **Capabilities**: The capabilities to set for the new shared drive (see [REST Resources: drives | Google Drive](https://developers.google.com/drive/api/reference/rest/v3/drives) for more details): - **Can Add Children**: Whether the current user can add children to folders in this shared drive. - **Can Change Copy Requires Writer Permission Restriction**: Whether the current user can change the `copyRequiresWriterPermission` restriction on this shared drive. - **Can Change Domain Users Only Restriction**: Whether the current user can change the `domainUsersOnly` restriction on this shared drive. - **Can Change Drive Background**: Whether the current user can change the background on this shared drive. - **Can Change Drive Members Only Restriction**: Whether the current user can change the `driveMembersOnly` restriction on this shared drive. - **Can Comment**: Whether the current user can comment on files in this shared drive. - **Can Copy**: Whether the current user can copy files in this shared drive. 
- **Can Delete Children**: Whether the current user can delete children from folders in this shared drive. - **Can Delete Drive**: Whether the current user can delete this shared drive. This operation may still fail if there are items not in the trash in the shared drive. - **Can Download**: Whether the current user can download files from this shared drive. - **Can Edit**: Whether the current user can edit files from this shared drive. - **Can List Children**: Whether the current user can list the children of folders in this shared drive. - **Can Manage Members**: Whether the current user can add, remove, or change the role of members of this shared drive. - **Can Read Revisions**: Whether the current user can read the revisions resource of files in this shared drive. - **Can Rename Drive**: Whether the current user can rename this shared drive. - **Can Share**: Whether the current user can share files or folders in this shared drive. - **Can Trash Children**: Whether the current user can trash children from folders in this shared drive. - **Color RGB**: The color of this shared drive as an RGB hex string. - **Hidden**: Whether to hide this shared drive in the default view. - **Restrictions**: Restrictions to add to this shared drive (see [REST Resources: drives | Google Drive](https://developers.google.com/drive/api/reference/rest/v3/drives) for more details): - **Admin Managed Restrictions**: When enabled, restrictions here will override the similarly named fields to true for any file inside of this shared drive. - **Copy Requires Writer Permission**: Whether the options to copy, print, or download files inside this shared drive should be disabled for readers and commenters. - **Domain Users Only**: Whether to restrict access to this shared drive and items inside this shared drive to users of the domain to which this shared drive belongs. - **Drive Members Only**: Whether to restrict access to items inside this shared drive to its members. 
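The **Name**, **Color RGB**, **Hidden**, and **Restrictions** options above correspond to fields on the drives resource. As a sketch under those assumptions (the helper function itself is illustrative, not part of n8n), the request body for creating a shared drive could be assembled like this:

```python
# Restriction keys as defined on the Drive API drives resource.
ALLOWED_RESTRICTIONS = {
    "adminManagedRestrictions",
    "copyRequiresWriterPermission",
    "domainUsersOnly",
    "driveMembersOnly",
}

def build_shared_drive_body(name, color_rgb=None, hidden=None, restrictions=None):
    """Assemble a drives.create request body from the node options above.

    Only sets fields that were explicitly provided, so Google applies
    its own defaults for everything else.
    """
    body = {"name": name}
    if color_rgb is not None:
        body["colorRgb"] = color_rgb
    if hidden is not None:
        body["hidden"] = hidden
    if restrictions:
        unknown = set(restrictions) - ALLOWED_RESTRICTIONS
        if unknown:
            raise ValueError(f"Unknown restriction(s): {sorted(unknown)}")
        body["restrictions"] = dict(restrictions)
    return body
```

Note that the capability fields are reported by Google about the current user; the create call only accepts writable fields such as the name, color, and restrictions.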
Refer to the [Method: drives.insert | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/drives/insert) API documentation for more information. ## Delete a shared drive Use this operation to delete a shared drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Shared Drive**. - **Operation**: Select **Delete**. - **Shared Drive**: Choose the shared drive you want to delete. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - You can find the `driveId` in the URL for the shared Google Drive: `https://drive.google.com/drive/u/0/folders/driveId`. Refer to the [Method: drives.delete | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/drives/delete) API documentation for more information. ## Get a shared drive Use this operation to get a shared drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Shared Drive**. - **Operation**: Select **Get**. - **Shared Drive**: Choose the shared drive you want to get. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - You can find the `driveId` in the URL for the shared Google Drive: `https://drive.google.com/drive/u/0/folders/driveId`. ### Options - **Use Domain Admin Access**: Whether to issue the request as a domain administrator. When enabled, grants the requester access if they're an administrator of the domain to which the shared drive belongs. Refer to the [Method: drives.get | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/drives/get) API documentation for more information. 
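Several operations above note that you can find the `driveId`, `folderId`, or `fileId` by copying the last component of the browser URL. If you want to automate that step, for example in a Code node, a minimal sketch (the function and regexes are illustrative assumptions, covering only the two URL shapes shown in this doc):

```python
import re

def extract_drive_id(url):
    """Pull the ID out of common Google Drive/Docs URLs.

    Handles:
      https://drive.google.com/drive/u/0/folders/<id>   (folders, shared drives)
      https://docs.google.com/document/d/<id>/edit#gid=0  (files, spreadsheets)
    Returns None when no ID is found.
    """
    for pattern in (r"/folders/([\w-]+)", r"/d/([\w-]+)"):
        match = re.search(pattern, url)
        if match:
            return match.group(1)
    return None
```

This is only a convenience for scripted workflows; the **From list** and **By URL** modes of the node handle ID resolution for you.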
## Get many shared drives Use this operation to get many shared drives. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Shared Drive**. - **Operation**: Select **Get Many**. - **Return All**: Choose whether to return all results or only up to a given limit. - **Limit**: The maximum number of items to return when **Return All** is disabled. - **Shared Drive**: Choose the shared drive you want to get. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - You can find the `driveId` in the URL for the shared Google Drive: `https://drive.google.com/drive/u/0/folders/driveId`. ### Options - **Query**: The query string to use to search for shared drives. See [Search for shared drives | Google Drive](https://developers.google.com/drive/api/guides/search-shareddrives) for more information. - **Use Domain Admin Access**: Whether to issue the request as a domain administrator. When enabled, grants the requester access if they're an administrator of the domain to which the shared drive belongs. Refer to the [Method: drives.get | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/drives/get) API documentation for more information. ## Update a shared drive Use this operation to update a shared drive. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Drive credentials](../../../credentials/google/). - **Resource**: Select **Shared Drive**. - **Operation**: Select **Update**. - **Shared Drive**: Choose the shared drive you want to update. - Select **From list** to choose the drive from the dropdown list, **By URL** to enter the URL of the drive, or **By ID** to enter the `driveId`. - You can find the `driveId` in the URL for the shared Google Drive: `https://drive.google.com/drive/u/0/folders/driveId`. 
### Update Fields - **Color RGB**: The color of this shared drive as an RGB hex string. - **Name**: The updated name for the shared drive. - **Restrictions**: Restrictions for this shared drive (see [REST Resources: drives | Google Drive](https://developers.google.com/drive/api/reference/rest/v3/drives) for more details): - **Admin Managed Restrictions**: When enabled, restrictions here will override the similarly named fields to true for any file inside of this shared drive. - **Copy Requires Writer Permission**: Whether the options to copy, print, or download files inside this shared drive should be disabled for readers and commenters. - **Domain Users Only**: Whether to restrict access to this shared drive and items inside this shared drive to users of the domain to which this shared drive belongs. - **Drive Members Only**: Whether to restrict access to items inside this shared drive to its members. Refer to the [Method: drives.update | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/drives/update) API documentation for more information. # Google Sheets Use the Google Sheets node to automate work in Google Sheets, and integrate Google Sheets with other applications. n8n has built-in support for a wide range of Google Sheets features, including creating, updating, deleting, appending, removing and getting documents. On this page, you'll find a list of operations the Google Sheets node supports and links to more resources. Credentials Refer to [Google Sheets credentials](../../credentials/google/) for guidance on setting up authentication. ## Operations - **Document** - [**Create**](document-operations/#create-a-spreadsheet) a spreadsheet. - [**Delete**](document-operations/#delete-a-spreadsheet) a spreadsheet. - **Sheet Within Document** - [**Append or Update Row**](sheet-operations/#append-or-update-row): Append a new row, or update the current one if it already exists. - [**Append Row**](sheet-operations/#append-row): Create a new row. 
- [**Clear**](sheet-operations/#clear-a-sheet) all data from a sheet. - [**Create**](sheet-operations/#create-a-new-sheet) a new sheet. - [**Delete**](sheet-operations/#delete-a-sheet) a sheet. - [**Delete Rows or Columns**](sheet-operations/#delete-rows-or-columns): Delete columns and rows from a sheet. - [**Get Row(s)**](sheet-operations/#get-rows): Read all rows in a sheet. - [**Update Row**](sheet-operations/#update-row): Update a row in a sheet. ## Templates and examples **Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram** by Dr. Firas [View template details](https://n8n.io/workflows/5338-generate-ai-viral-videos-with-seedance-and-upload-to-tiktok-youtube-and-instagram/) **Fully Automated AI Video Generation & Multi-Platform Publishing** by Juan Carlos Cavero Gracia [View template details](https://n8n.io/workflows/3442-fully-automated-ai-video-generation-and-multi-platform-publishing/) **Generate AI Videos with Google Veo3, Save to Google Drive and Upload to YouTube** by Davide Boizza [View template details](https://n8n.io/workflows/4846-generate-ai-videos-with-google-veo3-save-to-google-drive-and-upload-to-youtube/) [Browse Google Sheets integration templates](https://n8n.io/integrations/google-sheets/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Google Sheet's API documentation](https://developers.google.com/sheets/api) for more information about the service. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. 
Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # Google Sheets node common issues Here are some common errors and issues with the [Google Sheets node](../) and steps to resolve or troubleshoot them. ## Append an array To insert an array of data into Google Sheets, you must convert the array into a valid JSON (key, value) format. To do so, consider using: 1. The [Split Out](../../../core-nodes/n8n-nodes-base.splitout/) node. 1. The [AI Transform](../../../core-nodes/n8n-nodes-base.aitransform/) node. For example, try entering something like: ``` Convert 'languages' array to JSON (key, value) pairs. ``` 1. The [Code node](../../../core-nodes/n8n-nodes-base.code/). ## Column names were updated after the node's setup You'll receive this error if the Google Sheet's column names have changed since you set up the node. To refresh the column names, re-select **Mapping Column Mode**. This should prompt the node to fetch the column names again. Once the column names refresh, update the node parameters. # Google Sheets Document operations Use this operation to create or delete a Google spreadsheet from Google Sheets. Refer to [Google Sheets](../) for more information on the Google Sheets node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Create a spreadsheet Use this operation to create a new spreadsheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Document**. - **Operation**: Select **Create**. 
- **Title**: Enter the title of the new spreadsheet you want to create. - **Sheets**: Add the **Title(s)** of the sheet(s) you want to create within the spreadsheet. ### Options - **Locale**: Enter the locale of the spreadsheet. This affects formatting details such as functions, dates, and currency. Use one of the following formats: - `en` (639-1) - `fil` (639-2 if no 639-1 format exists) - `en_US` (combination of ISO language and country). - Refer to [List of ISO 639 language codes](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes) and [List of ISO 3166 country codes](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes) for language and country codes. Note that Google doesn't support all locales/languages. - **Recalculation Interval**: Enter the desired recalculation interval for the spreadsheet functions. This affects how often `NOW`, `TODAY`, `RAND`, and `RANDBETWEEN` are updated. Select **On Change** for recalculating whenever there is a change in the spreadsheet, **Minute** for recalculating every minute, or **Hour** for recalculating every hour. Refer to [Set a spreadsheet’s location & calculation settings](https://support.google.com/docs/answer/58515) for more information about these options. Refer to the [Method: spreadsheets.create | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/create) API documentation for more information. ## Delete a spreadsheet Use this operation to delete an existing spreadsheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Document**. - **Operation**: Select **Delete**. - **Document**: Choose a spreadsheet you want to delete. - Select **From list** to choose the title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. 
- You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. Refer to the [Method: files.delete | Google Drive](https://developers.google.com/drive/api/reference/rest/v2/files/delete) API documentation for more information. # Google Sheets Sheet Within Document operations Use this operation to create, update, clear or delete a sheet in a Google spreadsheet from Google Sheets. Refer to [Google Sheets](../) for more information on the Google Sheets node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Append or Update Row Use this operation to update an existing row or add a new row at the end of the data if a matching entry isn't found in a sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Append or Update Row**. - **Document**: Choose a spreadsheet that contains the sheet you want to append or update row(s) to. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose a sheet you want to append or update row(s) to. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. 
- You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. - **Mapping Column Mode**: - **Map Each Column Manually**: Enter **Values to Send** for each column. - **Map Automatically**: n8n looks for incoming data that matches the columns in Google Sheets automatically. In this mode, make sure the incoming data fields are the same as the columns in Google Sheets. (Use an [Edit Fields](../../../core-nodes/n8n-nodes-base.set/) node before this node to change them if required.) - **Nothing**: Don't map any data. ### Options - **Cell Format**: Use this option to choose how to format the data in cells. Refer to [Google Sheets API | CellFormat](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#CellFormat) for more information. - **Let Google Sheets format** (default): n8n formats text and numbers in the cells according to Google Sheets' default settings. - **Let n8n format**: New cells in your sheet will have the same data types as the input data provided by n8n. - **Data Location on Sheet**: Use this option when you need to specify the data range on your sheet. - **Header Row**: Specify the row index that contains the column headers. - **First Data Row**: Specify the row index where the actual data starts. - **Handling extra fields in input**: When using **Mapping Column Mode > Map Automatically**, use this option to decide how to handle fields in the input data that don't match any existing columns in the sheet. - **Insert in New Column(s)** (default): Adds new columns for any extra data. - **Ignore Them**: Ignores extra data that don't match the existing columns. - **Error**: Throws an error and stops execution. - **Use Append**: Turn on this option to use the [Google API append endpoint](https://developers.google.com/sheets/api/guides/values#append_values) for adding new data rows. - By default, n8n appends empty rows or columns and then adds the new data. 
This approach can ensure data alignment but may be less efficient. Using the append endpoint can lead to better performance by minimizing the number of API calls and simplifying the process. But if the existing sheet data has inconsistencies such as gaps or breaks between rows and columns, n8n may add the new data in the wrong place, leading to misalignment issues. - Use this option when performance is a priority and the data structure in the sheet is consistent without gaps. Refer to the [Method: spreadsheets.values.update | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/update) API documentation for more information. ## Append Row Use this operation to append a new row at the end of the data in a sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Append Row**. - **Document**: Choose a spreadsheet with the sheet you want to append a row to. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose a sheet you want to append a row to. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. - **Mapping Column Mode**: - **Map Each Column Manually**: Select the **Column to Match On** when finding the rows to update. Enter **Values to Send** for each column. 
- **Map Automatically**: n8n looks for incoming data that matches the columns in Google Sheets automatically. In this mode, make sure the incoming data fields are the same as the columns in Google Sheets. (Use an [Edit Fields](../../../core-nodes/n8n-nodes-base.set/) node before this node to change them if required.) - **Nothing**: Don't map any data. ### Options - **Cell Format**: Use this option to choose how to format the data in cells. Refer to [Google Sheets API | CellFormat](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#CellFormat) for more information. - **Let Google Sheets format** (default): n8n formats text and numbers in the cells according to Google Sheets' default settings. - **Let n8n format**: New cells in your sheet will have the same data types as the input data provided by n8n. - **Data Location on Sheet**: Use this option when you need to specify the data range on your sheet. - **Header Row**: Specify the row index that contains the column headers. - **First Data Row**: Specify the row index where the actual data starts. - **Handling extra fields in input**: When using **Mapping Column Mode > Map Automatically**, use this option to decide how to handle fields in the input data that don't match any existing columns in the sheet. - **Insert in New Column(s)** (default): Adds new columns for any extra data. - **Ignore Them**: Ignores extra data that don't match the existing columns. - **Error**: Throws an error and stops execution. - **Use Append**: Turn on this option to use the [Google API append endpoint](https://developers.google.com/sheets/api/guides/values#append_values) for adding new data rows. - By default, n8n appends empty rows or columns and then adds the new data. This approach can ensure data alignment but may be less efficient. Using the append endpoint can lead to better performance by minimizing the number of API calls and simplifying the process. 
But if the existing sheet data has inconsistencies such as gaps or breaks between rows and columns, n8n may add the new data in the wrong place, leading to misalignment issues. - Use this option when performance is a priority and the data structure in the sheet is consistent without gaps. Refer to the [Method: spreadsheets.values.append | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/append) API documentation for more information. ## Clear a sheet Use this operation to clear all data from a sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Clear**. - **Document**: Choose a spreadsheet with the sheet you want to clear data from. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose a sheet you want to clear data from. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. - **Clear**: Select what data you want cleared from the sheet. - **Whole Sheet**: Clear the entire sheet's data. Turn on **Keep First Row** to keep the first row of the sheet. - **Specific Rows**: Clear data from specific rows. Also enter: - **Start Row Number**: Enter the first row number you want to clear. - **Number of Rows to Delete**: Enter the number of rows to clear. `1` clears data only in the row entered in **Start Row Number**. 
- **Specific Columns**: Clear data from specific columns. Also enter: - **Start Column**: Enter the first column you want to clear using the letter notation. - **Number of Columns to Delete**: Enter the number of columns to clear. `1` clears data only in the **Start Column**. - **Specific Range**: Enter the table range to clear data from, in [A1 notation](https://developers.google.com/sheets/api/guides/concepts#cell). Refer to the [Method: spreadsheets.values.clear | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets.values/clear) API documentation for more information. ## Create a new sheet Use this operation to create a new sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Create**. - **Document**: Choose a spreadsheet in which you want to create a new sheet. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Title**: Enter the title for your new sheet. ### Options - **Hidden**: Turn on this option to keep the sheet hidden in the UI. - **Right To Left**: Turn on this option to use an RTL sheet instead of an LTR sheet. - **Sheet ID**: Enter the ID of the sheet. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId` - **Sheet Index**: By default, the new sheet is the last sheet in the spreadsheet. To override this behavior, enter the index you want the new sheet to use. When you add a sheet at a given index, Google increments the indices for all following sheets.
Refer to the [Sheets | SheetProperties](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/sheets#SheetProperties) documentation for more information. - **Tab Color**: Enter the color as a hex code or use the color picker to set the color of the tab in the UI. Refer to the [Method: spreadsheets.batchUpdate | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/batchUpdate) API documentation for more information. ## Delete a sheet Use this operation to permanently delete a sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Delete**. - **Document**: Choose a spreadsheet that contains the sheet you want to delete. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose the sheet you want to delete. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the name of the sheet. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. Refer to the [Method: spreadsheets.batchUpdate | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/batchUpdate) API documentation for more information. ## Delete Rows or Columns Use this operation to delete rows or columns in a sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**.
- **Operation**: Select **Delete Rows or Columns**. - **Document**: Choose a spreadsheet that contains the sheet you want to delete rows or columns from. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose the sheet in which you want to delete rows or columns. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the name of the sheet. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. - **Start Row Number** or **Start Column**: Enter the row number or column letter to start deleting. - **Number of Rows to Delete** or **Number of Columns to Delete**: Enter the number of rows or columns to delete. Refer to the [Method: spreadsheets.batchUpdate | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/batchUpdate) API documentation for more information. ## Get Row(s) Use this operation to read one or more rows from a sheet. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Get Row(s)**. - **Document**: Choose a spreadsheet that contains the sheet you want to get rows from. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose a sheet you want to read rows from.
- Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the name of the sheet. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. - **Filters**: By default, the node returns all rows in the sheet. Set filters to return a limited set of results: - **Column**: Select the column in your sheet to search against. - **Value**: Enter a cell value to search for. You can drag input data parameters here. If your filter matches multiple rows, n8n returns the first result. If you want all matching rows: 1. Under **Options**, select **Add Option** > **When Filter Has Multiple Matches**. 1. Change **When Filter Has Multiple Matches** to **Return All Matches**. ### Options - **Data Location on Sheet**: Use this option to specify a data range. By default, n8n detects the range automatically, up to the last row in the sheet. - **Output Formatting**: Use this option to choose how n8n formats the data returned by Google Sheets. - **General Formatting**: - **Values (unformatted)** (default): n8n removes currency signs and other special formatting. Data type remains as number. - **Values (formatted)**: n8n displays the values as they appear in Google Sheets (for example, retaining commas or currency signs) by converting the data type from number to string. - **Formulas**: n8n returns the formula. It doesn't calculate the formula output. For example, if cell B2 has the formula `=A2`, n8n returns B2's value as `=A2` (in text). Refer to [About date & time values | Google Sheets](https://developers.google.com/sheets/api/guides/formats#about_date_time_values) for more information. - **Date Formatting**: Refer to [DateTimeRenderOption | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/DateTimeRenderOption) for more information.
- **Formatted Text** (default): As displayed in Google Sheets, which depends on the spreadsheet locale. For example `01/01/2024`. - **Serial Number**: Number of days since December 30th, 1899. - **When Filter Has Multiple Matches**: Set to **Return All Matches** to get multiple matches. By default, only the first result is returned. First row n8n treats the first row in a Google Sheet as a heading row, and doesn't return it when reading all rows. If you want to read the first row, use the **Options** to set **Data Location on Sheet**. Refer to the [Method: spreadsheets.batchUpdate | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/batchUpdate) API documentation for more information. ## Update Row Use this operation to update an existing row in a sheet. This operation only updates existing rows. To append rows when a matching entry isn't found in a sheet, use the **Append or Update Row** operation instead. Enter these parameters: - **Credential to connect with**: Create or select an existing [Google Sheets credentials](../../../credentials/google/). - **Resource**: Select **Sheet Within Document**. - **Operation**: Select **Update Row**. - **Document**: Choose a spreadsheet with the sheet you want to update. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet**: Choose a sheet you want to update. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`.
- **Mapping Column Mode**: - **Map Each Column Manually**: Enter **Values to Send** for each column. - **Map Automatically**: n8n looks for incoming data that matches the columns in Google Sheets automatically. In this mode, make sure the incoming data fields are the same as the columns in Google Sheets. (Use an [Edit Fields](../../../core-nodes/n8n-nodes-base.set/) node before this node to change them if required.) - **Nothing**: Don't map any data. ### Options - **Cell Format**: Use this option to choose how to format the data in cells. Refer to [Google Sheets API | CellFormat](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/cells#CellFormat) for more information. - **Let Google Sheets format** (default): n8n formats text and numbers in the cells according to Google Sheets' default settings. - **Let n8n format**: New cells in your sheet will have the same data types as the input data provided by n8n. - **Data Location on Sheet**: Use this option when you need to specify the data range on your sheet. - **Header Row**: Specify the row index that contains the column headers. - **First Data Row**: Specify the row index where the actual data starts. Refer to the [Method: spreadsheets.batchUpdate | Google Sheets](https://developers.google.com/sheets/api/reference/rest/v4/spreadsheets/batchUpdate) API documentation for more information. # MySQL node Use the MySQL node to automate work in MySQL, and integrate MySQL with other applications. n8n has built-in support for a wide range of MySQL features, including executing SQL queries and inserting and updating rows in a database. On this page, you'll find a list of operations the MySQL node supports and links to more resources. Credentials Refer to [MySQL credentials](../../credentials/mysql/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent.
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Delete - Execute SQL - Insert - Insert or Update - Select - Update ## Templates and examples **Generate SQL queries from schema only - AI-powered** by Yulia [View template details](https://n8n.io/workflows/2508-generate-sql-queries-from-schema-only-ai-powered/) **Generate Monthly Financial Reports with Gemini AI, SQL, and Outlook** by Amjid Ali [View template details](https://n8n.io/workflows/3617-generate-monthly-financial-reports-with-gemini-ai-sql-and-outlook/) **Import CSV into MySQL** by Eduard [View template details](https://n8n.io/workflows/1839-import-csv-into-mysql/) [Browse MySQL integration templates](https://n8n.io/integrations/mysql/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [MySQL's Connectors and APIs documentation](https://dev.mysql.com/doc/index-connectors.html) for more information about the service. Refer to MySQL's [SELECT statement documentation](https://dev.mysql.com/doc/refman/8.4/en/select.html) for more information on writing SQL queries. ## Use query parameters When creating a query to run on a MySQL database, you can use the **Query Parameters** field in the **Options** section to load data into the query. n8n sanitizes data in query parameters, which prevents SQL injection. For example, you want to find a person by their email address. Given the following input data: ``` [ { "email": "alex@example.com", "name": "Alex", "age": 21 }, { "email": "jamie@example.com", "name": "Jamie", "age": 33 } ] ``` You can write a query like: ``` SELECT * FROM $1:name WHERE email = $2; ``` Then in **Query Parameters**, provide the field values to use. You can provide fixed values or expressions. 
For this example, use expressions so the node can pull the email address from each input item in turn: ``` // users is an example table name users, {{ $json.email }} ``` ## Common issues For common errors or issues and suggested resolution steps, refer to [Common issues](common-issues/). # MySQL node common issues Here are some common errors and issues with the [MySQL node](../) and steps to resolve or troubleshoot them. ## Update rows by composite key The MySQL node's **Update** operation lets you update rows in a table by providing a **Column to Match On** and a value. This works for tables where a single column's value can uniquely identify individual rows. You can't use this pattern for tables that use [composite keys](https://en.wikipedia.org/wiki/Composite_key), where you need multiple columns to uniquely identify a row. An example of this is MySQL's [`user` table](https://mariadb.com/kb/en/mysql-user-table/) in the `mysql` database, where you need both the `user` and `host` columns to uniquely identify rows. To update tables with composite keys, write the query manually with the **Execute SQL** operation instead. There, you can match on multiple values, like in this example which matches on both `customer_id` and `product_id`: ``` UPDATE orders SET quantity = 3 WHERE customer_id = 538 AND product_id = 800; ``` ## Can't connect to a local MySQL server when using Docker When you run either n8n or MySQL in Docker, you need to configure the network so that n8n can connect to MySQL. The solution depends on how you're hosting the two components. ### If only MySQL is in Docker If only MySQL is running in Docker, configure MySQL to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). When running the container, [publish the port](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/) with the `-p` flag.
By default, MySQL runs on port 3306, so your Docker command should look like this: ``` docker run -p 3306:3306 --name my-mysql -d mysql:latest ``` When configuring [MySQL credentials](../../../credentials/mysql/), the `localhost` address should work without a problem (set the **Host** to `localhost`). ### If only n8n is in Docker If only n8n is running in Docker, configure MySQL to listen on all interfaces by binding to `0.0.0.0` on the host. If you are running n8n in Docker on **Linux**, use the `--add-host` flag to map `host.docker.internal` to `host-gateway` when you start the container. For example: ``` docker run -it --rm --add-host host.docker.internal:host-gateway --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n ``` If you are using Docker Desktop, this is automatically configured for you. When configuring [MySQL credentials](../../../credentials/mysql/), use `host.docker.internal` as the **Host** address instead of `localhost`. ### If MySQL and n8n are running in separate Docker containers If both n8n and MySQL are running in Docker in separate containers, you can use Docker networking to connect them. Configure MySQL to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). Add both the MySQL and n8n containers to the same [user-defined bridge network](https://docs.docker.com/engine/network/drivers/bridge/). When configuring [MySQL credentials](../../../credentials/mysql/), use the MySQL container's name as the host address instead of `localhost`. For example, if you call the MySQL container `my-mysql`, you would set the **Host** to `my-mysql`. ### If MySQL and n8n are running in the same Docker container If MySQL and n8n are running in the same Docker container, the `localhost` address doesn't need any special configuration. 
You can configure MySQL to listen on `localhost` and configure the **Host** in the [MySQL credentials in n8n](../../../credentials/mysql/) to use `localhost`. ## Decimal numbers returned as strings By default, the MySQL node returns [`DECIMAL` values](https://dev.mysql.com/doc/refman/8.4/en/fixed-point-types.html) as strings. This is done intentionally to avoid the loss of precision that can occur due to limitations in the way JavaScript represents numbers. You can learn more about the decision in the documentation for the [MySQL library](https://sidorares.github.io/node-mysql2/docs/api-and-configurations) that n8n uses. To output decimal values as numbers instead of strings, accepting the risk of precision loss, enable the **Output Decimals as Numbers** option. As an alternative, you can manually convert the string to a decimal using the `toFloat()` function with [`toFixed()`](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Number/toFixed) or with the [Edit Fields (Set) node](../../../core-nodes/n8n-nodes-base.set/) after the MySQL node. Be aware that you may still need to account for a potential loss of precision. # Notion node Use the Notion node to automate work in Notion, and integrate Notion with other applications. n8n has built-in support for a wide range of Notion features, including getting and searching databases, creating pages, and getting users. On this page, you'll find a list of operations the Notion node supports and links to more resources. Credentials Refer to [Notion credentials](../../credentials/notion/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent.
When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - Block - Append After - Get Child Blocks - Database - Get - Get Many - Search - Database Page - Create - Get - Get Many - Update - Page - Archive - Create - Search - User - Get - Get Many ## Templates and examples **Transcribe Audio Files, Summarize with GPT-4, and Store in Notion** by Pat [View template details](https://n8n.io/workflows/2178-transcribe-audio-files-summarize-with-gpt-4-and-store-in-notion/) **Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3** by Jimleuk [View template details](https://n8n.io/workflows/2878-host-your-own-ai-deep-research-agent-with-n8n-apify-and-openai-o3/) **Notion AI Assistant Generator** by Max Tkacz [View template details](https://n8n.io/workflows/2415-notion-ai-assistant-generator/) [Browse Notion integration templates](https://n8n.io/integrations/notion/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides an app node for Notion. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.notiontrigger/). Refer to [Notion's documentation](https://developers.notion.com/) for details about their API. ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. 
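When a Notion operation isn't covered by the node, the HTTP Request pattern above boils down to an authenticated call against `api.notion.com`. As a rough sketch of the request that pattern produces, here is the search endpoint (`POST /v1/search`) from Notion's public API; the token value is a hypothetical placeholder, since in n8n the predefined credential adds the `Authorization` header for you:

```javascript
// Build the request an HTTP Request node would send for POST /v1/search.
// The token argument is a placeholder, not a real credential.
function buildNotionSearchRequest(query, token) {
  return {
    method: "POST",
    url: "https://api.notion.com/v1/search",
    headers: {
      Authorization: `Bearer ${token}`,
      "Notion-Version": "2022-06-28", // Notion rejects requests without a version header
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ query }),
  };
}

const req = buildNotionSearchRequest("Project notes", "secret-token-placeholder");
console.log(req.method, req.url);
```

The same shape works for other Notion endpoints: swap the `url` and `body`, and keep the version and authorization headers.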
## Common issues For common errors or issues and suggested resolution steps, refer to [Common issues](common-issues/). # Notion node common issues Here are some common errors and issues with the [Notion node](../) and steps to resolve or troubleshoot them. ## Relation property not displaying The Notion node only supports displaying the data relation property for [two-way relations](https://www.notion.com/help/relations-and-rollups). When you connect two Notion databases with a two-way relationship, you can select or filter by the relation property when working with the Notion node's **Database Page** resource. To enable two-way relations, edit the relation property in Notion and enable the **Show on [name of related database]** option to create a reverse relation. Select a name to use for the relation in the new context. The relation is now accessible in n8n when filtering or selecting. If you need to work with Notion databases with a one-way relationship, you can use the [HTTP Request](../../../core-nodes/n8n-nodes-base.httprequest/) node with your existing Notion credentials. For example, to update a one-way relationship, you can send a `PATCH` request to the following URL: ``` https://api.notion.com/v1/pages/ ``` Enable **Send Body**, set the **Body Content Type** to **JSON**, and set **Specify Body** to **Using JSON**. Afterward, you can enter a JSON object like the following into the **JSON** field: ``` { "properties": { "Account": { "relation": [ { "id": "" } ] } } } ``` ## Create toggle heading The Notion node allows you to create headings and toggles when adding blocks to **Page**, **Database Page**, or **Block** resources. Creating toggleable headings isn't yet supported by the Notion node itself. You can work around this by creating a regular heading and then modifying it to enable the [`is_toggleable` property](https://developers.notion.com/reference/block#headings): 1. Add a heading with the Notion node. 1.
Select the resource you want to add a heading to: - To add a new page with a heading, select the **Page** or **Database Page** resources with the **Create** operation. - To add a heading to an existing page, select the **Block** resource with the **Append After** operation. 1. Select **Add Block** and set the **Type Name or ID** to either **Heading 1**, **Heading 2**, or **Heading 3**. 1. Add an [HTTP Request](../../../core-nodes/n8n-nodes-base.httprequest/) node connected to the Notion node and select the `GET` method. 1. Set the **URL** to `https://api.notion.com/v1/blocks/`. For example, if you added the heading to an existing page, you could use the following URL: `https://api.notion.com/v1/blocks/{{ $json.results[0].id }}`. If you created a new page instead of appending a block, you may need to discover the block ID by querying the page contents first. 1. Select **Predefined Credential Type** and connect your existing Notion credentials. 1. Add an [Edit Fields (Set)](../../../core-nodes/n8n-nodes-base.set/) node after the HTTP Request node. 1. Add `heading_1.is_toggleable` as a new **Boolean** field set to `true`. Swap `heading_1` for a different heading number as necessary. 1. Add a second HTTP Request node after the Edit Fields (Set) node. 1. Set the **Method** to `PATCH` and use `https://api.notion.com/v1/blocks/{{ $json.id }}` as the **URL** value. 1. Select **Predefined Credential Type** and connect your existing Notion credentials. 1. Enable **Send Body** and set a parameter. 1. Set the parameter **Name** to `heading_1` (substitute `heading_1` for the heading level you are using). 1. Set the parameter **Value** to `{{ $json.heading_1 }}` (substitute `heading_1` for the heading level you are using). The above sequence will create a regular heading block. It will query the newly created heading, add the `is_toggleable` property, and update the heading block.
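In code terms, the Edit Fields and final PATCH steps above amount to taking the fetched heading block, flipping `is_toggleable`, and sending back only the heading property. A minimal sketch, where the block shape follows Notion's block object reference and the `id` and text content are made-up sample values:

```javascript
// A fetched heading block, trimmed to the fields this workaround touches.
// The id and text content are hypothetical sample values.
const block = {
  id: "00000000-0000-0000-0000-000000000000",
  type: "heading_1",
  heading_1: {
    rich_text: [{ type: "text", text: { content: "Weekly report" } }],
    is_toggleable: false,
  },
};

// Edit Fields (Set) step: set heading_1.is_toggleable to true.
block.heading_1.is_toggleable = true;

// The PATCH body only needs the heading property being changed.
const patchBody = { heading_1: block.heading_1 };
console.log(JSON.stringify(patchBody));
```

Substitute `heading_2` or `heading_3` throughout if you used a different heading level.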
## Handle null and empty values You may receive a validation error when working with the Notion node if you submit fields with empty or null values. This can occur any time you populate fields from previous nodes when that data is missing. To work around this, check for the existence of the field data before sending it to Notion or use a default value. To check for the data before executing the Notion node, use an [If](../../../core-nodes/n8n-nodes-base.if/) node to check whether the field is unset. This allows you to use the [Edit Fields (Set)](../../../core-nodes/n8n-nodes-base.set/) node to conditionally remove the field when it doesn't have a valid value. As an alternative, you can set a default value if the incoming data doesn't provide one. # Oracle Database node Use the Oracle Database node to automate work in Oracle Database, and integrate Oracle Database with other applications. n8n has built-in support for a wide range of Oracle Database features, including executing SQL statements and fetching, inserting, updating, or deleting data from Oracle Database. This node uses the [node-oracledb driver](https://github.com/oracle/node-oracledb) internally. On this page, you'll find a list of operations the Oracle Database node supports and links to more resources. Note Refer to [Oracle Database credentials](../../credentials/oracledb/) for guidance on setting up authentication. Requires Oracle Database **19c or later**. For advanced Oracle Database features like Transparent Application Continuity (TAC) and Sharding, this node also requires Oracle Client Libraries **19c or later**.
## Operations - [**Delete**](#delete): Delete an entire table or rows in a table - [**Execute SQL**](#execute-sql): Execute an SQL statement - [**Insert**](#insert): Insert rows in a table - [**Insert or Update**](#insert-or-update): Insert or update rows in a table - [**Select**](#select): Select rows from a table - [**Update**](#update): Update rows in a table ### Delete Use this operation to delete an entire table or rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Oracle Database credential](../../credentials/oracledb/). - **Operation**: Select **Delete**. - **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list, or select **By Name** to enter the table name. - **Command**: The deletion action to take: - **Truncate**: Removes the table's data but preserves the table's structure. - **Delete**: Delete the rows that match the "Select Rows" condition. If you don't select anything, Oracle Database deletes all rows. - **Select Rows**: Define a **Column**, **Operator**, and **Value** to match rows on. The value can be passed as JSON using an expression, or as a string. - **Combine Conditions**: How to combine the conditions in "Select Rows". **AND** requires all conditions to be true, while **OR** requires at least one condition to be true. - **Drop**: Deletes the table's data and structure permanently. #### Delete options - **Auto Commit**: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution. - **Statement Batching**: The way to send statements to the database: - **Single Statement**: A single statement for all incoming items.
- **Independently**: Execute one statement per incoming item of the execution. - **Transaction**: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes. ### Execute SQL Use this operation to execute an SQL statement. Enter these parameters: - **Credential to connect with**: Create or select an existing [Oracle Database credential](../../credentials/oracledb/). - **Operation**: Select **Execute SQL**. - **Statement**: The SQL statement to execute. You can use n8n [expressions](../../../../data/expressions/) and positional parameters like `:1`, `:2`, or named parameters like `:name`, `:id`, as described in [Use bind parameters](#use-bind-parameters). To run a PL/SQL procedure, for example `demo`, you can use: ``` BEGIN demo; END; ``` #### Execute Statement options - **Auto Commit**: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution. - **Bind Variable Placeholder Values**: Enter the values for the bind parameters used in the statement (refer to [Use bind parameters](#use-bind-parameters)). - **Output Numbers As String**: Whether numbers should be retrieved as strings. - **Fetch Array Size**: This property is a number that sets the size of an internal buffer used for fetching query rows from Oracle Database. Changing it may affect query performance but does not affect how many rows are returned to the application. - **Number of Rows to Prefetch**: This property is a query tuning option to set the number of additional rows the underlying Oracle driver fetches during the internal initial statement execution phase of a query. ### Insert Use this operation to insert rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Oracle Database credential](../../credentials/oracledb/). - **Operation**: Select **Insert**. - **Schema**: Choose the schema that contains the table you want to work on.
Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list, or select **By Name** to enter the table name. - **Mapping Column Mode**: How to map column names to incoming data: - **Map Each Column Manually**: Select the values to use for each column (refer to [Use n8n expressions for bind values](#use-n8n-expressions-for-bind-values)). - **Map Automatically**: Automatically map incoming data to matching column names in Oracle Database. The incoming data field names must match the column names in Oracle Database for this to work. If necessary, consider using the [Edit Fields (Set)](../../core-nodes/n8n-nodes-base.set/) node before this node to adjust the format as needed. #### Insert options - **Auto Commit**: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution. - **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/). - **Statement Batching**: The way to send statements to the database: - **Single Statement**: A single statement for all incoming items. - **Independently**: Execute one statement per incoming item of the execution. - **Transaction**: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes. ### Insert or Update Use this operation to insert or update rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Oracle Database credential](../../credentials/oracledb/). - **Operation**: Select **Insert or Update**. - **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name.
- **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list, or select **By Name** to enter the table name.
- **Mapping Column Mode**: How to map column names to incoming data:
    - **Map Each Column Manually**: Select the values to use for each column, as described in [Use n8n expressions for bind values](#use-n8n-expressions-for-bind-values).
    - **Map Automatically**: Automatically map incoming data to matching column names in Oracle Database. The incoming data field names must match the column names in Oracle Database for this to work. If necessary, consider using the [edit fields (set) node](../../core-nodes/n8n-nodes-base.set/) before this node to adjust the format as needed.

#### Insert or Update options

- **Auto Commit**: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.
- **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/).
- **Statement Batching**: The way to send statements to the database:
    - **Single Statement**: A single statement for all incoming items.
    - **Independently**: Execute one statement per incoming item of the execution.
    - **Transaction**: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes.

### Select

Use this operation to select rows in a table. Enter these parameters:

- **Credential to connect with**: Create or select an existing [Oracle Database credential](../../credentials/oracledb/).
- **Operation**: Select **Select**.
- **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name.
- **Table**: Choose the table that you want to work on.
Select **From list** to choose the table from the dropdown list, or select **By Name** to enter the table name.
- **Return All**: Whether to return all results or only up to a given limit.
- **Limit**: The maximum number of items to return when **Return All** is disabled.
- **Select Rows**: Set the conditions to select rows. Define a **Column**, **Operator**, and **Value** (as `json`) to match rows on. The **Value** can vary by type. For example, with Fixed mode:
    - String: `"hello"`, `hello` (without quotes), `"hello with space"`
    - Number: `12`
    - JSON: `{ "key": "val" }`

    If you don't select anything, Oracle Database selects all rows.
- **Combine Conditions**: How to combine the conditions in **Select Rows**. **AND** requires all conditions to be true, while **OR** requires at least one condition to be true.
- **Sort**: Choose how to sort the selected rows. Choose a **Column** from a list or by ID and a sort **Direction**.

#### Select options

- **Auto Commit**: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.
- **Output Numbers As String**: Whether to retrieve numbers as strings.
- **Fetch Array Size**: A number that sets the size of an internal buffer used for fetching query rows from Oracle Database. Changing it may affect query performance but doesn't affect how many rows are returned to the application.
- **Number of Rows to Prefetch**: A query tuning option that sets the number of additional rows the underlying Oracle driver fetches during the internal initial statement execution phase of a query.

### Update

Use this operation to update rows in a table. Enter these parameters:

- **Credential to connect with**: Create or select an existing [Oracle Database credential](../../credentials/oracledb/).
- **Operation**: Select **Update**.
- **Schema**: Choose the schema that contains the table you want to work on.
Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name.
- **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list, or select **By Name** to enter the table name.
- **Mapping Column Mode**: How to map column names to incoming data:
    - **Map Each Column Manually**: Select the values to use for each column, as described in [Use n8n expressions for bind values](#use-n8n-expressions-for-bind-values).
    - **Map Automatically**: Automatically map incoming data to matching column names in Oracle Database. The incoming data field names must match the column names in Oracle Database for this to work. If necessary, consider using the [edit fields (set) node](../../core-nodes/n8n-nodes-base.set/) before this node to adjust the format as needed.

#### Update options

- **Auto Commit**: When this property is set to true, the transaction in the current connection is automatically committed at the end of statement execution.
- **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/).
- **Statement Batching**: The way to send statements to the database:
    - **Single Statement**: A single statement for all incoming items.
    - **Independently**: Execute one statement per incoming item of the execution.
    - **Transaction**: Execute all statements in a transaction. If a failure occurs, Oracle Database rolls back all changes.

## Related resources

Refer to the [SQL Language Reference](https://www.oracle.com/pls/topic/lookup?ctx=dblatest&id=SQLRF) for more information about the service.

Refer to the [node-oracledb documentation](https://node-oracledb.readthedocs.io/en/latest/) for more information about the node-oracledb driver.
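The **Statement Batching** modes described in the operations above differ mainly in how a failure on one item affects the others. The following rough sketch of the semantics is plain Python for illustration only, not n8n's actual implementation; `execute` stands in for a hypothetical function that sends statements to the database:

```python
def run_batch(items, execute, mode):
    """Sketch of the three Statement Batching modes."""
    if mode == "single":
        # Single Statement: one statement covering all incoming items.
        execute(items)
        return ["ok"] * len(items)
    if mode == "independently":
        # Independently: one statement per item; a failure on one item
        # doesn't affect the others.
        results = []
        for item in items:
            try:
                execute([item])
                results.append("ok")
            except Exception:
                results.append("failed")
        return results
    if mode == "transaction":
        # Transaction: all statements run in one transaction; any failure
        # means none of the changes are kept.
        try:
            for item in items:
                execute([item])
            return ["ok"] * len(items)
        except Exception:
            return ["rolled back"] * len(items)


def execute(batch):
    # Stand-in that fails when a batch contains the item "bad".
    if "bad" in batch:
        raise ValueError("statement failed")

print(run_batch(["a", "bad", "c"], execute, "independently"))
# ['ok', 'failed', 'ok']
print(run_batch(["a", "bad", "c"], execute, "transaction"))
# ['rolled back', 'rolled back', 'rolled back']
```

In short: choose **Independently** when one bad item shouldn't block the rest, and **Transaction** when partial writes are worse than no writes.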
## Use bind parameters

When creating a statement to run on an Oracle database instance, you can use the **Bind Variable Placeholder Values** field in the **Options** section to load data into the statement. n8n sanitizes data in statement parameters, which prevents SQL injection.

For example, suppose you want to find specific fruits by their color. Given the following input data:

```
[
  {
    "FRUIT_ID": 1,
    "FRUIT_NAME": "Apple",
    "COLOR": "Red"
  },
  {
    "FRUIT_ID": 2,
    "FRUIT_NAME": "Banana",
    "COLOR": "Yellow"
  }
]
```

You can write a statement like:

```
SELECT * FROM FRUITS WHERE COLOR = :col
```

Then in **Bind Variable Placeholder Values**, provide the field values to use. You can provide fixed values or expressions. For this example, use an expression so the node can pull the color from each input item in turn:

```
{{ $json.COLOR }}
```

## Use n8n Expressions for bind values

For **Values to Send**, you can provide inputs using n8n Expressions. Below are examples for different data types. You can either enter constant values or reference fields from previous items (`$json`):

### JSON

- Constant: `{{ { k1: "v1", k2: "v2" } }}`
- From a previous item: `{{ $json.COL_JSON }}`

### VECTOR

- Constant: `{{ [1, 2, 3, 4.5] }}`
- From a previous item: `{{ $json.COL_VECTOR }}`

### BLOB

- Constant: `{{ [94, 87, 34] }}` or `{{ ' BLOB data string' }}`
- From a previous item: `{{ $json.COL_BLOB }}`

### RAW

- Constant: `{{ [94, 87, 34] }}`
- From a previous item: `{{ $json.COL_RAW }}`

### BOOLEAN

- Constant: `{{ true }}`
- From a previous item: `{{ $json.COL_BOOLEAN }}`

### NUMBER

- Constant: `1234`
- From a previous item: `{{ $json.COL_NUMBER }}`

### VARCHAR

- Constant: `' Hello World '`
- From a previous item: `{{ $json.COL_CHAR }}`

These examples assume JSON keys (for example, `COL_JSON`, `COL_VECTOR`) map directly to the respective SQL column types.
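Bind parameters behave the same way across databases that support them: the value travels separately from the statement text, so it can never be interpreted as SQL. The following minimal sketch uses Python's standard `sqlite3` module rather than Oracle (sqlite3 happens to accept the same `:name` placeholder style) purely to illustrate the mechanism:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE fruits (fruit_id INTEGER, fruit_name TEXT, color TEXT)")
conn.executemany(
    "INSERT INTO fruits VALUES (:id, :name, :color)",
    [
        {"id": 1, "name": "Apple", "color": "Red"},
        {"id": 2, "name": "Banana", "color": "Yellow"},
    ],
)

# The bound value is passed separately from the statement text.
rows = conn.execute(
    "SELECT fruit_name FROM fruits WHERE color = :col", {"col": "Red"}
).fetchall()
print(rows)  # [('Apple',)]

# A malicious value stays a literal string instead of altering the query.
rows = conn.execute(
    "SELECT fruit_name FROM fruits WHERE color = :col",
    {"col": "Red' OR '1'='1"},
).fetchall()
print(rows)  # [] -- no fruit has that literal color
```

This is the same property n8n relies on when it sanitizes statement parameters: concatenating the malicious value into the SQL string would have matched every row, while binding it matches none.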
# Postgres node Use the Postgres node to automate work in Postgres, and integrate Postgres with other applications. n8n has built-in support for a wide range of Postgres features, including executing queries, as well as inserting and updating rows in a database. On this page, you'll find a list of operations the Postgres node supports and links to more resources. Credentials Refer to [Postgres credentials](../../credentials/postgres/) for guidance on setting up authentication. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). ## Operations - [**Delete**](#delete): Delete an entire table or rows in a table - [**Execute Query**](#execute-query): Execute an SQL query - [**Insert**](#insert): Insert rows in a table - [**Insert or Update**](#insert-or-update): Insert or update rows in a table - [**Select**](#select): Select rows from a table - [**Update**](#update): Update rows in a table ### Delete Use this operation to delete an entire table or rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Postgres credential](../../credentials/postgres/). - **Operation**: Select **Delete**. - **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list or **By Name** to enter the table name. - **Command**: The deletion action to take: - **Truncate**: Removes the table's data but preserves the table's structure. 
- **Restart Sequences**: Whether to reset auto-increment columns to their initial values as part of the Truncate process.
- **Delete**: Delete the rows that match the **Select Rows** condition. If you don't select anything, Postgres deletes all rows.
- **Select Rows**: Define a **Column**, **Operator**, and **Value** to match rows on.
- **Combine Conditions**: How to combine the conditions in **Select Rows**. **AND** requires all conditions to be true, while **OR** requires at least one condition to be true.
- **Drop**: Deletes the table's data and structure permanently.

#### Delete options

- **Cascade**: Whether to also drop all objects that depend on the table, like views and sequences. Available if using the **Truncate** or **Drop** commands.
- **Connection Timeout**: The number of seconds to try to connect to the database.
- **Delay Closing Idle Connection**: The number of seconds to wait before considering idle connections eligible for closing.
- **Query Batching**: The way to send queries to the database:
    - **Single Query**: A single query for all incoming items.
    - **Independently**: Execute one query per incoming item of the execution.
    - **Transaction**: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes.
- **Output Large-Format Numbers As**: The format to output `NUMERIC` and `BIGINT` columns as:
    - **Numbers**: Use this for standard numbers.
    - **Text**: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect.

### Execute Query

Use this operation to execute an SQL query. Enter these parameters:

- **Credential to connect with**: Create or select an existing [Postgres credential](../../credentials/postgres/).
- **Operation**: Select **Execute Query**.
- **Query**: The SQL query to execute.
You can use n8n [expressions](../../../../data/expressions/) and tokens like `$1`, `$2`, and `$3` to build [prepared statements](https://www.postgresql.org/docs/current/sql-prepare.html) to use with [query parameters](#use-query-parameters). #### Execute Query options - **Connection Timeout**: The number of seconds to try to connect to the database. - **Delay Closing Idle Connection**: The number of seconds to wait before considering idle connections eligible for closing. - **Query Batching**: The way to send queries to the database: - **Single Query**: A single query for all incoming items. - **Independently**: Execute one query per incoming item of the execution. - **Transaction**: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes. - **Query Parameters**: A comma-separated list of values that you want to use as [query parameters](#use-query-parameters). - **Output Large-Format Numbers As**: The format to output `NUMERIC` and `BIGINT` columns as: - **Numbers**: Use this for standard numbers. - **Text**: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect. - **Replace Empty Strings with NULL**: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software. ### Insert Use this operation to insert rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Postgres credential](../../credentials/postgres/). - **Operation**: Select **Insert**. - **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list or **By Name** to enter the table name. 
- **Mapping Column Mode**: How to map column names to incoming data: - **Map Each Column Manually**: Select the values to use for each column. - **Map Automatically**: Automatically map incoming data to matching column names in Postgres. The incoming data field names must match the column names in Postgres for this to work. If necessary, consider using the [edit fields (set) node](../../core-nodes/n8n-nodes-base.set/) before this node to adjust the format as needed. #### Insert options - **Connection Timeout**: The number of seconds to try to connect to the database. - **Delay Closing Idle Connection**: The number of seconds to wait before considering idle connections eligible for closing. - **Query Batching**: The way to send queries to the database: - **Single Query**: A single query for all incoming items. - **Independently**: Execute one query per incoming item of the execution. - **Transaction**: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes. - **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/). - **Output Large-Format Numbers As**: The format to output `NUMERIC` and `BIGINT` columns as: - **Numbers**: Use this for standard numbers. - **Text**: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect. - **Skip on Conflict**: Whether to skip the row if the insert violates a unique or exclusion constraint instead of throwing an error. - **Replace Empty Strings with NULL**: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software. ### Insert or Update Use this operation to insert or update rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Postgres credential](../../credentials/postgres/). - **Operation**: Select **Insert or Update**. 
- **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list or **By Name** to enter the table name. - **Mapping Column Mode**: How to map column names to incoming data: - **Map Each Column Manually**: Select the values to use for each column. - **Map Automatically**: Automatically map incoming data to matching column names in Postgres. The incoming data field names must match the column names in Postgres for this to work. If necessary, consider using the [edit fields (set) node](../../core-nodes/n8n-nodes-base.set/) before this node to adjust the format as needed. #### Insert or Update options - **Connection Timeout**: The number of seconds to try to connect to the database. - **Delay Closing Idle Connection**: The number of seconds to wait before considering idle connections eligible for closing. - **Query Batching**: The way to send queries to the database: - **Single Query**: A single query for all incoming items. - **Independently**: Execute one query per incoming item of the execution. - **Transaction**: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes. - **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/). - **Output Large-Format Numbers As**: The format to output `NUMERIC` and `BIGINT` columns as: - **Numbers**: Use this for standard numbers. - **Text**: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect. - **Replace Empty Strings with NULL**: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software. 
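Conceptually, insert-or-update ("upsert") behavior corresponds to SQL's `INSERT ... ON CONFLICT ... DO UPDATE` clause, which Postgres supports. The sketch below uses Python's standard `sqlite3` module (SQLite also accepts this clause) purely to illustrate the semantics; it is not the SQL the node actually generates:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (email TEXT PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO users VALUES ('alex@example.com', 'Alex')")

# Insert or update: insert a new row, or update the existing row when the
# matching key (here: email) already exists.
for row in [
    {"email": "alex@example.com", "name": "Alexandra"},  # exists -> update
    {"email": "jamie@example.com", "name": "Jamie"},     # new -> insert
]:
    conn.execute(
        """INSERT INTO users VALUES (:email, :name)
           ON CONFLICT(email) DO UPDATE SET name = excluded.name""",
        row,
    )

print(conn.execute("SELECT email, name FROM users ORDER BY email").fetchall())
# [('alex@example.com', 'Alexandra'), ('jamie@example.com', 'Jamie')]
```

This is why the column mapping must include the columns that identify a matching row: without a key to conflict on, every incoming item would simply insert a new row.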
### Select Use this operation to select rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Postgres credential](../../credentials/postgres/). - **Operation**: Select **Select**. - **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list or **By Name** to enter the table name. - **Return All**: Whether to return all results or only up to a given limit. - **Limit**: The maximum number of items to return when **Return All** is disabled. - **Select Rows**: Set the conditions to select rows. Define a **Column**, **Operator**, and **Value** to match rows on. If you don't select anything, Postgres selects all rows. - **Combine Conditions**: How to combine the conditions in **Select Rows**. **AND** requires all conditions to be true, while **OR** requires at least one condition to be true. - **Sort**: Choose how to sort the selected rows. Choose a **Column** from a list or by ID and a sort **Direction**. #### Select options - **Connection Timeout**: The number of seconds to try to connect to the database. - **Delay Closing Idle Connection**: The number of seconds to wait before considering idle connections eligible for closing. - **Query Batching**: The way to send queries to the database: - **Single Query**: A single query for all incoming items. - **Independently**: Execute one query per incoming item of the execution. - **Transaction**: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes. - **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/). 
- **Output Large-Format Numbers As**: The format to output `NUMERIC` and `BIGINT` columns as: - **Numbers**: Use this for standard numbers. - **Text**: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect. ### Update Use this operation to update rows in a table. Enter these parameters: - **Credential to connect with**: Create or select an existing [Postgres credential](../../credentials/postgres/). - **Operation**: Select **Update**. - **Schema**: Choose the schema that contains the table you want to work on. Select **From list** to choose the schema from the dropdown list or **By Name** to enter the schema name. - **Table**: Choose the table that you want to work on. Select **From list** to choose the table from the dropdown list or **By Name** to enter the table name. - **Mapping Column Mode**: How to map column names to incoming data: - **Map Each Column Manually**: Select the values to use for each column. - **Map Automatically**: Automatically map incoming data to matching column names in Postgres. The incoming data field names must match the column names in Postgres for this to work. If necessary, consider using the [edit fields (set) node](../../core-nodes/n8n-nodes-base.set/) before this node to adjust the format as needed. #### Update options - **Connection Timeout**: The number of seconds to try to connect to the database. - **Delay Closing Idle Connection**: The number of seconds to wait before considering idle connections eligible for closing. - **Query Batching**: The way to send queries to the database: - **Single Query**: A single query for all incoming items. - **Independently**: Execute one query per incoming item of the execution. - **Transaction**: Execute all queries in a transaction. If a failure occurs, Postgres rolls back all changes. - **Output Columns**: Choose which columns to output. You can select from a list of available columns or specify IDs using [expressions](../../../../data/expressions/). 
- **Output Large-Format Numbers As**: The format to output `NUMERIC` and `BIGINT` columns as: - **Numbers**: Use this for standard numbers. - **Text**: Use this if you expect numbers longer than 16 digits. Without this, numbers may be incorrect. - **Replace Empty Strings with NULL**: Whether to replace empty strings with NULL in input. This may be useful when working with data exported from spreadsheet software. ## Templates and examples **Chat with Postgresql Database** by KumoHQ [View template details](https://n8n.io/workflows/2859-chat-with-postgresql-database/) **Generate Instagram Content from Top Trends with AI Image Generation** by mustafa kendigüzel [View template details](https://n8n.io/workflows/2803-generate-instagram-content-from-top-trends-with-ai-image-generation/) **AI Customer Support Assistant · WhatsApp Ready · Works for Any Business** by Matt F. [View template details](https://n8n.io/workflows/3859-ai-customer-support-assistant-whatsapp-ready-works-for-any-business/) [Browse Postgres integration templates](https://n8n.io/integrations/postgres/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n provides a trigger node for Postgres. You can find the trigger node docs [here](../../trigger-nodes/n8n-nodes-base.postgrestrigger/). ## Use query parameters When creating a query to run on a Postgres database, you can use the **Query Parameters** field in the **Options** section to load data into the query. n8n sanitizes data in query parameters, which prevents SQL injection. For example, you want to find a person by their email address. Given the following input data: ``` [ { "email": "alex@example.com", "name": "Alex", "age": 21 }, { "email": "jamie@example.com", "name": "Jamie", "age": 33 } ] ``` You can write a query like: ``` SELECT * FROM $1:name WHERE email = $2; ``` Then in **Query Parameters**, provide the field values to use. You can provide fixed values or expressions. 
For this example, use expressions so the node can pull the email address from each input item in turn: ``` // users is an example table name {{ [ 'users', $json.email ] }} ``` ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). # Postgres node common issues Here are some common errors and issues with the [Postgres node](../) and steps to resolve or troubleshoot them. ## Dynamically populate SQL `IN` groups with parameters In Postgres, you can use the SQL [`IN` comparison construct](https://www.postgresql.org/docs/current/functions-comparisons.html#FUNCTIONS-COMPARISONS-IN-SCALAR) to make comparisons between groups of values: ``` SELECT color, shirt_size FROM shirts WHERE shirt_size IN ('small', 'medium', 'large'); ``` While you can use n8n [expressions](../../../../../data/expressions/) in your query to dynamically populate the values in an `IN` group, combining this with [query parameters](../#use-query-parameters) provides extra protection by automatically sanitizing input. To construct an `IN` group query with query parameters: 1. Set the **Operation** to **Execute Query**. 1. In **Options**, select **Query Parameters**. 1. Use an expression to select an array from the input data. For example, `{{ $json.input_shirt_sizes }}`. 1. In the **Query** parameter, write your query with the `IN` construct with an empty set of parentheses. For example: ``` SELECT color, shirt_size FROM shirts WHERE shirt_size IN (); ``` 1. Inside of the `IN` parentheses, use an expression to dynamically create index-based placeholders (like `$1`, `$2`, and `$3`) for the number of items in your query parameter array. 
You can do this by increasing each array index by one, since the placeholder variables are 1-indexed:

```
SELECT color, shirt_size FROM shirts
WHERE shirt_size IN ({{ $json.input_shirt_sizes.map((i, pos) => "$" + (pos+1)).join(', ') }});
```

With this technique, n8n automatically creates the correct number of [prepared statement placeholders](https://www.postgresql.org/docs/current/sql-prepare.html) for the `IN` values according to the number of items in your array.

## Working with timestamps and time zones

To avoid complications with how n8n and Postgres interpret timestamp and time zone data, follow these general tips:

- **Use UTC when storing and passing dates**: Using UTC helps avoid confusion over timezone conversions when converting dates between different representations and systems.
- **Set the execution timezone**: Set the global timezone in n8n using either [environment variables](../../../../../hosting/configuration/configuration-examples/time-zone/) (for self-hosted) or in the [settings](../../../../../manage-cloud/set-cloud-timezone/) (for n8n Cloud). You can set a workflow-specific timezone in the [workflow settings](../../../../../workflows/settings/).
- **Use ISO 8601 format**: The [ISO 8601 format](https://en.wikipedia.org/wiki/ISO_8601) encodes the day of the month, month, year, hour, minutes, and seconds in a standardized string. n8n passes dates between nodes as strings and uses [Luxon](../../../../../data/specific-data-types/luxon/) to parse dates. If you need to cast to ISO 8601 explicitly, you can use the [Date & Time node](../../../core-nodes/n8n-nodes-base.datetime/) and a custom format set to the string `yyyy-MM-dd'T'HH:mm:ss`.

## Outputting Date columns as date strings instead of ISO datetime strings

n8n uses the [`pg` package](https://www.npmjs.com/package/pg) to integrate with Postgres, which affects how n8n processes date, timestamp, and related types from Postgres.
The `pg` package parses `DATE` values into `new Date(row_value)` by default, which produces a date that follows the [ISO 8601 datetime string](https://en.wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations) format. For example, a date of `2025-12-25` might produce a datetime string of `2025-12-25T23:00:00.000Z`, depending on the instance's timezone settings.

To work around this, use the [Postgres `TO_CHAR` function](https://www.postgresql.org/docs/current/functions-formatting.html#FUNCTIONS-FORMATTING) to format the date into the expected format at query time:

```
SELECT TO_CHAR(date_col, 'YYYY-MM-DD') AS date_col_as_date
FROM table_with_date_col
```

This produces the date as a string without the time or timezone components. To continue the earlier example, with this formatting, a date of `2025-12-25` would produce the string `2025-12-25`.

You can find out more in the [`pg` package documentation on dates](https://node-postgres.com/features/types#date--timestamp--timestamptz).

# Supabase node

Use the Supabase node to automate work in Supabase, and integrate Supabase with other applications. n8n has built-in support for a wide range of Supabase features, including creating, deleting, and getting rows.

On this page, you'll find a list of operations the Supabase node supports and links to more resources.

Credentials

Refer to [Supabase credentials](../../credentials/supabase/) for guidance on setting up authentication.

This node can be used as an AI tool

This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/).

## Operations

- Row
    - Create a new row
    - Delete a row
    - Get a row
    - Get all rows
    - Update a row

## Using custom schemas

By default, the Supabase node only fetches the `public` schema.
To fetch [custom schemas](https://supabase.com/docs/guides/api/using-custom-schemas), enable **Use Custom Schema**. In the new **Schema** field, provide the custom schema the Supabase node should use. ## Templates and examples **AI Agent To Chat With Files In Supabase Storage** by Mark Shcherbakov [View template details](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) **Autonomous AI crawler** by Oskar [View template details](https://n8n.io/workflows/2315-autonomous-ai-crawler/) **Automate sales cold calling pipeline with Apify, GPT-4o, and WhatsApp** by Khairul Muhtadin [View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/) [Browse Supabase integration templates](https://n8n.io/integrations/supabase/), or [search all templates](https://n8n.io/workflows/) ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common issues](common-issues/). # Supabase node common issues Here are some common errors and issues with the [Supabase node](../) and steps to resolve or troubleshoot them. ## Filtering rows by metadata To filter rows by [Supabase metadata](https://supabase.com/docs/guides/ai/python/metadata), set the **Select Type** to **String**. 
From there, you can construct a query in the **Filters (String)** parameter to filter the metadata using the [Supabase metadata query language](https://supabase.com/docs/guides/ai/python/metadata#metadata-query-language), inspired by the [MongoDB selectors](https://www.mongodb.com/docs/manual/reference/operator/query/) format.

Access the metadata properties using the [Postgres `->>` arrow JSON operator](https://www.postgresql.org/docs/current/functions-json.html#FUNCTIONS-JSON-PROCESSING) like this (curly brackets denote components to fill in):

```
metadata->>{your-property}={comparison-operator}.{comparison-value}
```

For example, to access an `age` property in the metadata and return results greater than or equal to 21, you could enter the following in the **Filters (String)** field:

```
metadata->>age=gte.21
```

You can combine these operators to construct more complex queries.

## Can't connect to a local Supabase database when using Docker

When you run Supabase in Docker, you need to configure the network so that n8n can connect to Supabase. The solution depends on how you're hosting the two components.

### If only Supabase is in Docker

If only Supabase is running in Docker, the Docker Compose file used by the [self-hosting guide](https://supabase.com/docs/guides/self-hosting/docker) already runs Supabase bound to the correct interfaces. When configuring [Supabase credentials](../../../credentials/supabase/), the `localhost` address should work without a problem (set the **Host** to `localhost`).

### If Supabase and n8n are running in separate Docker containers

If both n8n and Supabase are running in Docker in separate containers, you can use Docker networking to connect them. Configure Supabase to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official [Docker compose configuration](https://supabase.com/docs/guides/self-hosting/docker) already does this).
Add both the Supabase and n8n components to the same [user-defined bridge network](https://docs.docker.com/engine/network/drivers/bridge/) if you aren't already managing them together in the same Docker Compose file. When configuring [Supabase credentials](../../../credentials/supabase/), use the Supabase API gateway container's name (`supabase-kong` by default) as the host address instead of `localhost`. For example, if you use the default configuration, you would set the **Host** to `http://supabase-kong:8000`. ## Records are accessible through Postgres but not Supabase If queries for records return empty using the Supabase node, but are available through the [Postgres](../../n8n-nodes-base.postgres/) node or with a Postgres client, there may be a conflict with Supabase's [Row Level Security (RLS)](https://supabase.com/docs/guides/database/postgres/row-level-security) policy. Supabase always enables RLS when you create a table in a public schema with the Table Editor. When RLS is active, the API doesn't return any data with the public `anon` key until you create policies. This is a security measure to ensure that you only expose data you intend to. To access data from a table with RLS enabled as the `anon` role, [create a policy](https://supabase.com/docs/guides/database/postgres/row-level-security#creating-policies) to enable the access patterns you intend to use. # Telegram node Use the Telegram node to automate work in [Telegram](https://telegram.org/) and integrate Telegram with other applications. n8n has built-in support for a wide range of Telegram features, including getting files as well as deleting and editing messages. On this page, you'll find a list of operations the Telegram node supports and links to more resources. Credentials Refer to [Telegram credentials](../../credentials/telegram/) for guidance on setting up authentication. 
## Operations - [**Chat** operations](chat-operations/) - [**Get**](chat-operations/#get-chat) up-to-date information about a chat. - [**Get Administrators**](chat-operations/#get-administrators): Get a list of all administrators in a chat. - [**Get Member**](chat-operations/#get-chat-member): Get the details of a chat member. - [**Leave**](chat-operations/#leave-chat) a chat. - [**Set Description**](chat-operations/#set-description) of a chat. - [**Set Title**](chat-operations/#set-title) of a chat. - [**Callback** operations](callback-operations/) - [**Answer Query**](callback-operations/#answer-query): Send answers to callback queries sent from [inline keyboards](https://core.telegram.org/bots/features#inline-keyboards). - [**Answer Inline Query**](callback-operations/#answer-inline-query): Send answers to inline queries. - [**File** operations](file-operations/) - [**Get File**](file-operations/#get-file) from Telegram. - [**Message** operations](message-operations/) - [**Delete Chat Message**](message-operations/#delete-chat-message). - [**Edit Message Text**](message-operations/#edit-message-text): Edit the text of an existing message. - [**Pin Chat Message**](message-operations/#pin-chat-message) for the chat. - [**Send Animation**](message-operations/#send-animation) to the chat. - For use with GIFs or H.264/MPEG-4 AVC videos without sound up to 50 MB in size. - [**Send Audio**](message-operations/#send-audio) file to the chat and display it in the music player. - [**Send Chat Action**](message-operations/#send-chat-action): Tell the user that something is happening on the bot's side. The status is set for 5 seconds or less. - [**Send Document**](message-operations/#send-document) to the chat. - [**Send Location**](message-operations/#send-location): Send a geolocation to the chat. - [**Send Media Group**](message-operations/#send-media-group): Send a group of photos and/or videos. 
- [**Send Message**](message-operations/#send-message) to the chat. - [**Send Photo**](message-operations/#send-photo) to the chat. - [**Send Sticker**](message-operations/#send-sticker) to the chat. - For use with static .WEBP, animated .TGS, or video .WEBM stickers. - [**Send Video**](message-operations/#send-video) to the chat. - [**Unpin Chat Message**](message-operations/#unpin-chat-message) from the chat. Add bot to channel To use most of the **Message** operations, you must add your bot to a channel so that it can send messages to that channel. Refer to [Common Issues | Add a bot to a Telegram channel](common-issues/#add-a-bot-to-a-telegram-channel) for more information. ## Templates and examples [Browse Telegram integration templates](https://n8n.io/integrations/telegram/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Telegram's API documentation](https://core.telegram.org/bots/api) for more information about the service. n8n provides a trigger node for Telegram. Refer to the [Telegram Trigger node documentation](../../trigger-nodes/n8n-nodes-base.telegramtrigger/) for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). # Telegram node Callback operations Use these operations to respond to callback queries sent from inline keyboards or inline queries. Refer to [Telegram](../) for more information on the Telegram node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). 
## Answer Query Use this operation to send answers to callback queries sent from [inline keyboards](https://core.telegram.org/bots/features#inline-keyboards) using the Bot API [answerCallbackQuery](https://core.telegram.org/bots/api#answercallbackquery) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Callback**. - **Operation**: Select **Answer Query**. - **Query ID**: Enter the unique identifier of the query you want to answer. - To feed a Query ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node triggered on the **Callback Query**. - **Results**: Enter a JSON-serialized array of results you want to use as answers to the query. Refer to the Telegram [InlineQueryResults](https://core.telegram.org/bots/api#inlinequeryresult) documentation for more information on formatting your array. Refer to the Telegram Bot API [answerCallbackQuery](https://core.telegram.org/bots/api#answercallbackquery) documentation for more information. ### Answer Query additional fields Use the **Additional Fields** to further refine the behavior of the node. Select **Add Field** to add any of the following: - **Cache Time**: Enter the maximum amount of time in seconds that the client may cache the result of the callback query. Telegram defaults to `0` seconds for this method. - **Show Alert**: Telegram can display the answer as a notification at the top of the chat screen or as an alert. Choose whether you want to keep the default notification display (turned off) or display the answer as an alert (turned on). - **Text**: If you want the answer to show text, enter up to 200 characters of text here. - **URL**: Enter a URL that will be opened by the user's client. 
Refer to the **url** parameter instructions at the Telegram Bot API [answerCallbackQuery](https://core.telegram.org/bots/api#answercallbackquery) documentation for more information. ## Answer Inline Query Use this operation to send answers to inline queries using the Bot API [answerInlineQuery](https://core.telegram.org/bots/api#answerinlinequery) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Callback**. - **Operation**: Select **Answer Inline Query**. - **Query ID**: Enter the unique identifier of the query you want to answer. - To feed a Query ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node triggered on the **Inline Query**. - **Results**: Enter a JSON-serialized array of results you want to use as answers to the query. Refer to the Telegram [InlineQueryResults](https://core.telegram.org/bots/api#inlinequeryresult) documentation for more information on formatting your array. Telegram allows a maximum of 50 results per query. Refer to the Telegram Bot API [answerInlineQuery](https://core.telegram.org/bots/api#answerinlinequery) documentation for more information. ### Answer Inline Query additional fields Use the **Additional Fields** to further refine the behavior of the node. Select **Add Field** to add any of the following: - **Cache Time**: The maximum amount of time in seconds that the client may cache the result of the inline query. Telegram defaults to `300` seconds for this method. - **Show Alert**: Telegram can display the answer as a notification at the top of the chat screen or as an alert. Choose whether you want to keep the default notification display (turned off) or display the answer as an alert (turned on). - **Text**: If you want the answer to show text, enter up to 200 characters of text here. 
- **URL**: Enter a URL that the user's client will open. # Telegram node Chat operations Use these operations to get information about chats, members, and administrators; leave a chat; and set chat titles and descriptions. Refer to [Telegram](../) for more information on the Telegram node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). ## Get Chat Use this operation to get up-to-date information about a chat using the Bot API [getChat](https://core.telegram.org/bots/api#getchat) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Chat**. - **Operation**: Select **Get**. - **Chat ID**: Enter the Chat ID or username of the target channel in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. Refer to the Telegram Bot API [getChat](https://core.telegram.org/bots/api#getchat) documentation for more information. ## Get Administrators Use this operation to get a list of all administrators in a chat using the Bot API [getChatAdministrators](https://core.telegram.org/bots/api#getchatadministrators) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Chat**. - **Operation**: Select **Get Administrators**. - **Chat ID**: Enter the Chat ID or username of the target channel in the format `@channelusername`. 
- To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. Refer to the Telegram Bot API [getChatAdministrators](https://core.telegram.org/bots/api#getchatadministrators) documentation for more information. ## Get Chat Member Use this operation to get the details of a chat member using the Bot API [getChatMember](https://core.telegram.org/bots/api#getchatmember) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Chat**. - **Operation**: Select **Get Member**. - **Chat ID**: Enter the Chat ID or username of the target channel in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **User ID**: Enter the unique identifier of the user whose information you want to get. Refer to the Telegram Bot API [getChatMember](https://core.telegram.org/bots/api#getchatmember) documentation for more information. ## Leave Chat Use this operation to leave a chat using the Bot API [leaveChat](https://core.telegram.org/bots/api#leavechat) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Chat**. - **Operation**: Select **Leave**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to leave in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. 
Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. Refer to the Telegram Bot API [leaveChat](https://core.telegram.org/bots/api#leavechat) documentation for more information. ## Set Description Use this operation to set the description of a chat using the Bot API [setChatDescription](https://core.telegram.org/bots/api#setchatdescription) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Chat**. - **Operation**: Select **Set Description**. - **Chat ID**: Enter the Chat ID or username of the channel whose description you want to set, in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Description**: Enter the new description you'd like to set the chat to use, maximum of 255 characters. Refer to the Telegram Bot API [setChatDescription](https://core.telegram.org/bots/api#setchatdescription) documentation for more information. ## Set Title Use this operation to set the title of a chat using the Bot API [setChatTitle](https://core.telegram.org/bots/api#setchattitle) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Chat**. - **Operation**: Select **Set Title**. - **Chat ID**: Enter the Chat ID or username of the channel whose title you want to set, in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. 
- **Title**: Enter the new title you'd like to set the chat to use, maximum of 128 characters. Refer to the Telegram Bot API [setChatTitle](https://core.telegram.org/bots/api#setchattitle) documentation for more information. # Telegram node common issues Here are some common errors and issues with the [Telegram node](../) and steps to resolve or troubleshoot them. ## Add a bot to a Telegram channel For a bot to send a message to a channel, you must add the bot to the channel. If you haven't added the bot to the channel, you'll see an error with a description like: `Error: Forbidden: bot is not a participant of the channel`. To add a bot to a channel: 1. In the Telegram app, access the target channel and select the channel name. 1. Make sure the channel is set as a **public channel**. 1. Select **Administrators** > **Add Admin**. 1. Search for the bot's username and select it. 1. Select the checkmark on the top-right corner to add the bot to the channel. ## Get the Chat ID You can only use `@channelusername` on public channels. To interact with a Telegram group, you need that group's Chat ID. There are three ways to get that ID: 1. From the Telegram Trigger: Use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node in your workflow to get a Chat ID. This node can trigger on different events and returns a Chat ID on successful execution. 1. From your web browser: Open Telegram in a web browser and open the group chat. The group's Chat ID is the series of digits after the letter "g" in the URL. Prefix your group Chat ID with a `-` when you enter it in n8n. 1. Invite Telegram's [@RawDataBot](https://t.me/RawDataBot) to the group: Once you add it, the bot outputs a JSON file that includes a `chat` object. The `id` for that object is the group Chat ID. Then remove the RawDataBot from your group. 
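To illustrate the third option above: the JSON that @RawDataBot (or the Telegram Trigger node) returns contains a `chat` object whose `id` is the group Chat ID. A minimal Python sketch of pulling it out of an update payload; the helper function and sample data below are hypothetical, not part of n8n:

```python
def get_chat_id(update: dict) -> int:
    """Return the chat ID from the `message.chat` object of a Telegram update."""
    return update["message"]["chat"]["id"]

# Illustrative payload only, not real data. Group chat IDs are negative.
sample_update = {
    "update_id": 123456789,
    "message": {
        "message_id": 42,
        "chat": {"id": -1001234567890, "type": "supergroup", "title": "Example group"},
        "text": "hello",
    },
}

print(get_chat_id(sample_update))
```

Note the negative sign: it's part of the ID and must be included when you enter the value in n8n.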
## Send more than 30 messages per second The Telegram API [limits](https://core.telegram.org/bots/faq#broadcasting-to-users) bots to sending 30 messages per second. Follow these steps to work around the limit: 1. **Loop Over Items node**: Use the [Loop Over Items](../../../core-nodes/n8n-nodes-base.splitinbatches/) node to get at most 30 chat IDs from your database. 1. **Telegram node**: Connect the Telegram node with the Loop Over Items node. Use the **Expression Editor** to select the Chat IDs from the Loop Over Items node. 1. **Code node**: Connect the [Code](../../../core-nodes/n8n-nodes-base.code/) node with the Telegram node. Use the Code node to wait for a few seconds before fetching the next batch of chat IDs. Connect this node with the Loop Over Items node. You can also use this [workflow](https://n8n.io/workflows/772). ## Remove the n8n attribution from sent messages If you're using the node to [send Telegram messages](../message-operations/#send-message), the message automatically gets an n8n attribution appended to the end: > This message was sent automatically with n8n To remove this attribution: 1. In the node's **Additional Fields** section, select **Add Field**. 1. Select **Append n8n attribution**. 1. Turn the toggle off. Refer to [Send Message additional fields](../message-operations/#send-message-additional-fields) for more information. # Telegram node File operations Use this operation to get a file from Telegram. Refer to [Telegram](../) for more information on the Telegram node itself. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). 
## Get File Use this operation to get a file from Telegram using the Bot API [getFile](https://core.telegram.org/bots/api#getfile) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **File**. - **Operation**: Select **Get**. - **File ID**: Enter the ID of the file you want to get. - **Download**: Choose whether you want the node to download the file (turned on) or not (turned off). Refer to the Telegram Bot API [getFile](https://core.telegram.org/bots/api#getfile) documentation for more information. # Telegram node Message operations Use these operations to send, edit, and delete messages in a chat; send files to a chat; and pin and unpin messages in a chat. Refer to [Telegram](../) for more information on the Telegram node itself. Add bot to channel To use most of these operations, you must add your bot to a channel so that it can send messages to that channel. Refer to [Common Issues | Add a bot to a Telegram channel](../common-issues/#add-a-bot-to-a-telegram-channel) for more information. This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../../advanced-ai/examples/using-the-fromai-function/). Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../../advanced-ai/human-in-the-loop-tools/). ## Delete Chat Message Use this operation to delete a message from a chat using the Bot API [deleteMessage](https://core.telegram.org/bots/api#deletemessage) method. 
Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Delete Chat Message**. - **Chat ID**: Enter the Chat ID or username of the channel containing the message you want to delete, in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Message ID**: Enter the unique identifier of the message you want to delete. Refer to the Telegram Bot API [deleteMessage](https://core.telegram.org/bots/api#deletemessage) documentation for more information. ## Edit Message Text Use this operation to edit the text of an existing message using the Bot API [editMessageText](https://core.telegram.org/bots/api#editmessagetext) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Edit Message Text**. - **Chat ID**: Enter the Chat ID or username of the channel containing the message you want to edit, in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Message ID**: Enter the unique identifier of the message you want to edit. - **Reply Markup**: Select **Inline Keyboard** to display an inline keyboard with the message, or **None** to send the message without one. This sets the `reply_markup` parameter. Refer to the [InlineKeyboardMarkup](https://core.telegram.org/bots/api#inlinekeyboardmarkup) documentation for more information. - **Text**: Enter the text you want to edit the message to. 
Refer to the Telegram Bot API [editMessageText](https://core.telegram.org/bots/api#editmessagetext) documentation for more information. ### Edit Message Text additional fields Use the **Additional Fields** to further refine the behavior of the node. Select **Add Field** to add any of the following: - **Disable WebPage Preview**: Choose whether to show link previews for links in this message (turned off) or hide them (turned on). This sets the `is_disabled` field of the `link_preview_options` parameter. Refer to the [LinkPreviewOptions](https://core.telegram.org/bots/api#linkpreviewoptions) documentation for more information. - **Parse Mode**: Choose whether the message should be parsed using **HTML** (default), **Markdown (Legacy)**, or **MarkdownV2**. This sets the `parse_mode` parameter. ## Pin Chat Message Use this operation to pin a message for the chat using the Bot API [pinChatMessage](https://core.telegram.org/bots/api#pinchatmessage) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Pin Chat Message**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to pin the message to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Message ID**: Enter the unique identifier of the message you want to pin. Refer to the Telegram Bot API [pinChatMessage](https://core.telegram.org/bots/api#pinchatmessage) documentation for more information. ### Pin Chat Message additional fields Use the **Additional Fields** to further refine the behavior of the node. 
Select **Add Field** to add any of the following: - **Disable Notifications**: By default, Telegram will notify all chat members that the message has been pinned. If you don't want these notifications to go out, turn this control on. Sets the `disable_notification` parameter to `true`. ## Send Animation Use this operation to send GIFs or H.264/MPEG-4 AVC videos without sound up to 50 MB in size to the chat using the Bot API [sendAnimation](https://core.telegram.org/bots/api#sendanimation) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Animation**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the animation to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Binary File**: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the **Input Binary Field** containing the file you want to send. - **Animation**: If you aren't using the **Binary File**, enter the animation to send here. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to the Telegram Bot API [sendAnimation](https://core.telegram.org/bots/api#sendanimation) documentation for more information. 
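If the node's options aren't enough, the same operation maps directly onto Telegram's `sendAnimation` Bot API endpoint, which you could call yourself (for example from the HTTP Request node). A minimal Python sketch that only builds the request without sending it; the helper name, token, and values are placeholders, not part of n8n:

```python
def build_send_animation_request(token: str, chat_id: str, animation: str):
    """Build the URL and JSON payload for Telegram's sendAnimation Bot API method.

    `animation` can be a `file_id` already on Telegram's servers (recommended)
    or an HTTP URL for Telegram to fetch. The token here is a placeholder.
    """
    url = f"https://api.telegram.org/bot{token}/sendAnimation"
    payload = {"chat_id": chat_id, "animation": animation}
    return url, payload

url, payload = build_send_animation_request(
    "123456:PLACEHOLDER", "@channelusername", "https://example.com/clip.mp4"
)
print(url)
```

Optional fields such as `caption` or `disable_notification` would be added as extra keys in the same payload.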
### Send Animation additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendAnimation method. Select **Add Field** to add any of the following: - **Caption**: Enter a caption text for the animation, max of 1024 characters. - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Duration**: Enter the animation's duration in seconds. - **Height**: Enter the height of the animation. - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to Telegram's [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. - **Thumbnail**: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs: - JPEG format - Less than 200 KB in size - Width and height less than 320px. - **Width**: Enter the width of the animation. ## Send Audio Use this operation to send an audio file to the chat and display it in the music player using the Bot API [sendAudio](https://core.telegram.org/bots/api#sendaudio) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Audio**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the audio to in the format `@channelusername`. 
- To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Binary File**: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the **Input Binary Field** containing the file you want to send. - **Audio**: If you aren't using the **Binary File**, enter the audio to send here. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to the Telegram Bot API [sendAudio](https://core.telegram.org/bots/api#sendaudio) documentation for more information. ### Send Audio additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendAudio method. Select **Add Field** to add any of the following: - **Caption**: Enter a caption text for the audio, max of 1024 characters. - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Duration**: Enter the audio's duration in seconds. - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to Telegram's [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. - **Performer**: Enter the name of the performer. - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. 
- **Title**: Enter the audio track's name. - **Thumbnail**: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs: - JPEG format - Less than 200 KB in size - Width and height less than 320px. ## Send Chat Action Use this operation to tell the user that something is happening on the bot's side using the Bot API [sendChatAction](https://core.telegram.org/bots/api#sendchataction) method. Telegram displays the status for 5 seconds or less. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Chat Action**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the chat action to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Action**: Select the action you'd like to broadcast as the bot's status. The options here include: **Find Location**, **Typing**, **Recording** audio or video, and **Uploading** file types. Refer to Telegram's Bot API [sendChatAction](https://core.telegram.org/bots/api#sendchataction) documentation for more information. ## Send Document Use this operation to send a document to the chat using the Bot API [sendDocument](https://core.telegram.org/bots/api#senddocument) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Document**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the document to in the format `@channelusername`. 
- To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Binary File**: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the **Input Binary Field** containing the file you want to send. - **Document**: If you aren't using the **Binary File**, enter the document to send here. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to Telegram's Bot API [sendDocument](https://core.telegram.org/bots/api#senddocument) documentation for more information. ### Send Document additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendDocument method. Select **Add Field** to add any of the following: - **Caption**: Enter a caption text for the file, max of 1024 characters. - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. - **Thumbnail**: Add the thumbnail of the file sent. 
Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs: - JPEG format - Less than 200 KB in size - Width and height less than 320px. ## Send Location Use this operation to send a geolocation to the chat using the Bot API [sendLocation](https://core.telegram.org/bots/api#sendlocation) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Location**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the location to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Latitude**: Enter the latitude of the location. - **Longitude**: Enter the longitude of the location. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to Telegram's Bot API [sendLocation](https://core.telegram.org/bots/api#sendlocation) documentation for more information. ### Send Location additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendLocation method. Select **Add Field** to add any of the following: - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. 
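If you prefer to call the Bot API directly (for example from a Code node or the HTTP Request node), the Send Location parameters above map one-to-one onto the JSON body of Telegram's sendLocation method. Below is a minimal Python sketch of that payload shape; the chat ID and coordinates are placeholder values, and the helper function name is illustrative, not part of n8n or the Telegram SDK:

```python
import json

def build_send_location_payload(chat_id, latitude, longitude,
                                disable_notification=False,
                                reply_to_message_id=None):
    """Assemble the JSON body for the Telegram Bot API sendLocation method."""
    payload = {
        "chat_id": chat_id,      # numeric Chat ID or "@channelusername"
        "latitude": latitude,
        "longitude": longitude,
    }
    # Optional fields are included only when set, mirroring the node's Additional Fields.
    if disable_notification:
        payload["disable_notification"] = True
    if reply_to_message_id is not None:
        payload["reply_to_message_id"] = reply_to_message_id
    return payload

payload = build_send_location_payload("@examplechannel", 52.52, 13.405,
                                      disable_notification=True)
print(json.dumps(payload))
```

This body would be POSTed to `https://api.telegram.org/bot<token>/sendLocation`; the sketch only shows the payload, not the HTTP call, which the node handles for you.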
## Send Media Group Use this operation to send a group of photos and/or videos using the Bot API [sendMediaGroup](https://core.telegram.org/bots/api#sendmediagroup) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Media Group**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the media group to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Media**: Use **Add Media** to add different media types to your media group. For each medium, select: - **Type**: The type of media this is. Choose from **Photo** and **Video**. - **Media File**: Enter the media file to send. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Additional Fields**: For each media file, you can choose to add these fields: - **Caption**: Enter a caption text for the file, max of 1024 characters. - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. Refer to Telegram's Bot API [sendMediaGroup](https://core.telegram.org/bots/api#sendmediagroup) documentation for more information. ### Send Media Group additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendMediaGroup method. 
Select **Add Field** to add any of the following: - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. ## Send Message Use this operation to send a message to the chat using the Bot API [sendMessage](https://core.telegram.org/bots/api#sendmessage) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Message**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the message to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Text**: Enter the text to send, max 4096 characters after entities parsing. Refer to Telegram's Bot API [sendMessage](https://core.telegram.org/bots/api#sendmessage) documentation for more information. Send Message limits Telegram limits the number of messages you can send to 30 per second. If you expect to hit this limit, refer to [Send more than 30 messages per second](../common-issues/#send-more-than-30-messages-per-second) for a suggested workaround. ### Send Message additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendMessage method. 
Select **Add Field** to add any of the following: - **Append n8n Attribution**: Choose whether to include the phrase `This message was sent automatically with n8n` at the end of the message (turned on, default) or not (turned off). - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Disable WebPage Preview**: Choose whether to disable link previews for links in this message (turned on) or show them (turned off). This sets the `is_disabled` field of the `link_preview_options` parameter. Refer to the [LinkPreviewOptions](https://core.telegram.org/bots/api#linkpreviewoptions) documentation for more information. - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to Telegram's [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. ## Send and Wait for Response Use this operation to send a message to the chat using the Bot API [`sendMessage`](https://core.telegram.org/bots/api#sendmessage) method and pause the workflow execution until the user confirms the operation. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send and Wait for Response**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the message to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. 
Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Message**: Enter the text to send. - **Response Type**: The approval or response type to use: - **Approval**: Users can approve or disapprove from within the message. - **Free Text**: Users can submit a response with a form. - **Custom Form**: Users can submit a response with a custom form. Refer to Telegram's Bot API [`sendMessage`](https://core.telegram.org/bots/api#sendmessage) documentation for more information. Send Message limits Telegram limits the number of messages you can send to 30 per second. If you expect to hit this limit, refer to [Send more than 30 messages per second](../common-issues/#send-more-than-30-messages-per-second) for a suggested workaround. ### Send and Wait for Response additional fields The additional fields depend on which **Response Type** you choose. #### Approval The **Approval** response type adds these options: - **Type of Approval**: Whether to present only an approval button or both approval and disapproval buttons. - **Button Label**: The label for the approval or disapproval button. The default labels are `✅ Approve` and `❌ Decline` for the approval and disapproval actions respectively. - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. #### Free Text When using the Free Text response type, the following options are available: - **Message Button Label**: The label to use for the message button. The default choice is `Respond`. - **Response Form Title**: The title of the form where users provide their response. - **Response Form Description**: A description for the form where users provide their response. - **Response Form Button Label**: The label for the button on the form to submit their response. The default choice is `Submit`. 
- **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. #### Custom Form When using the Custom Form response type, you build a form using the fields and options you want. You can customize each form element with the settings outlined in the [n8n Form trigger's form elements](../../../core-nodes/n8n-nodes-base.formtrigger/#form-elements). To add more fields, select the **Add Form Element** button. The following options are also available: - **Message Button Label**: The label to use for the message button. The default choice is `Respond`. - **Response Form Title**: The title of the form where users provide their response. - **Response Form Description**: A description for the form where users provide their response. - **Response Form Button Label**: The label for the button on the form to submit their response. The default choice is `Submit`. - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. ## Send Photo Use this operation to send a photo to the chat using the Bot API [sendPhoto](https://core.telegram.org/bots/api#sendphoto) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Photo**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the photo to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Binary File**: To send a binary file from the node itself, turn this option on. 
If you turn this parameter on, you must enter the **Input Binary Field** containing the file you want to send. - **Photo**: If you aren't using the **Binary File**, enter the photo to send here. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to Telegram's Bot API [sendPhoto](https://core.telegram.org/bots/api#sendphoto) documentation for more information. ### Send Photo additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendPhoto method. Select **Add Field** to add any of the following: - **Caption**: Enter a caption text for the file, max of 1024 characters. - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to Telegram's [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. ## Send Sticker Use this operation to send static .WEBP, animated .TGS, or video .WEBM stickers using the Bot API [sendSticker](https://core.telegram.org/bots/api#sendsticker) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Send Sticker**. 
- **Chat ID**: Enter the Chat ID or username of the channel you wish to send the sticker to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Binary File**: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the **Input Binary Field** containing the file you want to send. - **Sticker**: If you aren't using the **Binary File**, enter the sticker to send here. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to Telegram's Bot API [sendSticker](https://core.telegram.org/bots/api#sendsticker) documentation for more information. ### Send Sticker additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendSticker method. Select **Add Field** to add any of the following: - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. ## Send Video Use this operation to send a video to the chat using the Bot API [sendVideo](https://core.telegram.org/bots/api#sendvideo) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). 
- **Resource**: Select **Message**. - **Operation**: Select **Send Video**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to send the video to in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Binary File**: To send a binary file from the node itself, turn this option on. If you turn this parameter on, you must enter the **Input Binary Field** containing the file you want to send. - **Video**: If you aren't using the **Binary File**, enter the video to send here. Pass a `file_id` to send a file that exists on the Telegram servers (recommended) or an HTTP URL for Telegram to get a file from the internet. - **Reply Markup**: Use this parameter to set more interface options. Refer to [Reply Markup parameters](#reply-markup-parameters) for more information on these options and how to use them. Refer to Telegram's Bot API [sendVideo](https://core.telegram.org/bots/api#sendvideo) documentation for more information. ### Send Video additional fields Use the **Additional Fields** to further refine the behavior of the node using optional fields in Telegram's sendVideo method. Select **Add Field** to add any of the following: - **Caption**: Enter a caption text for the video, max of 1024 characters. - **Disable Notification**: Choose whether to send the notification silently (turned on) or with a standard notification (turned off). - **Duration**: Enter the video's duration in seconds. - **Height**: Enter the height of the video. - **Parse Mode**: Enter the parser to use for any related text. Options include **HTML** (default), **Markdown (Legacy)**, **MarkdownV2**. Refer to Telegram's [Formatting options](https://core.telegram.org/bots/api#formatting-options) for more information on these options. 
- **Reply To Message ID**: If the message is a reply, enter the ID of the message it's replying to. - **Message Thread ID**: Enter a unique identifier for the target message thread (topic) of the forum; for forum supergroups only. - **Thumbnail**: Add the thumbnail of the file sent. Ignore this field if thumbnail generation for the file is supported server-side. The thumbnail should meet these specs: - JPEG format - Less than 200 KB in size - Width and height less than 320px. - **Width**: Enter the width of the video. ## Unpin Chat Message Use this operation to unpin a message from the chat using the Bot API [unpinChatMessage](https://core.telegram.org/bots/api#unpinchatmessage) method. Enter these parameters: - **Credential to connect with**: Create or select an existing [Telegram credential](../../../credentials/telegram/). - **Resource**: Select **Message**. - **Operation**: Select **Unpin Chat Message**. - **Chat ID**: Enter the Chat ID or username of the channel you wish to unpin the message from in the format `@channelusername`. - To feed a Chat ID directly into this node, use the [Telegram Trigger](../../../trigger-nodes/n8n-nodes-base.telegramtrigger/) node. Refer to [Common Issues | Get the Chat ID](../common-issues/#get-the-chat-id) for more information. - **Message ID**: Enter the unique identifier of the message you want to unpin. Refer to the Telegram Bot API [unpinChatMessage](https://core.telegram.org/bots/api#unpinchatmessage) documentation for more information. ## Reply Markup parameters For most of the **Message** **Send** actions (such as Send Animation, Send Audio), use the **Reply Markup** parameter to set more interface options: - **Force Reply**: The Telegram client will act as if the user has selected the bot's message and tapped **Reply**, automatically displaying a reply interface to the user. Refer to [Force Reply parameters](#force-reply-parameters) for further guidance on this option. 
- **Inline Keyboard**: Display an inline keyboard right next to the message. Refer to [Inline Keyboard parameters](#inline-keyboard-parameters) for further guidance on this option. - **Reply Keyboard**: Display a custom keyboard with reply options. Refer to [Reply Keyboard parameters](#reply-keyboard-parameters) for further guidance on this option. - **Reply Keyboard Remove**: The Telegram client will remove the current custom keyboard and display the default letter-keyboard. Refer to [Reply Keyboard Remove parameters](#reply-keyboard-remove-parameters) for further guidance on this option. Telegram Business accounts Telegram restricts the following options in channels and for messages sent on behalf of a Telegram Business account: - Force Reply - Reply Keyboard - Reply Keyboard Remove ### Force Reply parameters **Force Reply** is useful if you want to create user-friendly step-by-step interfaces without having to sacrifice privacy mode. If you select **Reply Markup > Force Reply**, choose from these **Force Reply** parameters: - **Force Reply**: Turn on to show the reply interface to the user, as described above. - **Selective**: Turn this on if you want to force reply from these users only: - Users that are `@mentioned` in the text of the message. - The sender of the original message, if this message is a reply to a message. Refer to [ForceReply](https://core.telegram.org/bots/api#forcereply) for more information. ### Inline Keyboard parameters If you select **Reply Markup > Inline Keyboard**, define the inline keyboard buttons you want to display using the **Add Button** option. To add more rows to your keyboard, use **Add Keyboard Row**. Refer to [InlineKeyboardMarkup](https://core.telegram.org/bots/api#inlinekeyboardmarkup) and [InlineKeyboardButton](https://core.telegram.org/bots/api#inlinekeyboardbutton) for more information. 
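The button rows you define with **Add Button** and **Add Keyboard Row** correspond to the nested-array InlineKeyboardMarkup structure Telegram expects. A minimal Python sketch of that structure, with placeholder button labels, callback data, and URL:

```python
import json

# Each inner list is one keyboard row; each dict is one InlineKeyboardButton.
reply_markup = {
    "inline_keyboard": [
        [  # row 1: two buttons side by side
            {"text": "Approve", "callback_data": "approve"},
            {"text": "Decline", "callback_data": "decline"},
        ],
        [  # row 2: a single button that opens a URL
            {"text": "Open docs", "url": "https://docs.n8n.io/"},
        ],
    ]
}

# The Bot API expects reply_markup as a JSON-serialized string.
print(json.dumps(reply_markup))
```

When a user taps a `callback_data` button, Telegram sends your bot a callback query, which you can pick up with the Telegram Trigger node.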
### Reply Keyboard parameters If you select **Reply Markup > Reply Keyboard**, use the **Reply Keyboard** section to define the buttons and rows in your Reply Keyboard. Use the **Reply Keyboard Options** to further refine the keyboard's behavior: - **Resize Keyboard**: Choose whether to request the Telegram client to resize the keyboard vertically for optimal fit (turned on) or whether to use the same height as the app's standard keyboard (turned off). - **One Time Keyboard**: Choose whether the Telegram client should hide the keyboard as soon as a user uses it (turned on) or to keep displaying it (turned off). - **Selective**: Turn this on if you want to show the keyboard to these users only: - Users that are `@mentioned` in the text of the message. - The sender of the original message, if this message is a reply to a message. Refer to [ReplyKeyboardMarkup](https://core.telegram.org/bots/api#replykeyboardmarkup) for more information. ### Reply Keyboard Remove parameters If you select **Reply Markup > Reply Keyboard Remove**, choose from these **Reply Keyboard Remove** parameters: - **Remove Keyboard**: Choose whether to request the Telegram client to remove the custom keyboard (turned on) or to keep it (turned off). - **Selective**: Turn this on if you want to remove the keyboard for these users only: - Users that are `@mentioned` in the text of the message. - The sender of the original message, if this message is a reply to a message. Refer to [ReplyKeyboardRemove](https://core.telegram.org/bots/api#replykeyboardremove) for more information. # WhatsApp Business Cloud node Use the WhatsApp Business Cloud node to automate work in WhatsApp Business, and integrate WhatsApp Business with other applications. n8n has built-in support for a wide range of WhatsApp Business features, including sending messages, and uploading, downloading, and deleting media. 
On this page, you'll find a list of operations the WhatsApp Business Cloud node supports and links to more resources. Credentials Refer to [WhatsApp Business Cloud credentials](../../credentials/whatsapp/) for guidance on setting up authentication. Human-in-the-loop for AI tool calls This node can be used as a human review step for AI Agent tool calls. When configured this way, the AI Agent will pause and request human approval through this service before executing tools that require oversight. Learn more in [Human-in-the-loop for AI tool calls](../../../../advanced-ai/human-in-the-loop-tools/). ## Operations - Message - Send - Send and Wait for Response - Send Template - Media - Upload - Download - Delete ## Waiting for a response By choosing the **Send and Wait for Response** operation, you can send a message and pause the workflow execution until a person confirms the action or provides more information. ### Response Type You can choose between the following types of waiting and approval actions: - **Approval**: Users can approve or disapprove from within the message. - **Free Text**: Users can submit a response with a form. - **Custom Form**: Users can submit a response with a custom form. You can customize the waiting and response behavior depending on which response type you choose. You can configure these options in any of the above response types: - **Limit Wait Time**: Whether the workflow will automatically resume execution after a specified time limit. This can be an interval or a specific wall time. - **Append n8n Attribution**: Whether to mention in the message that it was sent automatically with n8n (turned on) or not (turned off). ### Approval response customization When using the Approval response type, you can choose whether to present only an approval button or both approval *and* disapproval buttons. You can also customize the button labels for the buttons you include. 
### Free Text response customization When using the Free Text response type, you can customize the message button label, the form title and description, and the response button label. ### Custom Form response customization When using the Custom Form response type, you build a form using the fields and options you want. You can customize each form element with the settings outlined in the [n8n Form trigger's form elements](../../core-nodes/n8n-nodes-base.formtrigger/#form-elements). To add more fields, select the **Add Form Element** button. You'll also be able to customize the message button label, the form title and description, and the response button label. ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Respond to WhatsApp Messages with AI Like a Pro!** by Jimleuk [View template details](https://n8n.io/workflows/2466-respond-to-whatsapp-messages-with-ai-like-a-pro/) **AI-Powered WhatsApp Chatbot 🤖📲 for Text, Voice, Images & PDFs with memory 🧠** by Davide Boizza [View template details](https://n8n.io/workflows/3586-ai-powered-whatsapp-chatbot-for-text-voice-images-and-pdfs-with-memory/) [Browse WhatsApp Business Cloud integration templates](https://n8n.io/integrations/whatsapp-business-cloud/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [WhatsApp Business Platform's Cloud API documentation](https://developers.facebook.com/docs/whatsapp/cloud-api) for details about the operations. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. 
In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. # WhatsApp Business Cloud node common issues Here are some common errors and issues with the [WhatsApp Business Cloud node](../) and steps to resolve or troubleshoot them. ## Bad request - please check your parameters This error occurs when WhatsApp Business Cloud rejects your request because of a problem with its parameters. It's common to see this when using the **Send Template** operation if the data you send doesn't match the format of your template. To resolve this issue, review the parameters in your [message template](https://www.facebook.com/business/help/2055875911147364?id=2129163877102343). Pay attention to each parameter's data type and the order they're defined in the template. Check the data that n8n is mapping to the template parameters. If you're using expressions to set parameter values, check the input data to make sure each item resolves to a valid value. You may want to use the [Edit Fields (Set) node](../../../core-nodes/n8n-nodes-base.set/) or set a fallback value to ensure you send a value with the correct format. ## Working with non-text media The WhatsApp Business Cloud node can work with non-text messages and media like images, audio, documents, and more. If your operation includes an **Input Data Field Name** or **Property Name** parameter, set this to the field name itself rather than referencing the data in an expression. For example, if you are trying to send a message with an "Image" **MessageType** and **Take Image From** set to "n8n", set **Input Data Field Name** to a field name like `data` instead of an expression like `{{ $json.input.data }}`. # OpenAI node Use the OpenAI node to automate work in OpenAI and integrate OpenAI with other applications. 
n8n has built-in support for a wide range of OpenAI features, including creating images and assistants, as well as chatting with models. On this page, you'll find a list of operations the OpenAI node supports and links to more resources. Previous node versions The OpenAI node replaces the OpenAI assistant node from version 1.29.0 on. n8n version 1.117.0 introduces V2 of the OpenAI node that supports the OpenAI Responses API and removes support for the [to-be-deprecated Assistants API](https://platform.openai.com/docs/assistants/migration). Credentials Refer to [OpenAI credentials](../../credentials/openai/) for guidance on setting up authentication. ## Operations - **Text** - [**Generate a Chat Completion**](text-operations/#generate-a-chat-completion) - [**Generate a Model Response**](text-operations/#generate-a-model-response) - [**Classify Text for Violations**](text-operations/#classify-text-for-violations) - **Image** - [**Analyze Image**](image-operations/#analyze-image) - [**Generate an Image**](image-operations/#generate-an-image) - [**Edit an Image**](image-operations/#edit-an-image) - **Audio** - [**Generate Audio**](audio-operations/#generate-audio) - [**Transcribe a Recording**](audio-operations/#transcribe-a-recording) - [**Translate a Recording**](audio-operations/#translate-a-recording) - **File** - [**Delete a File**](file-operations/#delete-a-file) - [**List Files**](file-operations/#list-files) - [**Upload a File**](file-operations/#upload-a-file) - **Video** - [**Generate a Video**](video-operations/#generate-video) - **Conversation** - [**Create a Conversation**](conversation-operations/#create-a-conversation) - [**Get a Conversation**](conversation-operations/#get-a-conversation) - [**Update a Conversation**](conversation-operations/#update-a-conversation) - [**Remove a Conversation**](conversation-operations/#remove-a-conversation) ## Templates and examples **AI agent chat** by n8n Team [View template 
details](https://n8n.io/workflows/1954-ai-agent-chat/) **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Angie, personal AI assistant with Telegram voice and text** by Derek Cheung [View template details](https://n8n.io/workflows/2462-angie-personal-ai-assistant-with-telegram-voice-and-text/) [Browse OpenAI integration templates](https://n8n.io/integrations/openai/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [OpenAI's documentation](https://beta.openai.com/docs/introduction) for more information about the service. Refer to [OpenAI's assistants documentation](https://platform.openai.com/docs/assistants/how-it-works/objects) for more information about how assistants work. For help dealing with rate limits, refer to [Handling rate limits](../../rate-limits/). ## What to do if your operation isn't supported If this node doesn't support the operation you want to do, you can use the [HTTP Request node](../../core-nodes/n8n-nodes-base.httprequest/) to call the service's API. You can use the credential you created for this service in the HTTP Request node: 1. In the HTTP Request node, select **Authentication** > **Predefined Credential Type**. 1. Select the service you want to connect to. 1. Select your credential. Refer to [Custom API operations](../../../custom-operations/) for more information. ## Using tools with OpenAI assistants Some operations allow you to connect tools. [Tools](../../../../advanced-ai/examples/understand-tools/) act like add-ons that your AI can use to access extra context or resources. Select the **Tools** connector to browse the available tools and add them. Once you add a tool connection, the OpenAI node becomes a [root node](../../../../glossary/#root-node-n8n), allowing it to form a [cluster node](../../../../glossary/#cluster-node-n8n) with the tool [sub-nodes](../../../../glossary/#sub-node-n8n).
See [Node types](../../node-types/#cluster-nodes) for more information on cluster nodes and root nodes. ### Operations that support tool connectors - **Text** - [**Generate a Chat Completion**](text-operations/#generate-a-chat-completion) - [**Generate a Model Response**](text-operations/#generate-a-model-response) ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). # OpenAI Assistant operations Use this operation to create, delete, list, message, or update an assistant in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. Assistant operations deprecated in OpenAI node V2 n8n version 1.117.0 introduces V2 of the OpenAI node that supports the OpenAI Responses API and removes support for the [to-be-deprecated Assistants API](https://platform.openai.com/docs/assistants/migration). ## Create an Assistant Use this operation to create a new assistant. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Assistant**. - **Operation**: Select **Create an Assistant**. - **Model**: Select the model that the assistant will use. If you’re not sure which model to use, try `gpt-4o` if you need high intelligence or `gpt-4o-mini` if you need the fastest speed and lowest cost. Refer to [Models overview | OpenAI Platform](https://platform.openai.com/docs/models) for more information. - **Name**: Enter the name of the assistant. The maximum length is 256 characters. - **Description**: Enter the description of the assistant. The maximum length is 512 characters. ``` A virtual assistant that helps users with daily tasks, including setting reminders, answering general questions, and providing quick information. ``` - **Instructions**: Enter the system instructions that the assistant uses. The maximum length is 32,768 characters. 
Use this to specify the persona used by the model in its replies. ``` Always respond in a friendly and engaging manner. When a user asks a question, provide a concise answer first, followed by a brief explanation or additional context if necessary. If the question is open-ended, offer a suggestion or ask a clarifying question to guide the conversation. Keep the tone positive and supportive, and avoid technical jargon unless specifically requested by the user. ``` - **Code Interpreter**: Turn on to enable the code interpreter for the assistant, where it can write and execute code in a sandbox environment. Enable this tool for tasks that require computations, data analysis, or any logic-based processing. - **Knowledge Retrieval**: Turn on to enable knowledge retrieval for the assistant, allowing it to access external sources or a connected knowledge base. Refer to [File Search | OpenAI Platform](https://platform.openai.com/docs/assistants/tools/file-search) for more information. - **Files**: Select a file to upload for your external knowledge source. Use the **Upload a File** operation to add more files. ### Options - **Output Randomness (Temperature)**: Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or **Output Randomness (Top P)** but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they’re too chaotic or off-track, decrease it. Defaults to `1.0`. - **Output Randomness (Top P)**: Adjust the Top P setting to control the diversity of the assistant's responses. For example, `0.5` means half of all likelihood-weighted options are considered. We recommend altering this or **Output Randomness (Temperature)** but not both. Defaults to `1.0`.
- **Fail if Assistant Already Exists**: If enabled, the operation will fail if an assistant with the same name already exists. Refer to [Create assistant | OpenAI](https://platform.openai.com/docs/api-reference/assistants/createAssistant) documentation for more information. ## Delete an Assistant Use this operation to delete an existing assistant from your account. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Assistant**. - **Operation**: Select **Delete an Assistant**. - **Assistant**: Select the assistant you want to delete **From list** or **By ID**. Refer to [Delete assistant | OpenAI](https://platform.openai.com/docs/api-reference/assistants/deleteAssistant) documentation for more information. ## List Assistants Use this operation to retrieve a list of assistants in your organization. - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Assistant**. - **Operation**: Select **List Assistants**. ### Options - **Simplify Output**: Turn on to return a simplified version of the response instead of the raw data. This option is enabled by default. Refer to [List assistants | OpenAI](https://platform.openai.com/docs/api-reference/assistants/listAssistants) documentation for more information. ## Message an Assistant Use this operation to send a message to an assistant and receive a response. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Assistant**. - **Operation**: Select **Message an Assistant**. - **Assistant**: Select the assistant you want to message. - **Prompt**: Enter the text prompt or message that you want to send to the assistant. - **Connected Chat Trigger Node**: Automatically use the input from a previous node's `chatInput` field. 
- **Define Below**: Manually define the prompt by entering static text or using an expression to reference data from previous nodes. ### Options - **Base URL**: Enter the base URL that the assistant should use for making API requests. This option is useful for directing the assistant to use endpoints provided by other LLM providers that offer an OpenAI-compatible API. - **Max Retries**: Specify the number of times the assistant should retry an operation in case of failure. - **Timeout**: Set the maximum amount of time, in milliseconds, that the assistant should wait for a response before timing out. Use this option to prevent long waits during operations. - **Preserve Original Tools**: Turn off to remove the original tools associated with the assistant. Use this if you want to temporarily remove tools for this specific operation. Refer to [Assistants | OpenAI](https://platform.openai.com/docs/api-reference/assistants) documentation for more information. ## Update an Assistant Use this operation to update the details of an existing assistant. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Assistant**. - **Operation**: Select **Update an Assistant**. - **Assistant**: Select the assistant you want to update. ### Options - **Code Interpreter**: Turn on to enable the code interpreter for the assistant, where it can write and execute code in a sandbox environment. Enable this tool for tasks that require computations, data analysis, or any logic-based processing. - **Description**: Enter the description of the assistant. The maximum length is 512 characters. ``` A virtual assistant that helps users with daily tasks, including setting reminders, answering general questions, and providing quick information. ``` - **Instructions**: Enter the system instructions that the assistant uses. The maximum length is 32,768 characters.
Use this to specify the persona used by the model in its replies. ``` Always respond in a friendly and engaging manner. When a user asks a question, provide a concise answer first, followed by a brief explanation or additional context if necessary. If the question is open-ended, offer a suggestion or ask a clarifying question to guide the conversation. Keep the tone positive and supportive, and avoid technical jargon unless specifically requested by the user. ``` - **Knowledge Retrieval**: Turn on to enable knowledge retrieval for the assistant, allowing it to access external sources or a connected knowledge base. Refer to [File Search | OpenAI Platform](https://platform.openai.com/docs/assistants/tools/file-search) for more information. - **Files**: Select a file to upload for your external knowledge source. Use the [**Upload a File**](../file-operations/#upload-a-file) operation to add more files. Note that this only updates the [Code Interpreter](https://platform.openai.com/docs/assistants/tools/code-interpreter) tool, not the [File Search](https://platform.openai.com/docs/assistants/tools/file-search) tool. - **Model**: Select the model that the assistant will use. If you’re not sure which model to use, try `gpt-4o` if you need high intelligence or `gpt-4o-mini` if you need the fastest speed and lowest cost. Refer to [Models overview | OpenAI Platform](https://platform.openai.com/docs/models) for more information. - **Name**: Enter the name of the assistant. The maximum length is 256 characters. - **Remove All Custom Tools (Functions)**: Turn on to remove all custom tools (functions) from the assistant. - **Output Randomness (Temperature)**: Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or **Output Randomness (Top P)** but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe.
If the responses are too repetitive or rigid, increase the temperature. If they’re too chaotic or off-track, decrease it. Defaults to `1.0`. - **Output Randomness (Top P)**: Adjust the Top P setting to control the diversity of the assistant's responses. For example, `0.5` means half of all likelihood-weighted options are considered. We recommend altering this or **Output Randomness (Temperature)** but not both. Defaults to `1.0`. Refer to [Modify assistant | OpenAI](https://platform.openai.com/docs/api-reference/assistants/modifyAssistant) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # OpenAI Audio operations Use this operation to generate audio, or transcribe or translate a recording in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. ## Generate Audio Use this operation to create audio from a text prompt. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Audio**. - **Operation**: Select **Generate Audio**. - **Model**: Select the model you want to use to generate the audio. Refer to [TTS | OpenAI](https://platform.openai.com/docs/models/tts) for more information. - **TTS-1**: Use this to optimize for speed. - **TTS-1-HD**: Use this to optimize for quality. - **Text Input**: Enter the text to generate the audio for. The maximum length is 4096 characters. - **Voice**: Select a voice to use when generating the audio. Listen to the previews of the voices in [Text to speech guide | OpenAI](https://platform.openai.com/docs/guides/text-to-speech/quickstart). ### Options - **Response Format**: Select the format for the audio response. Choose from **MP3** (default), **OPUS**, **AAC**, **FLAC**, **WAV**, and **PCM**. - **Audio Speed**: Enter the speed for the generated audio, from `0.25` to `4.0`.
Defaults to `1`. - **Put Output in Field**: Defaults to `data`. Enter the name of the output field to put the binary file data in. Refer to [Create speech | OpenAI](https://platform.openai.com/docs/api-reference/audio/createSpeech) documentation for more information. ## Transcribe a Recording Use this operation to transcribe audio into text. OpenAI API limits the size of the audio file to 25 MB. OpenAI will use the `whisper-1` model by default. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Audio**. - **Operation**: Select **Transcribe a Recording**. - **Input Data Field Name**: Defaults to `data`. Enter the name of the binary property that contains the audio file in one of these formats: `.flac`, `.mp3`, `.mp4`, `.mpeg`, `.mpga`, `.m4a`, `.ogg`, `.wav`, or `.webm`. ### Options - **Language of the Audio File**: Enter the language of the input audio in [ISO-639-1](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes). Use this option to improve accuracy and latency. - **Output Randomness (Temperature)**: Defaults to `1.0`. Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or **Output Randomness (Top P)** but not both. Start with a medium temperature (around 0.7) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they’re too chaotic or off-track, decrease it. Refer to [Create transcription | OpenAI](https://platform.openai.com/docs/api-reference/audio/createTranscription) documentation for more information. ## Translate a Recording Use this operation to translate audio into English. OpenAI API limits the size of the audio file to 25 MB. OpenAI will use the `whisper-1` model by default. 
Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Audio**. - **Operation**: Select **Translate a Recording**. - **Input Data Field Name**: Defaults to `data`. Enter the name of the binary property that contains the audio file in one of these formats: `.flac`, `.mp3`, `.mp4`, `.mpeg`, `.mpga`, `.m4a`, `.ogg`, `.wav`, or `.webm`. ### Options - **Output Randomness (Temperature)**: Defaults to `1.0`. Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or **Output Randomness (Top P)** but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they’re too chaotic or off-track, decrease it. Refer to [Create transcription | OpenAI](https://platform.openai.com/docs/api-reference/audio/createTranscription) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # OpenAI node common issues Here are some common errors and issues with the [OpenAI node](../) and steps to resolve or troubleshoot them. ## The service is receiving too many requests from you This error displays when you've exceeded [OpenAI's rate limits](https://platform.openai.com/docs/guides/rate-limits). There are two ways to work around this issue: 1. Split your data up into smaller chunks using the [Loop Over Items](../../../core-nodes/n8n-nodes-base.splitinbatches/) node and add a [Wait](../../../core-nodes/n8n-nodes-base.wait/) node at the end for an amount of time that keeps you under the rate limit. Copy the code below and paste it into a workflow to use as a template.
``` { "nodes": [ { "parameters": {}, "id": "35d05920-ad75-402a-be3c-3277bff7cc67", "name": "When clicking ‘Execute workflow’", "type": "n8n-nodes-base.manualTrigger", "typeVersion": 1, "position": [ 880, 400 ] }, { "parameters": { "batchSize": 500, "options": {} }, "id": "ae9baa80-4cf9-4848-8953-22e1b7187bf6", "name": "Loop Over Items", "type": "n8n-nodes-base.splitInBatches", "typeVersion": 3, "position": [ 1120, 420 ] }, { "parameters": { "resource": "chat", "options": {}, "requestOptions": {} }, "id": "a519f271-82dc-4f60-8cfd-533dec580acc", "name": "OpenAI", "type": "n8n-nodes-base.openAi", "typeVersion": 1, "position": [ 1380, 440 ] }, { "parameters": { "unit": "minutes" }, "id": "562d9da3-2142-49bc-9b8f-71b0af42b449", "name": "Wait", "type": "n8n-nodes-base.wait", "typeVersion": 1, "position": [ 1620, 440 ], "webhookId": "714ab157-96d1-448f-b7f5-677882b92b13" } ], "connections": { "When clicking ‘Execute workflow’": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] }, "Loop Over Items": { "main": [ null, [ { "node": "OpenAI", "type": "main", "index": 0 } ] ] }, "OpenAI": { "main": [ [ { "node": "Wait", "type": "main", "index": 0 } ] ] }, "Wait": { "main": [ [ { "node": "Loop Over Items", "type": "main", "index": 0 } ] ] } }, "pinData": {} } ``` 1. Use the [HTTP Request](../../../core-nodes/n8n-nodes-base.httprequest/) node with the built-in batch-limit option against the [OpenAI API](https://platform.openai.com/docs/quickstart) instead of using the OpenAI node. ## Insufficient quota Quota issues There are a number of OpenAI issues surrounding quotas, including failures when quotas have been recently topped up. To avoid these issues, ensure that there is credit in the account and issue a new API key from the [API keys screen](https://platform.openai.com/settings/organization/api-keys). This error displays when your OpenAI account doesn't have enough credits or capacity to fulfill your request. 
This may mean that your OpenAI trial period has ended, that your account needs more credit, or that you've gone over a usage limit. To troubleshoot this error, on your [OpenAI settings](https://platform.openai.com/settings/organization/billing/overview) page: - Select the correct organization for your API key in the first selector in the upper-left corner. - Select the correct project for your API key in the second selector in the upper-left corner. - Check the organization-level [billing overview](https://platform.openai.com/settings/organization/billing/overview) page to ensure that the organization has enough credit. Double-check that you select the correct organization for this page. - Check the organization-level [usage limits](https://platform.openai.com/settings/organization/limits) page. Double-check that you select the correct organization for this page and scroll to the **Usage limits** section to verify that you haven't exceeded your organization's usage limits. - Check your OpenAI project's usage limits. Double-check that you select the correct project in the second selector in the upper-left corner. Select **Project** > **Limits** to view or change the project limits. - Check that the [OpenAI API](https://status.openai.com/) is operating as expected. Balance waiting period After topping up your balance, there may be a delay before your OpenAI account reflects the new balance. In n8n: - check that the [OpenAI credentials](../../../credentials/openai/) use a valid [OpenAI API key](https://platform.openai.com/api-keys) for the account you've added money to - ensure that you connect the [OpenAI node](../) to the correct [OpenAI credentials](../../../credentials/openai/) If you find yourself frequently running out of account credits, consider turning on auto recharge in your [OpenAI billing settings](https://platform.openai.com/settings/organization/billing/overview) to automatically reload your account with credits when your balance reaches $0. 
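If you handle these errors programmatically (for example, in a Code node inspecting a failed response), a small helper can separate retryable rate-limit errors from billing problems. This is an illustrative sketch, not part of the node: the function name is hypothetical, and it assumes the `{ error: { type, code, message } }` body shape that OpenAI error responses generally use.

```javascript
// Hypothetical helper: classify an OpenAI API error body so a workflow can
// branch between "wait and retry" and "stop and check billing". Assumes the
// { error: { type, code } } shape OpenAI error responses generally use.
function classifyOpenAiError(body) {
  const code = body?.error?.code ?? body?.error?.type ?? "";
  if (code === "rate_limit_exceeded") {
    return { retryable: true, hint: "Slow down: add a Wait node or smaller batches." };
  }
  if (code === "insufficient_quota") {
    return { retryable: false, hint: "Check billing, credits, and project limits." };
  }
  return { retryable: false, hint: "Inspect the raw error message." };
}

// Example payload in the shape OpenAI typically returns:
const sample = {
  error: {
    message: "You exceeded your current quota, please check your plan and billing details.",
    type: "insufficient_quota",
    code: "insufficient_quota",
  },
};
console.log(classifyOpenAiError(sample).retryable); // false
```

A rate-limit error would instead come back retryable, which is where the Loop Over Items and Wait pattern above fits.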
## Bad request - please check your parameters This error displays when the request results in an error but n8n wasn't able to interpret the error message from OpenAI. To begin troubleshooting, try running the same operation using the [HTTP Request](../../../core-nodes/n8n-nodes-base.httprequest/) node, which should provide a more detailed error message. ## Referenced node is unexecuted This error displays when a previous node in the workflow hasn't executed and isn't providing output that this node needs as input. The full text of this error will tell you the exact node that isn't executing in this format: ``` An expression references the node '<node-name>', but it hasn’t been executed yet. Either change the expression, or re-wire your workflow to make sure that node executes first. ``` To begin troubleshooting, test the workflow up to the named node. For nodes that call JavaScript or other custom code, determine if a node has executed before trying to use the value by calling: ``` $("<node-name>").isExecuted ``` # OpenAI Conversation operations Use this operation to create, get, update, or remove a conversation in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. ## Create a Conversation Use this operation to create a new conversation. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Conversation**. - **Operation**: Select **Create a Conversation**. - **Messages**: A message input to the model. Messages with the `system` role take precedence over instructions given with the `user` role. Messages with the `assistant` role will be assumed to have been generated by the model in previous interactions. ### Options - **Metadata**: A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be used for searching via the API or in the dashboard.
Refer to [Conversations | OpenAI](https://platform.openai.com/docs/api-reference/conversations/create) documentation for more information. ## Get a Conversation Use this operation to retrieve an existing conversation. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Conversation**. - **Operation**: Select **Get a Conversation**. - **Conversation ID**: The ID of the conversation to retrieve. Refer to [Conversations | OpenAI](https://platform.openai.com/docs/api-reference/conversations/create) documentation for more information. ## Remove a Conversation Use this operation to remove an existing conversation. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Conversation**. - **Operation**: Select **Remove a Conversation**. - **Conversation ID**: The ID of the conversation to remove. Refer to [Conversations | OpenAI](https://platform.openai.com/docs/api-reference/conversations/create) documentation for more information. ## Update a Conversation Use this operation to update an existing conversation. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Conversation**. - **Operation**: Select **Update a Conversation**. - **Conversation ID**: The ID of the conversation to update. ### Options - **Metadata**: A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be used for searching via the API or in the dashboard. Refer to [Conversations | OpenAI](https://platform.openai.com/docs/api-reference/conversations/create) documentation for more information.
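The 16-pair **Metadata** limit is easy to trip over when building metadata dynamically. A minimal pre-flight check you could run in a Code node before a create or update call might look like this sketch (the helper name is hypothetical; the 64- and 512-character key and value limits are OpenAI's general metadata constraints and are assumptions here, not taken from this page):

```javascript
// Hypothetical pre-flight check for the Metadata option. The 16-pair limit
// comes from the docs above; the 64/512-character key/value limits are
// assumed general OpenAI metadata constraints.
function validateMetadata(metadata) {
  const entries = Object.entries(metadata);
  const problems = [];
  if (entries.length > 16) problems.push(`too many pairs: ${entries.length} (max 16)`);
  for (const [key, value] of entries) {
    if (key.length > 64) problems.push(`key too long: ${key}`);
    if (String(value).length > 512) problems.push(`value too long for key: ${key}`);
  }
  return problems; // an empty array means the metadata is safe to send
}

console.log(validateMetadata({ customer_id: "cus_123", source: "n8n" })); // []
```

Dropping or renaming offending pairs before the request avoids a rejected call and a failed workflow execution.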
# OpenAI File operations Use this operation to delete, list, or upload a file in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. ## Delete a File Use this operation to delete a file from the server. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **File**. - **Operation**: Select **Delete a File**. - **File**: Enter the ID of the file to use for this operation or select the file name from the dropdown. Refer to [Delete file | OpenAI](https://platform.openai.com/docs/api-reference/files/delete) documentation for more information. ## List Files Use this operation to list files that belong to the user's organization. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **File**. - **Operation**: Select **List Files**. ### Options - **Purpose**: Use this to only return files with the given purpose. Use **Assistants** to return only files related to Assistants and Message operations. Use **Fine-Tune** for files related to [Fine-tuning](https://platform.openai.com/docs/api-reference/fine-tuning). Refer to [List files | OpenAI](https://platform.openai.com/docs/api-reference/files/list) documentation for more information. ## Upload a File Use this operation to upload a file. This can be used across various operations. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **File**. - **Operation**: Select **Upload a File**. - **Input Data Field Name**: Defaults to `data`. Enter the name of the binary property that contains the file. The size of individual files can be a maximum of 512 MB or 2 million tokens for Assistants. ### Options - **Purpose**: Enter the intended purpose of the uploaded file.
Use **Assistants** for files associated with Assistants and Message operations. Use **Fine-Tune** for [Fine-tuning](https://platform.openai.com/docs/api-reference/fine-tuning). Refer to [Upload file | OpenAI](https://platform.openai.com/docs/api-reference/files/create) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # OpenAI Image operations Use this operation to analyze, generate, or edit an image in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. ## Analyze Image Use this operation to take in images and answer questions about them. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Image**. - **Operation**: Select **Analyze Image**. - **Model**: Select the model you want to use to analyze an image. - **Text Input**: Ask a question about the image. - **Input Type**: Select how you'd like to input the image. Options include: - **Image URL(s)**: Enter the **URL(s)** of the image(s) to analyze. Add multiple URLs in a comma-separated list. - **Binary File(s)**: Enter the name of the binary property that contains the image(s) in the **Input Data Field Name**. ### Options - **Detail**: Specify the balance between response time and token usage. - **Length of Description (Max Tokens)**: Defaults to 300. Fewer tokens will result in shorter, less detailed image descriptions. Refer to [Images | OpenAI](https://platform.openai.com/docs/api-reference/images) documentation for more information. ## Generate an Image Use this operation to create an image from a text prompt. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Image**. - **Operation**: Select **Generate an Image**.
- **Model**: Select the model you want to use to generate an image. - **Prompt**: Enter the text description of the desired image(s). The maximum length is 1000 characters for `dall-e-2` and 4000 characters for `dall-e-3`. ### Options - **Quality**: The quality of the image you generate. **HD** creates images with finer details and greater consistency across the image. This option is only supported for `dall-e-3`. Otherwise, choose **Standard**. - **Resolution**: Select the resolution of the generated images. Select **1024x1024** for `dall-e-2`. Select one of **1024x1024**, **1792x1024**, or **1024x1792** for `dall-e-3` models. - **Style**: Select the style of the generated images. This option is only supported for `dall-e-3`. - **Natural**: Use this to produce more natural-looking images. - **Vivid**: Use this to produce hyper-real and dramatic images. - **Respond with image URL(s)**: Whether to return image URL(s) instead of binary file(s). - **Put Output in Field**: Defaults to `data`. Enter the name of the output field to put the binary file data in. Only available if **Respond with image URL(s)** is turned off. Refer to [Create image | OpenAI](https://platform.openai.com/docs/api-reference/images/create) documentation for more information. ## Edit an Image Use this operation to edit an image from a text prompt. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Image**. - **Operation**: Select **Edit an Image**. - **Model**: Select the model you want to use to generate an image. Supports `dall-e-2` and `gpt-image-1`. - **Prompt**: Enter the text description of the desired edits to the input image(s). - **Image(s)**: Add one or more binary fields to include images with your prompt. Each image should be a png, webp, or jpg file less than 50MB. You can provide up to 16 images. - **Number of Images**: The number of images to generate.
Must be between 1 and 10. - **Size**: The size and dimensions of the generated images (in px). - **Quality**: The quality of the image that will be generated (auto, low, medium, high, standard). Only supported for `gpt-image-1`. - **Output Format**: The format in which the generated images are returned (png, webp, or jpg). Only supported for `gpt-image-1`. - **Output Compression**: The compression level (0-100%) for the generated images. Only supported for `gpt-image-1` with webp or jpeg output formats. ### Options - **Background**: Lets you set the transparency of the background of the generated image(s). Only supported for `gpt-image-1`. - **Input Fidelity**: Control how much effort the model will exert to match the style and features of input images. Only supported for `gpt-image-1`. - **Image Mask**: Name of the binary property that contains the image. A second image whose fully transparent areas (for example, where alpha is zero) show where the image should be edited. If there are multiple images provided, the mask will be applied to the first image. Must be a valid PNG file, less than 4MB, and have the same dimensions as the image. - **User**: A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # OpenAI Text operations Use this operation to message a model or classify text for violations in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. Previous node versions n8n version 1.117.0 introduces the OpenAI node V2 that supports the OpenAI Responses API. It renames the 'Message a Model' operation to 'Generate a Chat Completion' to clarify its association with the Chat Completions API and introduces a separate 'Generate a Model Response' operation that uses the Responses API.
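The split between the two text operations mirrors the request shapes of the two underlying OpenAI APIs. As an illustrative sketch of that difference (placeholder model and prompt; this is not the node's internal code):

```javascript
// Illustrative only: the two text operations map to different OpenAI
// endpoints with different request shapes.
function chatCompletionRequest(model, prompt) {
  // "Generate a Chat Completion" → POST /v1/chat/completions
  // takes a `messages` array of { role, content } entries.
  return { model, messages: [{ role: "user", content: prompt }] };
}

function responsesRequest(model, prompt) {
  // "Generate a Model Response" → POST /v1/responses
  // takes a single `input` (a string or an array of items).
  return { model, input: prompt };
}

const a = chatCompletionRequest("gpt-4o-mini", "Say hello");
const b = responsesRequest("gpt-4o-mini", "Say hello");
console.log(Object.keys(a)); // [ 'model', 'messages' ]
console.log(Object.keys(b)); // [ 'model', 'input' ]
```

This is also why conversation state is handled differently: Chat Completions expects you to resend prior messages, while the Responses API can reference stored conversations.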
## Generate a Chat Completion Use this operation to send a message or prompt to an OpenAI model - using the Chat Completions API - and receive a response. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Text**. - **Operation**: Select **Generate a Chat Completion**. - **Model**: Select the model you want to use. If you’re not sure which model to use, try `gpt-4o` if you need high intelligence or `gpt-4o-mini` if you need the fastest speed and lowest cost. Refer to [Models overview | OpenAI Platform](https://platform.openai.com/docs/models) for more information. - **Messages**: Enter a **Text** prompt and assign a **Role** that the model will use to generate responses. Refer to [Prompt engineering | OpenAI](https://platform.openai.com/docs/guides/prompt-engineering) for more information on how to write a better prompt by using these roles. Choose from one of these roles: - **User**: Sends a message as a user and gets a response from the model. - **Assistant**: Tells the model to adopt a specific tone or personality. - **System**: By default, there is no system message. You can define instructions in the user message, but the instructions set in the system message are more effective. You can set more than one system message per conversation. Use this to set the model's behavior or context for the next user message. - **Simplify Output**: Turn on to return a simplified version of the response instead of the raw data. - **Output Content as JSON**: Turn on to attempt to return the response in JSON format. Compatible with `GPT-4 Turbo` and all `GPT-3.5 Turbo` models newer than `gpt-3.5-turbo-1106`. ### Options - **Frequency Penalty**: Apply a penalty to reduce the model's tendency to repeat similar lines. The range is between `0.0` and `2.0`. - **Maximum Number of Tokens**: Set the maximum number of tokens for the response. 
One token is roughly four characters for standard English text. Use this to limit the length of the output. - **Number of Completions**: Defaults to 1. Set the number of completions you want to generate for each prompt. Use carefully since setting a high number will quickly consume your tokens. - **Presence Penalty**: Apply a penalty to influence the model to discuss new topics. The range is between `0.0` and `2.0`. - **Output Randomness (Temperature)**: Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or **Output Randomness (Top P)** but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they’re too chaotic or off-track, decrease it. Defaults to `1.0`. - **Output Randomness (Top P)**: Adjust the Top P setting to control the diversity of the assistant's responses. For example, `0.5` means half of all likelihood-weighted options are considered. We recommend altering this or **Output Randomness (Temperature)** but not both. Defaults to `1.0`. Refer to [Chat Completions | OpenAI](https://platform.openai.com/docs/api-reference/chat) documentation for more information. ## Generate a Model Response Use this operation to send a message or prompt to an OpenAI model - using the Responses API - and receive a response. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Text**. - **Operation**: Select **Generate a Model Response**. - **Model**: Select the model you want to use. Refer to [Models overview | OpenAI Platform](https://platform.openai.com/docs/models) for an overview. - **Messages**: Choose one of these **Message Types**: - **Text**: Enter a **Text** prompt and assign a **Role** that the model will use to generate responses. 
Refer to [Prompt engineering | OpenAI](https://platform.openai.com/docs/guides/prompt-engineering) for more information on how to write a better prompt by using these roles. - **Image**: Provide an **Image** either through an Image URL, a File ID (using the [OpenAI Files API](https://platform.openai.com/docs/api-reference/files)) or by passing binary data from an earlier node in your workflow. - **File**: Provide a **File** in a supported format (currently: PDF only), either through a File URL, a File ID (using the [OpenAI Files API](https://platform.openai.com/docs/api-reference/files)) or by passing binary data from an earlier node in your workflow. - For any message type, you can choose one of these roles: - **User**: Sends a message as a user and gets a response from the model. - **Assistant**: Tells the model to adopt a specific tone or personality. - **System**: By default, the system message is `"You are a helpful assistant"`. You can define instructions in the user message, but the instructions set in the system message are more effective. You can only set one system message per conversation. Use this to set the model's behavior or context for the next user message. - **Simplify Output**: Turn on to return a simplified version of the response instead of the raw data. ### Built-in Tools The OpenAI Responses API provides a range of [built-in tools](https://platform.openai.com/docs/guides/tools) to enrich the model's response: - **Web Search**: Allows models to search the web for the latest information before generating a response. - **MCP Servers**: Allows models to connect to remote MCP servers. Find out more about using remote MCP servers as tools [here](https://platform.openai.com/docs/guides/tools-connectors-mcp). - **File Search**: Allows models to search your knowledge base of previously uploaded files for relevant information before generating a response. 
Refer to the [OpenAI documentation](https://platform.openai.com/docs/guides/tools-file-search) for more information. - **Code Interpreter**: Allows models to write and run Python code in a sandboxed environment. ### Options - **Maximum Number of Tokens**: Set the maximum number of tokens for the response. One token is roughly four characters for standard English text. Use this to limit the length of the output. - **Output Randomness (Temperature)**: Adjust the randomness of the response. The range is between `0.0` (deterministic) and `1.0` (maximum randomness). We recommend altering this or **Output Randomness (Top P)** but not both. Start with a medium temperature (around `0.7`) and adjust based on the outputs you observe. If the responses are too repetitive or rigid, increase the temperature. If they’re too chaotic or off-track, decrease it. Defaults to `1.0`. - **Output Randomness (Top P)**: Adjust the Top P setting to control the diversity of the assistant's responses. For example, `0.5` means half of all likelihood-weighted options are considered. We recommend altering this or **Output Randomness (Temperature)** but not both. Defaults to `1.0`. - **Conversation ID**: The conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation after this response completes. - **Previous Response ID**: The ID of the previous response to continue from. Can't be used in conjunction with Conversation ID. - **Reasoning**: The level of reasoning effort the model should spend to generate the response. Includes the ability to return a **Summary** of the reasoning performed by the model (for example, for debugging purposes). - **Store**: Whether to store the generated model response for later retrieval via API. Defaults to `true`. - **Output Format**: Whether to return the response as **Text**, in a specified **JSON Schema** or as a **JSON Object**. 
- **Background**: Whether to run the model in [background mode](https://platform.openai.com/docs/guides/background). This allows you to execute long-running tasks more reliably. Refer to [Responses | OpenAI](https://platform.openai.com/docs/api-reference/responses/create) documentation for more information. ## Classify Text for Violations Use this operation to identify and flag content that might be harmful. The OpenAI model analyzes the text and returns a response containing: - `flagged`: A boolean field indicating if the content is potentially harmful. - `categories`: A list of category-specific violation flags. - `category_scores`: Scores for each category. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Text**. - **Operation**: Select **Classify Text for Violations**. - **Text Input**: Enter the text to check for violations of the moderation policy. - **Simplify Output**: Turn on to return a simplified version of the response instead of the raw data. ### Options - **Use Stable Model**: Turn on to use the stable version of the model instead of the latest version; accuracy may be slightly lower. Refer to [Moderations | OpenAI](https://platform.openai.com/docs/api-reference/moderations) documentation for more information. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](../common-issues/). # OpenAI Video operations Use this operation to generate a video in OpenAI. Refer to [OpenAI](../) for more information on the OpenAI node itself. ## Generate Video Use this operation to generate a video from a text prompt. Enter these parameters: - **Credential to connect with**: Create or select an existing [OpenAI credential](../../../credentials/openai/). - **Resource**: Select **Video**. - **Operation**: Select **Generate Video**. - **Model**: Select the model you want to use to generate a video. 
Currently supports `sora-2` and `sora-2-pro`. - **Prompt**: The prompt to generate a video from. - **Seconds**: Clip duration in seconds (up to 25). - **Size**: Output resolution formatted as width x height. 1024x1792 and 1792x1024 are only supported by Sora 2 Pro. ### Options - **Reference**: Optional image reference that guides generation. Has to be passed in as a binary item. - **Wait Timeout**: Time to wait for the video to be generated in seconds. Defaults to 300. - **Output Field Name**: The name of the output field to put the binary file data in. Defaults to `data`. Refer to [Video Generation | OpenAI](https://platform.openai.com/docs/guides/video-generation) for more information. # Cluster nodes [Cluster nodes](../../../glossary/#cluster-node-n8n) are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a [root node](../../../glossary/#root-node-n8n) and one or more [sub-nodes](../../../glossary/#sub-node-n8n) that extend the functionality of the node. ## Root nodes Each cluster starts with one [root node](../../../glossary/#root-node-n8n). ## Sub-nodes Each root node can have one or more [sub-nodes](../../../glossary/#sub-node-n8n) attached to it. # Root nodes Root nodes are the foundational nodes within a group of cluster nodes. [Cluster nodes](../../../../glossary/#cluster-node-n8n) are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a [root node](../../../../glossary/#root-node-n8n) and one or more [sub-nodes](../../../../glossary/#sub-node-n8n) that extend the functionality of the node. # Basic LLM Chain node Use the Basic LLM Chain node to set the prompt that the model will use along with setting an optional parser for the response. On this page, you'll find the node parameters for the Basic LLM Chain node and links to more resources. 
Examples and templates For usage examples and templates to help you get started, refer to n8n's [Basic LLM Chain integrations](https://n8n.io/integrations/basic-llm-chain/) page. ## Node parameters ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ### Require Specific Output Format This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node: - [Auto-fixing Output Parser](../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../sub-nodes/n8n-nodes-langchain.outputparserstructured/) ## Chat Messages Use **Chat Messages** when you're using a chat model to set a message. n8n ignores these options if you don't connect a chat model. Select the **Type Name or ID** you want the node to use: #### AI Enter a sample expected response in the **Message** field. The model will try to respond in the same way in its messages. #### System Enter a system **Message** to include with the user input to help guide the model in what it should do. Use this option for things like defining tone, for example: `Always respond talking like a pirate`. #### User Enter a sample user input. Using this with the AI option can help improve the output of the agent. Using both together provides a sample of an input and expected response (the **AI Message**) for the model to follow. Select one of these input types: - **Text**: Enter a sample user input as a text **Message**. 
- **Image (Binary)**: Select a binary input from a previous node. Enter the **Image Data Field Name** to identify which binary field from the previous node contains the image data. - **Image (URL)**: Use this option to feed an image in from a URL. Enter the **Image URL**. For both the **Image** types, select the **Image Details** to control how the model processes the image and generates its textual understanding. Choose from: - **Auto**: The model uses the auto setting, which looks at the image input size and decides whether to use the Low or High setting. - **Low**: The model receives a low-resolution 512px x 512px version of the image and represents the image with a budget of 65 tokens. This allows the API to return faster responses and consume fewer input tokens. Use this option for use cases that don't require high detail. - **High**: The model can access the low-resolution image and then creates detailed crops of input images as 512px squares based on the input image size. Each of the detailed crops uses twice the token budget (65 tokens) for a total of 129 tokens. Use this option for use cases that require high detail. 
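Taken together, the Chat Messages options above roughly correspond to a chat-style messages array. A minimal sketch, assuming the common OpenAI-style message format (the texts and image URL are placeholders; the payload the node actually builds may differ):

```javascript
// A System message, a sample User/AI pair, and a User image with Low detail.
const messages = [
  { role: "system", content: "Always respond talking like a pirate" },
  // Sample user input paired with a sample expected response (the AI Message):
  { role: "user", content: "How do I reset my password?" },
  { role: "assistant", content: "Arr, click 'Forgot password', matey!" },
  // Image (URL) input using the Low image-detail setting (65-token budget):
  {
    role: "user",
    content: [
      { type: "image_url", image_url: { url: "https://example.com/photo.png", detail: "low" } },
    ],
  },
];
```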
## Templates and examples **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) **Respond to WhatsApp Messages with AI Like a Pro!** by Jimleuk [View template details](https://n8n.io/workflows/2466-respond-to-whatsapp-messages-with-ai-like-a-pro/) **⚡AI-Powered YouTube Video Summarization & Analysis** by Joseph LePage [View template details](https://n8n.io/workflows/2679-ai-powered-youtube-video-summarization-and-analysis/) [Browse Basic LLM Chain integration templates](https://n8n.io/integrations/basic-llm-chain/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on Basic LLM Chains](https://js.langchain.com/docs/tutorials/llm_chain/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues Here are some common errors and issues with the Basic LLM Chain node and steps to resolve or troubleshoot them. ### No prompt specified error This error displays when the **Prompt** is empty or invalid. You might see this error in one of two scenarios: 1. When you've set the **Prompt** to **Define below** and haven't entered anything in the **Text** field. - To resolve, enter a valid prompt in the **Text** field. 1. When you've set the **Prompt** to **Connected Chat Trigger Node** and the incoming data has no field called `chatInput`. - The node expects the `chatInput` field. If your previous node doesn't have this field, add an [Edit Fields (Set)](../../../core-nodes/n8n-nodes-base.set/) node to edit an incoming field name to `chatInput`. # Summarization Chain node Use the Summarization Chain node to summarize multiple documents. On this page, you'll find the node parameters for the Summarization Chain node, and links to more resources. ## Node parameters Choose the type of data you need to summarize in **Data to Summarize**. 
The data type you choose determines the other node parameters. - **Use Node Input (JSON)** and **Use Node Input (Binary)**: summarize the data coming into the node from the workflow. - You can configure the **Chunking Strategy**: choose what strategy to use to define the data chunk sizes. - If you choose **Simple (Define Below)** you can then set **Characters Per Chunk** and **Chunk Overlap (Characters)**. - Choose **Advanced** if you want to connect a splitter sub-node that provides more configuration options. - **Use Document Loader**: summarize data provided by a document loader sub-node. ## Node Options You can configure the summarization method and prompts. Select **Add Option** > **Summarization Method and Prompts**. Options in **Summarization Method**: - **Map Reduce**: this is the recommended option. Learn more about [Map Reduce](https://js.langchain.com/v0.1/docs/modules/chains/document/map_reduce/) in the LangChain documentation. - **Refine**: learn more about [Refine](https://js.langchain.com/v0.1/docs/modules/chains/document/refine/) in the LangChain documentation. - **Stuff**: learn more about [Stuff](https://js.langchain.com/v0.1/docs/modules/chains/document/stuff/) in the LangChain documentation. You can customize the **Individual Summary Prompts** and the **Final Prompt to Combine**. There are examples in the node. You must include the `"{text}"` placeholder. 
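As an illustration of what **Characters Per Chunk** and **Chunk Overlap (Characters)** control, here is a simplified character-based splitter (a sketch only, not the node's actual implementation):

```javascript
// Split text into chunks of up to `charsPerChunk` characters, repeating the
// last `overlap` characters of each chunk at the start of the next one.
function chunkText(text, charsPerChunk, overlap) {
  if (overlap >= charsPerChunk) throw new Error("overlap must be smaller than chunk size");
  const chunks = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + charsPerChunk));
    start += charsPerChunk - overlap;
  }
  return chunks;
}

// 10 characters, 4 per chunk, 1 character of overlap:
const chunks = chunkText("abcdefghij", 4, 1);
```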
## Templates and examples **Scrape and summarize webpages with AI** by n8n Team [View template details](https://n8n.io/workflows/1951-scrape-and-summarize-webpages-with-ai/) **⚡AI-Powered YouTube Video Summarization & Analysis** by Joseph LePage [View template details](https://n8n.io/workflows/2679-ai-powered-youtube-video-summarization-and-analysis/) **AI Automated HR Workflow for CV Analysis and Candidate Evaluation** by Davide Boizza [View template details](https://n8n.io/workflows/2860-ai-automated-hr-workflow-for-cv-analysis-and-candidate-evaluation/) [Browse Summarization Chain integration templates](https://n8n.io/integrations/summarization-chain/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on summarization](https://js.langchain.com/docs/tutorials/summarization/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # LangChain Code node Use the LangChain Code node to import LangChain. This means if there is functionality you need that n8n hasn't created a node for, you can still use it. By configuring the LangChain Code node connectors you can use it as a normal node, root node or sub-node. On this page, you'll find the node parameters, guidance on configuring the node, and links to more resources. Not available on Cloud This node is only available on self-hosted n8n. ## Node parameters ### Add Code Add your custom code. Choose either **Execute** or **Supply Data** mode. You can only use one mode. Unlike the [Code node](../../../core-nodes/n8n-nodes-base.code/), the LangChain Code node doesn't support Python. - **Execute**: use the LangChain Code node like n8n's own Code node. This takes input data from the workflow, processes it, and returns it as the node output. This mode requires a main input and output. You must create these connections in **Inputs** and **Outputs**. 
- **Supply Data**: use the LangChain Code node as a sub-node, sending data to a root node. This uses an output other than main. By default, you can't load built-in or external modules in this node. Self-hosted users can [enable built-in and external modules](../../../../../hosting/configuration/configuration-methods/). ### Inputs Choose the input types. The main input is the normal connector found in all n8n workflows. If you have a main input and output set in the node, **Execute** code is required. ### Outputs Choose the output types. The main output is the normal connector found in all n8n workflows. If you have a main input and output set in the node, **Execute** code is required. ## Node inputs and outputs configuration By configuring the LangChain Code node connectors (inputs and outputs) you can use it as an app node, root node or sub-node. | Node type | Inputs | Outputs | Code mode | | ------------------------------------------------------------------------------- | ----------------------------- | ------------------------------------------------------------------------- | ----------- | | App node. Similar to the [Code node](../../../core-nodes/n8n-nodes-base.code/). | Main | Main | Execute | | Root node | Main; at least one other type | Main | Execute | | Sub-node | - | A type other than main. Must match the input type you want to connect to. | Supply Data | | Sub-node with sub-nodes | A type other than main | A type other than main. Must match the input type you want to connect to. | Supply Data | ## Built-in methods n8n provides these methods to make it easier to perform common tasks in the LangChain Code node. 
| Method | Description | | --- | --- | | `this.addInputData(inputName, data)` | Populate the data of a specified non-main input. Useful for mocking data. - `inputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `data` contains the data you want to add. Refer to [Data structure](../../../../../data/data-structure/) for information on the data structure expected by n8n. | | `this.addOutputData(outputName, data)` | Populate the data of a specified non-main output. Useful for mocking data. - `outputName` is the output connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `data` contains the data you want to add. Refer to [Data structure](../../../../../data/data-structure/) for information on the data structure expected by n8n. | | `this.getInputConnectionData(inputName, itemIndex, inputIndex?)` | Get data from a specified non-main input. 
- `inputName` is the input connection type, and must be one of: `ai_agent`, `ai_chain`, `ai_document`, `ai_embedding`, `ai_languageModel`, `ai_memory`, `ai_outputParser`, `ai_retriever`, `ai_textSplitter`, `ai_tool`, `ai_vectorRetriever`, `ai_vectorStore` - `itemIndex` should always be `0` (this parameter will be used in upcoming functionality) - Use `inputIndex` if there is more than one node connected to the specified input. | | `this.getInputData(inputIndex?, inputName?)` | Get data from the main input. | | `this.getNode()` | Get the current node. | | `this.getNodeOutputs()` | Get the outputs of the current node. | | `this.getExecutionCancelSignal()` | Use this to stop the execution of a function when the workflow stops. In most cases n8n handles this, but you may need to use it if building your own chains or agents. It replaces the [Cancelling a running LLMChain](https://js.langchain.com/docs/modules/chains/foundational/llm_chain#cancelling-a-running-llmchain) code that you'd use if building a LangChain application normally. | ## Templates and examples **🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant** by Joseph LePage [View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/) **Custom LangChain agent written in JavaScript** by n8n Team [View template details](https://n8n.io/workflows/1955-custom-langchain-agent-written-in-javascript/) **Use any LangChain module in n8n (with the LangChain code node)** by David Roberts [View template details](https://n8n.io/workflows/2082-use-any-langchain-module-in-n8n-with-the-langchain-code-node/) [Browse LangChain Code integration templates](https://n8n.io/integrations/langchain-code/), or [search all templates](https://n8n.io/workflows/) ## Related resources View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
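As a sketch of the Execute mode described above: the code reads main-input items with `this.getInputData()` and returns items in n8n's data structure. Here the node-provided `this` context is mocked so the pattern can run standalone (the mock is illustrative, not part of n8n's API):

```javascript
// Execute-mode body: transform each main-input item and return the results.
function executeBody() {
  const items = this.getInputData(); // main input, as in the LangChain Code node
  return items.map((item) => ({ json: { upper: item.json.text.toUpperCase() } }));
}

// Mock context standing in for the node-provided `this` (illustration only).
const mockContext = {
  getInputData: () => [{ json: { text: "hello" } }, { json: { text: "world" } }],
};

const output = executeBody.call(mockContext);
```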
# Information Extractor node Use the Information Extractor node to extract structured information from incoming data. On this page, you'll find the node parameters for the Information Extractor node, and links to more resources. ## Node parameters - **Text** defines the input text to extract information from. This is usually an expression that references a field from the input items. For example, this could be `{{ $json.chatInput }}` if the input is a chat trigger, or `{{ $json.text }}` if a previous node is Extract from PDF. - Use **Schema Type** to choose how you want to describe the desired output data format. You can choose between: - **From Attribute Descriptions**: This option allows you to define the schema by specifying the list of attributes and their descriptions. - **Generate From JSON Example**: Input an example JSON object to automatically generate the schema. The node uses the object property types and names. It ignores the actual values. n8n treats every field as mandatory when generating schemas from JSON examples. - **Define using JSON Schema**: Manually input the JSON schema. Read the JSON Schema [guides and examples](https://json-schema.org/learn/miscellaneous-examples) for help creating a valid JSON schema. ## Node options - **System Prompt Template**: Use this option to change the system prompt that's used for the information extraction. n8n automatically appends format specification instructions to the prompt. ## Related resources View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Sentiment Analysis node Use the Sentiment Analysis node to analyze the sentiment of incoming text data. The language model uses the [**Sentiment Categories**](#node-options) in the node options to determine each item's sentiment. ## Node parameters - **Text to Analyze** defines the input text for sentiment analysis. This is an expression that references a field from the input items. 
For example, this could be `{{ $json.chatInput }}` if the input is from a chat or message source. By default, it expects a `text` field. ## Node options - **Sentiment Categories**: Define the categories that you want to classify your input as. - By default, these are `Positive, Neutral, Negative`. You can customize these categories to fit your specific use case, such as `Very Positive, Positive, Neutral, Negative, Very Negative` for more granular analysis. - **Include Detailed Results**: When turned on, this option includes sentiment strength and confidence scores in the output. Note that these scores are estimates generated by the language model and are rough indicators rather than precise measurements. - **System Prompt Template**: Use this option to change the system prompt that's used for the sentiment analysis. It uses the `{categories}` placeholder for the categories. - **Enable Auto-Fixing**: When turned on, the node automatically fixes model outputs to ensure they match the expected format. It does this by sending the schema parsing error to the LLM and asking it to fix it. ## Usage Notes ### Model Temperature Setting It's strongly advised to set the temperature of the connected language model to 0 or a value close to 0. This helps ensure that the results are as deterministic as possible, providing more consistent and reliable sentiment analysis across multiple runs. ### Language Considerations The node's performance may vary depending on the language of the input text. For best results, ensure your chosen language model supports the input language. ### Processing Large Volumes When analyzing large amounts of text, consider splitting the input into smaller chunks to optimize processing time and resource usage. ### Iterative Refinement For complex sentiment analysis tasks, you may need to iteratively refine the system prompt and categories to achieve the desired results. ## Example Usage ### Basic Sentiment Analysis 1. 
Connect a data source (for example, RSS Feed, HTTP Request) to the Sentiment Analysis node. 1. Set the "Text to Analyze" field to the relevant item property (for example, `{{ $json.content }}` for blog post content). 1. Keep the default sentiment categories. 1. Connect the node's outputs to separate paths for processing positive, neutral, and negative sentiments differently. ### Custom Category Analysis 1. Change the **Sentiment Categories** to `Excited, Happy, Neutral, Disappointed, Angry`. 1. Adjust your workflow to handle these five output categories. 1. Use this setup to analyze customer feedback with more nuanced emotional categories. ## Related resources View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Text Classifier node Use the Text Classifier node to classify (categorize) incoming data. Using the categories provided in the parameters (see below), each item is passed to the model to determine its category. On this page, you'll find the node parameters for the Text Classifier node, and links to more resources. ## Node parameters - **Input Prompt** defines the input to classify. This is usually an expression that references a field from the input items. For example, this could be `{{ $json.chatInput }}` if the input is a chat trigger. By default it references the `text` field. - **Categories**: Add the categories that you want to classify your input as. Categories have a name and a description. Use the description to tell the model what the category means. This is important if the meaning isn't obvious. You can add as many categories as you like. ## Node options - **Allow Multiple Classes To Be True**: You can configure the classifier to always output a single class per item (turned off), or allow the model to select multiple classes (turned on). - **When No Clear Match**: Define what happens if the model can't find a good match for an item. 
There are two options: - **Discard Item** (the default): If the node doesn't detect any of the categories, it drops the item. - **Output on Extra, 'Other' Branch**: Creates a separate output branch called **Other**. When the node doesn't detect any of the categories, it outputs items in this branch. - **System Prompt Template**: Use this option to change the system prompt that's used for the classification. It uses the `{categories}` placeholder for the categories. - **Enable Auto-Fixing**: When turned on, the node automatically fixes model outputs to ensure they match the expected format. It does this by sending the schema parsing error to the LLM and asking it to fix it. ## Related resources View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Azure AI Search Vector Store node Azure AI Search (formerly Azure Cognitive Search) is a cloud search service with vector search capabilities for RAG and semantic search applications. Use this node to store, retrieve, and query vector embeddings alongside their content and metadata. On this page, you'll find the node parameters for the Azure AI Search Vector Store node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/azureaisearch/). Parameter resolution in sub-nodes Sub-nodes behave differently from other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Prerequisites Before using this node, you need: 1. 
An [Azure subscription](https://azure.microsoft.com) 1. An [Azure AI Search service](https://learn.microsoft.com/azure/search/search-create-service-portal) 1. API key authentication configured (admin key for write operations, query key for read-only) See [credentials documentation](../../../credentials/azureaisearch/) for setup instructions. ### Index configuration The node automatically creates indexes if they don't exist. When auto-creating, the node configures: - Vector fields with appropriate dimensions based on your embeddings model - HNSW algorithm for efficient similarity search with cosine metric - Content and metadata fields for filtering and retrieval You can also pre-create indexes in Azure Portal for custom configurations. Example schema:

```json
{
  "name": "n8n-vectorstore",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "content", "type": "Edm.String", "searchable": true },
    {
      "name": "content_vector",
      "type": "Collection(Edm.Single)",
      "searchable": true,
      "vectorSearchDimensions": 1536,
      "vectorSearchProfileName": "n8n-vector-profile"
    },
    { "name": "metadata", "type": "Edm.String", "filterable": true }
  ],
  "vectorSearch": {
    "profiles": [
      { "name": "n8n-vector-profile", "algorithm": "n8n-vector-algorithm" }
    ],
    "algorithms": [
      {
        "name": "n8n-vector-algorithm",
        "kind": "hnsw",
        "hnswParameters": {
          "metric": "cosine",
          "m": 4,
          "efConstruction": 400,
          "efSearch": 500
        }
      }
    ]
  }
}
```

Vector dimensions The `vectorSearchDimensions` value must match your embeddings model output. ## Node usage patterns ### Use as a regular node to insert and retrieve documents Use the node directly in workflows to insert or retrieve documents without an agent. See [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) for an example pattern (uses Supabase, but the pattern is identical).
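The index-configuration note above stresses that `vectorSearchDimensions` must match the output size of your embeddings model. A minimal sketch of that pre-flight check (hypothetical helper, not part of n8n or the Azure SDK; the dimension values listed come from the respective model docs):

```python
# Hypothetical pre-flight check: confirm the embedding size matches the
# index's vector field before uploading documents.

# Output dimensions of some common embedding models.
KNOWN_MODEL_DIMS = {
    "text-embedding-ada-002": 1536,
    "text-embedding-3-small": 1536,
    "text-embedding-3-large": 3072,
}

def check_dimensions(embedding: list[float], index_dims: int) -> None:
    """Raise early instead of letting the upload fail server-side."""
    if len(embedding) != index_dims:
        raise ValueError(
            f"Embedding has {len(embedding)} dimensions, "
            f"but the index expects {index_dims}."
        )

# Example: a 1536-dimension vector against the schema shown above.
vector = [0.0] * KNOWN_MODEL_DIMS["text-embedding-ada-002"]
check_dimensions(vector, 1536)  # passes silently
```

Running a check like this before an insert turns an opaque server-side "vector dimension mismatch" failure into an immediate, descriptive error.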
### Connect directly to an AI agent as a tool Connect to an [AI agent's](../n8n-nodes-langchain.agent/) tool connector to use the vector store as a searchable knowledge base: AI agent (tools connector) → Azure AI Search Vector Store node ### Use a retriever to fetch documents Use with [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) and [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) for retrieval-augmented generation: Question and Answer Chain (Retriever) → Vector Store Retriever (Vector Store) → Azure AI Search Vector Store See [this example workflow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/). ### Use the Vector Store Question Answer Tool Use [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize and answer questions: AI agent (tools) → Vector Store Question Answer Tool (Vector Store) → Azure AI Search Vector Store See [this example](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/). ## Node parameters This Vector Store node has five modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, **Retrieve Documents (As Tool for AI Agent)**, and **Update Documents**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt will be embedded and used for similarity search. The node will return the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use Insert Documents mode to insert new documents into your vector database. 
#### Retrieve Documents (As Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (As Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. #### Update Documents Use Update Documents mode to update documents in a vector database by ID. Fill in the **ID** with the ID of the embedding entry to update. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. Azure AI Search semantic reranking Azure AI Search has built-in [semantic reranking](https://learn.microsoft.com/azure/search/semantic-search-overview) available when you use **Semantic Hybrid** query mode with a semantic configuration. To use it: 1. Set **Query Mode** to **Semantic Hybrid** in Options 1. Set **Semantic Configuration** to your configuration name (defaults to `semantic-search-config` if not specified) The built-in semantic reranker uses machine learning models to improve relevance. You can chain an additional reranking node after semantic reranking for further refinement. [Semantic reranking](https://learn.microsoft.com/azure/search/semantic-search-overview) is only available if your index has a semantic configuration defined. 
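**Get Many** mode, described above, boils down to nearest-neighbor search: the node embeds your prompt, scores every stored vector against it, and returns the top matches with their similarity scores. A toy sketch of the idea using cosine similarity (plain Python for illustration; the real query runs inside Azure AI Search):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def get_many(prompt_vector, store, limit=4):
    """Return the `limit` most similar documents with their scores."""
    scored = [
        (doc["content"], cosine_similarity(prompt_vector, doc["vector"]))
        for doc in store
    ]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:limit]

# Tiny 2-dimensional store, purely for illustration.
store = [
    {"content": "invoice policy", "vector": [1.0, 0.0]},
    {"content": "travel guide",   "vector": [0.0, 1.0]},
    {"content": "billing FAQ",    "vector": [0.9, 0.1]},
]
results = get_many([1.0, 0.0], store, limit=2)
# The two billing-related documents rank highest.
```

The similarity score returned alongside each document is what lets downstream nodes, or an agent, weigh how relevant each piece of context is.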
### Get Many parameters - **Endpoint**: Your Azure AI Search endpoint (format: `https://your-service.search.windows.net`) - **Index Name**: The index to query - **Limit**: Maximum documents to return (default: 4) ### Insert Documents parameters - **Endpoint**: Your Azure AI Search endpoint - **Index Name**: The index to use (created automatically if it doesn't exist) - **Batch Size**: Number of documents uploaded per batch to Azure AI Search. Adjust based on document size and your service tier limits. This controls upload batching only—embedding generation batching is configured in embedding nodes. ### Update Documents parameters - **Endpoint**: Your Azure AI Search endpoint - **Index Name**: The index to update ### Retrieve Documents parameters (As Vector Store for Chain/Tool) - **Endpoint**: Your Azure AI Search endpoint - **Index Name**: The index to query ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: Tool name shown to the LLM - **Description**: Explain to the LLM what this tool does. Be specific to help the LLM choose when to use this tool. - **Endpoint**: Your Azure AI Search endpoint - **Index Name**: The index to query - **Limit**: Maximum results to retrieve (e.g., `10` for ten best matches) ## Node options ### Options - **Filter**: [OData filter expression](https://learn.microsoft.com/azure/search/search-query-odata-filter) to filter results by document fields or metadata. See filter examples below. 
- **Query Mode**: Search strategy to use: - **Vector**: Similarity search using embeddings only - **Keyword**: Full-text search using BM25 ranking - **Hybrid** (default): Combines vector and keyword search with Reciprocal Rank Fusion (RRF) - **Semantic Hybrid**: Hybrid search with [semantic reranking](https://learn.microsoft.com/azure/search/semantic-search-overview) for improved relevance - **Semantic Configuration**: Name of the semantic configuration to use for [semantic ranking](https://learn.microsoft.com/azure/search/semantic-search-overview). Defaults to `semantic-search-config` if not specified. Only required if you pre-created an index with a custom semantic configuration name. Query mode selection Use **Vector** for semantic similarity, **Keyword** for exact term matching, **Hybrid** for balanced results, or **Semantic Hybrid** when you've configured semantic search in your index for maximum relevance. ### OData filter examples Azure AI Search uses [OData syntax](https://learn.microsoft.com/azure/search/search-query-odata-filter) for filtering. Metadata fields are accessed using `metadata/fieldName` format. 
**Filter by document ID:** ``` id eq '3da6491a-f930-4a4e-9471-c05dcd450ba0' ``` **Filter by metadata field:** ``` metadata/source eq 'user-guide' ``` **Complex AND filter:** ``` metadata/category eq 'technology' and metadata/author eq 'John' ``` **Complex OR filter:** ``` metadata/source eq 'user-guide' or metadata/rating ge 4 ``` **Numeric comparison:** ``` metadata/rating ge 4 and metadata/rating lt 10 ``` **String matching with NOT:** ``` metadata/category eq 'technology' and metadata/title ne 'Deprecated' ``` **Supported OData operators:** - Comparison: `eq`, `ne`, `gt`, `ge`, `lt`, `le` - Logical: `and`, `or`, `not` - String functions: `startswith()`, `endswith()`, `contains()` - Collection functions: `any()`, `all()` Filter format Filters work across all query modes (Vector, Keyword, Hybrid, Semantic Hybrid) and all operation modes (retrieve, load, retrieve-as-tool). ## Azure AI Search specific features ### Hybrid search with RRF Azure AI Search's hybrid search uses Reciprocal Rank Fusion to merge vector and keyword results, providing better accuracy than either method alone. ### [Semantic ranking](https://learn.microsoft.com/azure/search/semantic-search-overview) Semantic Hybrid mode applies machine learning models to rerank results based on semantic understanding of your query. This requires a semantic configuration in your index. ### OData filters Use OData syntax to filter by document fields or metadata before vector search executes. This improves performance and precision when you need results from specific sources or with certain attributes. ### HNSW algorithm Azure AI Search uses Hierarchical Navigable Small World (HNSW) graphs for approximate nearest neighbor search, providing fast retrieval at scale with configurable accuracy/speed tradeoffs. ## Troubleshooting ### Index issues **Index not found**: Verify the index name is correct (case-sensitive) and exists in your Azure AI Search service. 
If using auto-creation, check that the index was created successfully. **Vector dimension mismatch**: Ensure your embedding model dimensions match the index vector field dimensions. Check the index schema to confirm the `vectorSearchDimensions` setting. **Document insert failures**: - Verify write permissions (admin API key required) - Check document fields match your index schema - Ensure required fields are provided in documents - Review batch size settings if experiencing timeouts with large document sets ### Filter issues **Filter not working**: - Verify OData syntax is correct - Ensure metadata fields use `metadata/` prefix: `metadata/source eq 'value'` - Check that filtered fields are marked as `filterable` in your index schema - Test with simple filters first (`id eq 'value'`) before complex expressions **Invalid OData syntax**: - Use single quotes for string values: `metadata/source eq 'value'` - Use proper operators: `eq`, `ne`, `gt`, `ge`, `lt`, `le`, `and`, `or`, `not` - Refer to [OData filter documentation](https://learn.microsoft.com/azure/search/search-query-odata-filter) for syntax details ### Connection issues **Unable to connect**: - Verify endpoint URL format: `https://your-service.search.windows.net` - Confirm your Azure AI Search service is running and accessible - Check network security groups, firewall rules, and private endpoint configurations - For Azure-hosted n8n, verify virtual network peering or service endpoint configuration if using private endpoints ### Authentication issues For authentication troubleshooting including API key errors, refer to the [credentials documentation troubleshooting section](../../../credentials/azureaisearch/#troubleshooting). 
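The Reciprocal Rank Fusion used by the hybrid search feature described above merges the vector and keyword result lists by scoring each document as the sum of 1/(k + rank) over the lists it appears in. A toy sketch of the algorithm (k=60 is the constant commonly used in the RRF literature; Azure's internal parameters may differ):

```python
def rrf_merge(rankings: list[list[str]], k: int = 60) -> list[str]:
    """Merge several ranked result lists with Reciprocal Rank Fusion.

    Each document scores sum(1 / (k + rank)) across the lists it appears in,
    so items ranked well in *both* lists float to the top.
    """
    scores: dict[str, float] = {}
    for ranking in rankings:
        for rank, doc_id in enumerate(ranking, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_results = ["doc-a", "doc-b", "doc-c"]   # ranked by embedding similarity
keyword_results = ["doc-b", "doc-d", "doc-a"]  # ranked by BM25
merged = rrf_merge([vector_results, keyword_results])
# doc-b and doc-a appear in both lists, so they outrank doc-c and doc-d.
```

Because rank positions, not raw scores, drive the fusion, RRF needs no score normalization between the vector and keyword rankings, which is why it works well for merging heterogeneous result lists.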
## Templates and examples **Build an AI IT Support Agent with Azure Search, Entra ID & Jira** by Adam Bertram [View template details](https://n8n.io/workflows/4560-build-an-ai-it-support-agent-with-azure-search-entra-id-and-jira/) **💾 Generate Blog Posts on Autopilot with GPT‑5, Tavily and WordPress** by N8ner [View template details](https://n8n.io/workflows/12858-generate-blog-posts-on-autopilot-with-gpt5-tavily-and-wordpress/) **Find Valid Vouchers and Promo Codes with SerpAPI, Decodo, and GPT-5 Mini** by Khaisa Studio [View template details](https://n8n.io/workflows/8075-find-valid-vouchers-and-promo-codes-with-serpapi-decodo-and-gpt-5-mini/) [Browse Azure AI Search Vector Store integration templates](https://n8n.io/integrations/azure-ai-search-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources - [Azure AI Search Vector Search documentation](https://learn.microsoft.com/azure/search/vector-search-overview) - [LangChain Azure AI Search integration](https://js.langchain.com/docs/integrations/vectorstores/azure_aisearch) - [Azure AI Search REST API reference](https://learn.microsoft.com/rest/api/searchservice/) - [OData filter syntax for Azure AI Search](https://learn.microsoft.com/azure/search/search-query-odata-filter) View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # Chroma Vector Store node Use the Chroma node to interact with your Chroma database as [vector store](../../../../../glossary/#ai-vector-store). 
You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect directly to an [agent](../../../../../glossary/#ai-agent) as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the Chroma node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/chroma/). ## Node usage patterns You can use the Chroma Vector Store node in the following patterns. ### Use as a regular node to insert and retrieve documents You can use the Chroma Vector Store as a regular node to insert or get documents. This pattern places the Chroma Vector Store in the regular connection flow without using an agent. ### Connect directly to an AI agent as a tool You can connect the Chroma Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Chroma Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Chroma Vector Store node to fetch documents from the Chroma Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An example of the connection flow would be as follows: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Chroma Vector Store. 
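The retriever pattern above is retrieval-augmented generation in miniature: fetch the stored documents that best match the chat input, then hand them to the chain as context for the model. Stripped of LangChain specifics, the flow looks roughly like this (toy sketch with hypothetical helpers; a naive word-overlap retriever stands in for vector search):

```python
def retrieve(query: str, documents: list[str], limit: int = 2) -> list[str]:
    """Toy retriever: rank documents by naive word overlap with the query.

    A real vector store ranks by embedding similarity instead.
    """
    query_words = set(query.lower().split())
    return sorted(
        documents,
        key=lambda doc: len(query_words & set(doc.lower().split())),
        reverse=True,
    )[:limit]

def build_prompt(query: str, context: list[str]) -> str:
    """Stuff the retrieved documents into the prompt the chain sends to the model."""
    joined = "\n".join(f"- {doc}" for doc in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "Refunds are processed within 14 days",
    "Our office is closed on public holidays",
    "Refunds require an order number",
]
context = retrieve("How do refunds work", docs)  # the two refund documents
prompt = build_prompt("How do refunds work", context)
```

In the real pattern, the Chroma Vector Store node plays the role of `retrieve`, and the Question and Answer Chain builds the prompt and calls the model.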
### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Chroma Vector Store node. Rather than connecting the Chroma Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The connections flow in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Chroma Vector store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. 
When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Chroma collection name**: Select your collection from the fetched collections list. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `5` to get the five best results. This Operation Mode includes one **Node option**, the Metadata Filter. ### Insert Documents parameters - **Chroma collection name**: Select your collection from the fetched collections list. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Chroma collection name**: Select your collection from the fetched collections list. This Operation Mode includes one **Node option**, the Metadata Filter. ### Retrieve Documents (As Tool for AI Agent) parameters - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Chroma collection name**: Select your collection from the fetched collections list. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `5` to get the five best results. ## Node options ### Metadata Filter Available in **Get Many** and **Retrieve Documents (As Vector Store for Chain/Tool)** modes. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader.
Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. ## Related resources Refer to [LangChain's Chroma documentation](https://js.langchain.com/oss/javascript/integrations/vectorstores/chroma) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Simple Vector Store node Use the Simple Vector Store node to store and retrieve [embeddings](../../../../../glossary/#ai-embedding) in n8n's in-app memory. On this page, you'll find the node parameters for the Simple Vector Store node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. This node is different from AI memory nodes The simple vector storage described here is different to the AI memory nodes such as [Simple Memory](../../sub-nodes/n8n-nodes-langchain.memorybufferwindow/). This node creates a [vector database](../../../../../glossary/#ai-vector-store) in the app memory. ## Data safety limitations Before using the Simple Vector Store node, it's important to understand its limitations and how it works. Warning n8n recommends using the Simple Vector Store node for development use only. ### Vector store data isn't persistent This node stores data in memory only. All data is lost when n8n restarts and may also be purged in low-memory conditions.
### All instance users can access vector store data Memory keys for the Simple Vector Store node are global, not scoped to individual workflows. This means that all users of the instance can access vector store data by adding a Simple Vector Store node and selecting the memory key, regardless of the access controls set for the original workflow. Take care not to expose sensitive information when ingesting data with the Simple Vector Store node. ## Node usage patterns You can use the Simple Vector Store node in the following patterns. ### Use as a regular node to insert and retrieve documents You can use the Simple Vector Store as a regular node to insert or get documents. This pattern places the Simple Vector Store in the regular connection flow without using an agent. You can see an example of this in step 2 of [this template](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/). ### Connect directly to an AI agent as a tool You can connect the Simple Vector Store node directly to the [tool](../../../../../glossary/#ai-tool) connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Simple Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Simple Vector Store node to fetch documents from the Simple Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Simple Vector Store.
### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Simple Vector Store node. Rather than connecting the Simple Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The [connections flow](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Simple Vector Store. ## Memory Management The Simple Vector Store implements memory management to prevent excessive memory usage: - Automatically cleans up old vector stores when memory pressure increases - Removes inactive stores that haven't been accessed for a configurable amount of time ### Configuration Options You can control memory usage with these environment variables:

| Variable | Type | Default | Description |
| ----------------------------- | ------ | ------- | ----------------------------------------------------------------------------------- |
| `N8N_VECTOR_STORE_MAX_MEMORY` | Number | -1 | Maximum memory in MB allowed for all vector stores combined (-1 to disable limits). |
| `N8N_VECTOR_STORE_TTL_HOURS` | Number | -1 | Hours of inactivity after which a store gets removed (-1 to disable TTL). |

On n8n Cloud, these values are preset to 100MB (about 8,000 documents, depending on document size and metadata) and 7 days respectively. For self-hosted instances, both values default to -1 (no memory limits or time-based cleanup). ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**.
The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Memory Key**: Select or create the key containing the vector memory you want to query. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Insert Documents parameters - **Memory Key**: Select or create the key you want to store the vector memory as. 
- **Clear Store**: Use this parameter to control whether to wipe the vector store for the given memory key for this workflow before inserting data (turned on). ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Memory Key**: Select or create the key containing the vector memory you want to query. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Memory Key**: Select or create the key containing the vector memory you want to query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **RAG Chatbot for Company Documents using Google Drive and Gemini** by Mihai Farcas [View template details](https://n8n.io/workflows/2753-rag-chatbot-for-company-documents-using-google-drive-and-gemini/) **🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant** by Joseph LePage [View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/) [Browse Simple Vector Store integration templates](https://n8n.io/integrations/in-memory-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Memory Vector Store documentation](https://js.langchain.com/docs/integrations/vectorstores/memory/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Milvus Vector Store node Use the Milvus node to interact with your Milvus database as [vector store](../../../../../glossary/#ai-vector-store).
You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect directly to an [agent](../../../../../glossary/#ai-agent) as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the Milvus node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/milvus/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node usage patterns You can use the Milvus Vector Store node in the following patterns. ### Use as a regular node to insert and retrieve documents You can use the Milvus Vector Store as a regular node to insert or get documents. This pattern places the Milvus Vector Store in the regular connection flow without using an agent. See this [example template](https://n8n.io/workflows/3573-create-a-rag-system-with-paul-essays-milvus-and-openai-for-cited-answers/) for how to build a system that stores documents in Milvus and retrieves them to support cited, chat-based answers. ### Connect directly to an AI agent as a tool You can connect the Milvus Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries.
Here, the connection would be: AI agent (tools connector) -> Milvus Vector Store node. See this [example template](https://n8n.io/workflows/3576-paul-graham-essay-search-and-chat-with-milvus-vector-database/) where data is embedded and indexed in Milvus, and the AI Agent uses the vector store as a knowledge tool for question-answering. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Milvus Vector Store node to fetch documents from the Milvus Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. A typical node connection flow looks like this: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Milvus Vector Store. Check out this [workflow example](https://n8n.io/workflows/3574-create-a-paul-graham-essay-qanda-system-with-openai-and-milvus-vector-database/) to see how to ingest external data into Milvus and build a chat-based semantic Q&A system. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Milvus Vector Store node. Rather than connecting the Milvus Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The connections flow would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Milvus Vector Store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**.
The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Milvus Collection**: Select or enter the Milvus Collection to use. - **Prompt**: Enter your search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Insert Documents parameters - **Milvus Collection**: Select or enter the Milvus Collection to use. 
- **Clear Collection**: Specify whether to clear the collection before inserting new documents. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Milvus collection**: Select or enter the Milvus Collection to use. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Milvus Collection**: Select or enter the Milvus Collection to use. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ## Node options ### Metadata Filter Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. ### Clear Collection Available in **Insert Documents** mode. Deletes all data from the collection before inserting the new data. ## Related resources Refer to [LangChain's Milvus documentation](https://js.langchain.com/docs/integrations/vectorstores/milvus/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # MongoDB Atlas Vector Store node MongoDB Atlas Vector Search is a feature of MongoDB Atlas that enables users to store and query vector embeddings. Use this node to interact with Vector Search indexes in your MongoDB Atlas collections. You can insert documents, retrieve documents, and use the vector store in chains or as a tool for agents. On this page, you'll find the node parameters for the MongoDB Atlas Vector Store node, and links to more resources. 
Credentials You can find authentication information for this node [here](../../../credentials/mongodb/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Prerequisites Before using this node, create a [Vector Search index](https://www.mongodb.com/docs/atlas/atlas-vector-search/vector-search-type/) in your MongoDB Atlas collection. Follow these steps to create one: 1. Log in to the [MongoDB Atlas dashboard](https://cloud.mongodb.com/). 1. Select your organization and project. 1. Find the "Search & Vector Search" section. 1. Select your cluster and click "Go to search". 1. Click "Create Search Index". 1. Choose "Vector Search" mode and use the visual or JSON editors. For example: ``` { "fields": [ { "type": "vector", "path": "", "numDimensions": 1536, "similarity": "" } ] } ``` 1. Adjust the `numDimensions` value according to your embedding model (for example, `1536` for OpenAI's `text-embedding-3-small`). 1. Name your index and create it. Make sure to note the following values, which are required when configuring the node: - Collection name - Vector index name - Field names for embeddings and metadata ## Node usage patterns You can use the MongoDB Atlas Vector Store node in the following patterns: ### Use as a regular node to insert and retrieve documents You can use the MongoDB Atlas Vector Store as a regular node to insert or get documents.
This pattern places the MongoDB Atlas Vector Store in the regular connection flow without using an agent. You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (the template uses the Supabase Vector Store, but the pattern is the same). ### Connect directly to an AI agent as a tool You can connect the MongoDB Atlas Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use the vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> MongoDB Atlas Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the MongoDB Atlas Vector Store node to fetch documents from the MongoDB Atlas Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> MongoDB Atlas Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the MongoDB Atlas Vector Store node. Rather than connecting the MongoDB Atlas Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. 
The [connection flow](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) (the linked example uses the In-Memory Vector Store, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> MongoDB Atlas Vector Store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries.
You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Mongo Collection**: Enter the name of the MongoDB collection to use. - **Vector Index Name**: Enter the name of the Vector Search index in your MongoDB Atlas collection. - **Embedding Field**: Enter the field name in your documents that contains the vector embeddings. - **Metadata Field**: Enter the field name in your documents that contains the text metadata. ### Insert Documents parameters - **Mongo Collection**: Enter the name of the MongoDB collection to use. - **Vector Index Name**: Enter the name of the Vector Search index in your MongoDB Atlas collection. - **Embedding Field**: Enter the field name in your documents that contains the vector embeddings. - **Metadata Field**: Enter the field name in your documents that contains the text metadata. ### Retrieve Documents parameters (As Vector Store for Chain/Tool) - **Mongo Collection**: Enter the name of the MongoDB collection to use. - **Vector Index Name**: Enter the name of the Vector Search index in your MongoDB Atlas collection. - **Embedding Field**: Enter the field name in your documents that contains the vector embeddings. - **Metadata Field**: Enter the field name in your documents that contains the text metadata. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Mongo Collection**: Enter the name of the MongoDB collection to use. - **Vector Index Name**: Enter the name of the Vector Search index in your MongoDB Atlas collection. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. 
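The index definition shown in the prerequisites can also be assembled programmatically from the same values the node's parameters expect. The following Python sketch is illustrative only: the function name and the example field name `embedding` are assumptions, not n8n or MongoDB defaults.

```python
# Illustrative sketch: build the Atlas Vector Search index definition from
# the values used when configuring the node. The field names below are
# example values, not defaults.
def vector_index_definition(embedding_field: str, num_dimensions: int,
                            similarity: str = "cosine") -> dict:
    return {
        "fields": [
            {
                "type": "vector",
                "path": embedding_field,          # must match the node's Embedding Field
                "numDimensions": num_dimensions,  # must match your embedding model
                "similarity": similarity,         # "euclidean", "cosine", or "dotProduct"
            }
        ]
    }

# For example, OpenAI's text-embedding-3-small produces 1536-dimension vectors:
definition = vector_index_definition("embedding", 1536)
```

If the dimensionality here doesn't match the embedding model connected to the node, similarity searches will fail, so derive both from the same source of truth.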
## Node options ### Options - **Metadata Filter**: Filters results based on metadata. ## Templates and examples **AI-Powered WhatsApp Chatbot for Text, Voice, Images, and PDF with RAG** by NovaNode [View template details](https://n8n.io/workflows/4827-ai-powered-whatsapp-chatbot-for-text-voice-images-and-pdf-with-rag/) **Build a Knowledge Base Chatbot with OpenAI, RAG and MongoDB Vector Embeddings** by NovaNode [View template details](https://n8n.io/workflows/4526-build-a-knowledge-base-chatbot-with-openai-rag-and-mongodb-vector-embeddings/) **Build a Chatbot with Reinforced Learning Human Feedback (RLHF) and RAG** by NovaNode [View template details](https://n8n.io/workflows/4689-build-a-chatbot-with-reinforced-learning-human-feedback-rlhf-and-rag/) [Browse MongoDB Atlas Vector Store integration templates](https://n8n.io/integrations/mongodb-atlas-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to: - [LangChain's MongoDB Atlas Vector Search documentation](https://js.langchain.com/docs/integrations/vectorstores/mongodb_atlas) for more information about the service. - [MongoDB Atlas Vector Search documentation](https://www.mongodb.com/docs/atlas/atlas-vector-search/) for more information about MongoDB Atlas Vector Search. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # PGVector Vector Store node PGVector is an extension of PostgreSQL. Use this node to interact with the PGVector tables in your PostgreSQL database.
You can insert documents into a vector table, get documents from a vector table, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect directly to an [agent](../../../../../glossary/#ai-agent) as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the PGVector node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/postgres/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node usage patterns You can use the PGVector Vector Store node in the following patterns. ### Use as a regular node to insert and retrieve documents You can use the PGVector Vector Store as a regular node to insert or get documents. This pattern places the PGVector Vector Store in the regular connection flow without using an agent. You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (the template uses the Supabase Vector Store, but the pattern is the same). ### Connect directly to an AI agent as a tool You can connect the PGVector Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. 
Here, the connection would be: AI agent (tools connector) -> PGVector Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the PGVector Vector Store node to fetch documents from the PGVector Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> PGVector Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the PGVector Vector Store node. Rather than connecting the PGVector Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The [connection flow](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) (the linked example uses the Simple Vector Store, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> PGVector Vector Store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt.
The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Table name**: Enter the name of the table you want to query. - **Prompt**: Enter your search query. - **Limit**: Enter a number to set how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Insert Documents parameters - **Table name**: Enter the name of the table you want to query. ### Retrieve Documents parameters (As Vector Store for Chain/Tool) - **Table name**: Enter the name of the table you want to query. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. 
- **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Table Name**: Enter the PGVector table to use. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ## Node options ### Collection A way to separate datasets in PGVector. This creates a separate table and column to keep track of which collection a vector belongs to. - **Use Collection**: Select whether to use a collection (turned on) or not (turned off). - **Collection Name**: Enter the name of the collection you want to use. - **Collection Table Name**: Enter the name of the table to store collection information in. ### Column Names The following options specify the names of the columns to store the vectors and corresponding information in: - **ID Column Name** - **Vector Column Name** - **Content Column Name** - **Metadata Column Name** ### Metadata Filter Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. 
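The Metadata Filter's `AND` semantics can be sketched in a few lines of Python. This is an illustrative model of the behavior described above, not how n8n or PGVector evaluates the filter internally.

```python
# Illustrative sketch of the Metadata Filter's AND semantics: a document
# matches only if every filter field equals the corresponding metadata value.
def matches(metadata: dict, filters: dict) -> bool:
    return all(metadata.get(key) == value for key, value in filters.items())

docs = [
    {"content": "Q3 report", "metadata": {"source": "reports", "year": 2024}},
    {"content": "Q3 memo",   "metadata": {"source": "memos",   "year": 2024}},
]

# Both filter fields must match, so only the first document survives.
hits = [d for d in docs if matches(d["metadata"], {"source": "reports", "year": 2024})]
```

Changing either filter field to a non-matching value excludes a document entirely; there is no `OR` option across metadata filter fields.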
## Templates and examples **HR & IT Helpdesk Chatbot with Audio Transcription** by Felipe Braga [View template details](https://n8n.io/workflows/2752-hr-and-it-helpdesk-chatbot-with-audio-transcription/) **Explore n8n Nodes in a Visual Reference Library** by I versus AI [View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/) **Multi-Platform AI Sales Agent with RAG, CRM Logging & Appointment Booking** by Vansh Arora [View template details](https://n8n.io/workflows/4508-multi-platform-ai-sales-agent-with-rag-crm-logging-and-appointment-booking/) [Browse PGVector Vector Store integration templates](https://n8n.io/integrations/postgres-pgvector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's PGVector documentation](https://js.langchain.com/docs/integrations/vectorstores/pgvector) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # Pinecone Vector Store node Use the Pinecone node to interact with your Pinecone database as a [vector store](../../../../../glossary/#ai-vector-store). You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect directly to an [agent](../../../../../glossary/#ai-agent) as a [tool](../../../../../glossary/#ai-tool). You can also update an item in a vector database by its ID. On this page, you'll find the node parameters for the Pinecone node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/pinecone/).
Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node usage patterns You can use the Pinecone Vector Store node in the following patterns. ### Use as a regular node to insert, update, and retrieve documents You can use the Pinecone Vector Store as a regular node to insert, update, or get documents. This pattern places the Pinecone Vector Store in the regular connection flow without using an agent. You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/). ### Connect directly to an AI agent as a tool You can connect the Pinecone Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Pinecone Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Pinecone Vector Store node to fetch documents from the Pinecone Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. 
An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Pinecone Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Pinecone Vector Store node. Rather than connecting the Pinecone Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The [connection flow](https://n8n.io/workflows/2705-chat-with-github-api-documentation-rag-powered-chatbot-with-pinecone-and-openai/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Pinecone Vector Store. ## Node parameters ### Operation Mode This Vector Store node has five modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, **Retrieve Documents (As Tool for AI Agent)**, and **Update Documents**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use Insert Documents mode to insert new documents into your vector database.
#### Retrieve Documents (As Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (As Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. #### Update Documents Use Update Documents mode to update documents in a vector database by ID. Fill in the **ID** with the ID of the embedding entry to update. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Pinecone Index**: Select or enter the Pinecone Index to use. - **Prompt**: Enter your search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Insert Documents parameters - **Pinecone Index**: Select or enter the Pinecone Index to use. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Pinecone Index**: Select or enter the Pinecone Index to use. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Pinecone Index**: Select or enter the Pinecone Index to use. 
- **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Update Documents parameters - **ID**: Enter the ID of the embedding entry to update. ## Node options ### Pinecone Namespace An additional way to segregate your data within the index. ### Metadata Filter Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. ### Clear Namespace Available in **Insert Documents** mode. Deletes all data from the namespace before inserting the new data. ## Templates and examples **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) **RAG Chatbot for Company Documents using Google Drive and Gemini** by Mihai Farcas [View template details](https://n8n.io/workflows/2753-rag-chatbot-for-company-documents-using-google-drive-and-gemini/) [Browse Pinecone Vector Store integration templates](https://n8n.io/integrations/pinecone-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Pinecone documentation](https://js.langchain.com/docs/integrations/vectorstores/pinecone/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ### Find your Pinecone index and namespace Your Pinecone index and namespace are available in your Pinecone account.
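The namespace and Clear Namespace behavior described above can be modeled in a few lines of Python. This is an illustrative sketch of the semantics, not Pinecone's API: vectors live in per-namespace maps keyed by ID, and clearing affects only the targeted namespace.

```python
# Illustrative model of Pinecone namespaces: each namespace is an isolated
# {id: vector} map inside the same index, and Clear Namespace deletes only
# that namespace's data before inserting. Not Pinecone's actual API.
from collections import defaultdict

index = defaultdict(dict)  # namespace -> {id: vector}

def insert(namespace, items, clear_namespace=False):
    if clear_namespace:
        index[namespace].clear()  # other namespaces are untouched
    index[namespace].update(items)  # inserting an existing ID overwrites it

insert("docs-a", {"1": [0.1, 0.2]})
insert("docs-b", {"1": [0.9, 0.8]})  # same ID, but a separate namespace
insert("docs-a", {"2": [0.3, 0.4]}, clear_namespace=True)  # "1" removed from docs-a only
```

The same keyed-overwrite behavior is what Update Documents mode relies on: writing to an existing ID replaces that entry rather than adding a duplicate.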
# Qdrant Vector Store node Use the Qdrant node to interact with your Qdrant collection as a [vector store](../../../../../glossary/#ai-vector-store). You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain) or connect it directly to an [agent](../../../../../glossary/#ai-agent) to use as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the Qdrant node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/qdrant/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node usage patterns You can use the Qdrant Vector Store node in the following patterns. ### Use as a regular node to insert and retrieve documents You can use the Qdrant Vector Store as a regular node to insert or get documents. This pattern places the Qdrant Vector Store in the regular connection flow without using an agent. You can see an example of this in the first part of [this template](https://n8n.io/workflows/2440-building-rag-chatbot-for-movie-recommendations-with-qdrant-and-open-ai/). 
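The sub-node parameter-resolution behavior noted above (expressions in sub-nodes always resolve against the first input item) can be sketched as follows. This is an illustrative Python model, not n8n's implementation.

```python
# Illustrative sketch (not n8n's implementation) of how an expression such as
# {{ $json.name }} resolves against a list of input items.
items = [{"name": n} for n in ["Ada", "Grace", "Edsger", "Barbara", "Donald"]]

def resolve_in_regular_node(key, items):
    # Regular nodes resolve the expression once per input item.
    return [item[key] for item in items]

def resolve_in_sub_node(key, items):
    # Sub-nodes always resolve the expression against the first item only.
    return items[0][key]

resolve_in_regular_node("name", items)  # one value per item
resolve_in_sub_node("name", items)      # always the first name, "Ada"
```

This is why a prompt built from an expression inside a sub-node sees only the first item of a multi-item input.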
### Connect directly to an AI agent as a tool You can connect the Qdrant Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Qdrant Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Qdrant Vector Store node to fetch documents from the Qdrant Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/2183-ai-crew-to-automate-fundamental-stock-analysis-qanda-workflow/) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Qdrant Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Qdrant Vector Store node. Rather than connecting the Qdrant Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The [connections flow](https://n8n.io/workflows/2464-scale-deal-flow-with-a-pitch-deck-ai-vision-chatbot-and-qdrant-vector-store/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Qdrant Vector Store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**.
The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Get Many parameters - **Qdrant collection name**: Enter the name of the Qdrant collection to use. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. This Operation Mode includes one **Node option**, the [Metadata Filter](#metadata-filter). 
### Insert Documents parameters - **Qdrant collection name**: Enter the name of the Qdrant collection to use. This Operation Mode includes one **Node option**: - **Collection Config**: Enter a JSON configuration to use when creating the Qdrant collection. Refer to the Qdrant [Collections](https://qdrant.tech/documentation/concepts/collections/) documentation for more information. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Qdrant Collection**: Enter the name of the Qdrant collection to use. This Operation Mode includes one **Node option**, the [Metadata Filter](#metadata-filter). ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Qdrant Collection**: Enter the name of the Qdrant collection to use. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ## Node options ### Metadata Filter Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents.
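As a sketch of what the **Collection Config** option might contain, based on the linked Qdrant Collections documentation: a minimal collection configuration sets the vector size and distance metric. The `size` must match your embedding model's output dimension; `1536` and `Cosine` are illustrative values, not requirements.

```
{
  "vectors": {
    "size": 1536,
    "distance": "Cosine"
  }
}
```

If you leave Collection Config empty and the collection doesn't exist yet, check the Qdrant documentation for the defaults that apply.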
## Templates and examples **🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant** by Joseph LePage [View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/) **AI Voice Chatbot with ElevenLabs & OpenAI for Customer Service and Restaurants** by Davide Boizza [View template details](https://n8n.io/workflows/2846-ai-voice-chatbot-with-elevenlabs-and-openai-for-customer-service-and-restaurants/) **Complete business WhatsApp AI-Powered RAG Chatbot using OpenAI** by Davide Boizza [View template details](https://n8n.io/workflows/2845-complete-business-whatsapp-ai-powered-rag-chatbot-using-openai/) [Browse Qdrant Vector Store integration templates](https://n8n.io/integrations/qdrant-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Qdrant documentation](https://js.langchain.com/docs/integrations/vectorstores/qdrant) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # Redis Vector Store node Use the Redis Vector Store node to interact with your Redis database as a [vector store](../../../../../glossary/#ai-vector-store). You can insert documents into the vector database, get documents from the vector database, retrieve documents using a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect it directly to an [agent](../../../../../glossary/#ai-agent) to use as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the Redis Vector Store node, and links to more resources. 
Credentials You can find authentication information for this node [here](../../../credentials/redis/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Prerequisites Before using this node, you need a Redis database with the [Redis Query Engine](https://redis.io/docs/latest/develop/ai/search-and-query/?utm_source=n8n&utm_medium=docs) enabled. Use one of the following: - **Redis Open Source (v8.0 and later)** : includes the Redis Query Engine by default - **[Redis Cloud](https://cloud.redis.io/?utm_source=n8n&utm_medium=docs)** : fully managed service - **[Redis Software](https://redis.io/software/?utm_source=n8n&utm_medium=docs)** : self-managed deployment A new index will be created if you don't have one. Creating your own indices in advance is only necessary if you want to use a custom index schema or reuse an existing index. Otherwise, you can skip this step and let the node create a new index for you based on the options you specify. ## Node usage patterns You can use the Redis Vector Store node in the following patterns: ### Use as a regular node to insert and retrieve documents You can use the Redis Vector Store as a regular node to insert or get documents. This pattern places the Redis Vector Store in the regular connection flow without using an agent. 
You can see an example in [this template](https://n8n.io/workflows/10887-reduce-llm-costs-with-semantic-caching-using-redis-vector-store-and-huggingface/) where the semantic cache is stored in Redis and retrieved using the Redis Vector Store node at the start of the workflow. ### Connect directly to an AI agent as a tool You can connect the Redis Vector Store node directly to the [tool](../../../../../glossary/#ai-tool) connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Redis Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Redis Vector Store node to fetch documents from the Redis Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the linked example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Redis Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Redis Vector Store node. Rather than connecting the Redis Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. This [template](https://n8n.io/workflows/10837-chat-with-github-issues-using-openai-and-redis-vector-search/) shows how to use the Vector Store Question Answer Tool with the Redis Vector Store node.
The connections flow in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Redis Vector store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. 
### Get Many parameters - **Redis Index**: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. This Operation Mode includes one **Node option**, the [Metadata Filter](#metadata-filter). ### Insert Documents parameters - **Redis Index**: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Redis Index**: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list. This Operation Mode includes one **Node option**, the [Metadata Filter](#metadata-filter). ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Redis Index**: Enter the name of the Redis vector search index to use. Optionally choose an existing one from the list. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Include Metadata Whether to include document metadata. You can use this with the [Get Many](#get-many-parameters) and [Retrieve Documents (As Tool for AI Agent)](#retrieve-documents-as-tool-for-ai-agent-parameters) modes. ## Node options ### Metadata Filter Metadata filters are available for the [Get Many](#get-many-parameters), [Retrieve Documents (As Vector Store for Chain/Tool)](#retrieve-documents-as-vector-store-for-chaintool-parameters), and [Retrieve Documents (As Tool for AI Agent)](#retrieve-documents-as-tool-for-ai-agent-parameters) operation modes. This is an `OR` query. 
If you specify more than one metadata filter field, at least one of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. ### Redis Configuration Options Available for all operation modes: - **Metadata Key**: Enter the key for the metadata field in the Redis hash (default: `metadata`). - **Key Prefix**: Enter the key prefix for storing documents (default: `doc:`). - **Content Key**: Enter the key for the content field in the Redis hash (default: `content`). - **Embedding Key**: Enter the key for the embedding field in the Redis hash (default: `embedding`). ### Insert Options Available for the [Insert Documents](#insert-documents-parameters) operation mode: - **Overwrite Documents**: Select whether to overwrite existing documents (turned on) or not (turned off). Also deletes the index. - **Time-to-Live**: Enter the time-to-live for documents in seconds. Does not expire the index. 
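Taken together, the Redis configuration options above mean each inserted document becomes a Redis hash whose key starts with the configured prefix, with the content, metadata, and embedding stored under the configured field names. A rough sketch of one stored entry using the defaults (the values are illustrative, and the embedding is stored as an encoded vector rather than readable text):

```
{
  "key": "doc:1",
  "content": "Text of the embedded document chunk",
  "metadata": "{\"source\": \"handbook.pdf\"}",
  "embedding": "<encoded vector>"
}
```

Changing **Key Prefix** or the key names only affects how n8n reads and writes these hashes; if you point the node at an existing index, make sure these options match how that index was created.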
## Templates and examples **Explore n8n Nodes in a Visual Reference Library** by I versus AI [View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/) **🐶 AI Agent for PetShop Appointments (Agente de IA para agendamentos de PetShop)** by Bruno Dias [View template details](https://n8n.io/workflows/2999-ai-agent-for-petshop-appointments-agente-de-ia-para-agendamentos-de-petshop/) **🤖 AI-Powered WhatsApp Assistant for Restaurants & Delivery Automation** by Bruno Dias [View template details](https://n8n.io/workflows/3043-ai-powered-whatsapp-assistant-for-restaurants-and-delivery-automation/) [Browse Redis Vector Store integration templates](https://n8n.io/integrations/redis-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to: - [Redis Vector Search documentation](https://redis.io/docs/latest/develop/ai/search-and-query/vectors/) for more information about Redis vector capabilities. - [RediSearch documentation](https://redis.io/docs/latest/develop/interact/search-and-query/) for more information about RediSearch. - [LangChain's Redis Vector Store documentation](https://js.langchain.com/docs/integrations/vectorstores/redis) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # Supabase Vector Store node
Use the Supabase Vector Store to interact with your Supabase database as a [vector store](../../../../../glossary/#ai-vector-store). You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect it directly to an [agent](../../../../../glossary/#ai-agent) to use as a [tool](../../../../../glossary/#ai-tool). You can also update an item in a vector store by its ID. On this page, you'll find the node parameters for the Supabase node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/supabase/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. Supabase provides a [quickstart for setting up your vector store](https://supabase.com/docs/guides/ai/langchain?database-method=sql). If you use settings other than the defaults in the quickstart, this may affect parameter settings in n8n. Make sure you understand what you're doing. ## Node usage patterns You can use the Supabase Vector Store node in the following patterns. ### Use as a regular node to insert, update, and retrieve documents You can use the Supabase Vector Store as a regular node to insert, update, or get documents.
This pattern places the Supabase Vector Store in the regular connection flow without using an agent. You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/). ### Connect directly to an AI agent as a tool You can connect the Supabase Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Supabase Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Supabase Vector Store node to fetch documents from the Supabase Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Supabase Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Supabase Vector Store node. Rather than connecting the Supabase Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store. The [connections flow](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Supabase Vector Store.
## Node parameters ### Operation Mode This Vector Store node has five modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, **Retrieve Documents (As Tool for AI Agent)**, and **Update Documents**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt will be embedded and used for similarity search. The node will return the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use Insert Documents mode to insert new documents into your vector database. #### Retrieve Documents (As Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (As Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. #### Update Documents Use Update Documents mode to update documents in a vector database by ID. Fill in the **ID** with the ID of the embedding entry to update. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. 
### Get Many parameters - **Table Name**: Enter the Supabase table to use. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Insert Documents parameters - **Table Name**: Enter the Supabase table to use. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Table Name**: Enter the Supabase table to use. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Table Name**: Enter the Supabase table to use. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Update Documents parameters - **Table Name**: Enter the Supabase table to use. - **ID**: Enter the ID of the embedding entry to update. ## Node options ### Query Name The name of the matching function you set up in Supabase. If you follow the [Supabase quickstart](https://supabase.com/docs/guides/ai/langchain?database-method=sql), this will be `match_documents`. ### Metadata Filter Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents.
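As a sketch of the Metadata Filter's `AND` semantics: a filter with two metadata fields (the field names and values here are illustrative) only returns documents whose stored metadata matches both values. Expressed as key-value pairs, the filter below matches a document with `source` equal to `handbook.pdf` and `author` equal to `Jane Doe`, and excludes a document that matches only one of the two.

```
{
  "source": "handbook.pdf",
  "author": "Jane Doe"
}
```

To broaden a search, remove filter fields rather than adding them, since each additional field further restricts the result set.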
## Templates and examples **AI Agent To Chat With Files In Supabase Storage** by Mark Shcherbakov [View template details](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) **Automate sales cold calling pipeline with Apify, GPT-4o, and WhatsApp** by Khairul Muhtadin [View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/) **🤖 Create a Documentation Expert Bot with RAG, Gemini, and Supabase** by Lucas Peyrin [View template details](https://n8n.io/workflows/5993-create-a-documentation-expert-bot-with-rag-gemini-and-supabase/) [Browse Supabase Vector Store integration templates](https://n8n.io/integrations/supabase-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Supabase documentation](https://js.langchain.com/docs/integrations/vectorstores/supabase/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Weaviate Vector Store node Use the Weaviate node to interact with your Weaviate collection as a [vector store](../../../../../glossary/#ai-vector-store). You can insert documents into or retrieve documents from a vector database. You can also retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain) or connect this node directly to an [agent](../../../../../glossary/#ai-agent) to use as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the Weaviate node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/weaviate/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. 
You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node usage patterns You can use the Weaviate Vector Store node in the following patterns. ### Use as a regular node to insert and retrieve documents You can use the Weaviate Vector Store as a regular node to insert or get documents. This pattern places the Weaviate Vector Store in the regular connection flow without using an agent. ### Connect directly to an AI agent as a tool You can connect the Weaviate Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Weaviate Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Weaviate Vector Store node to fetch documents from the Weaviate Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Weaviate Vector Store node. Rather than connecting the Weaviate Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.
## Node parameters Multitenancy You can separate your data into isolated tenants for the same collection (for example, for different customers). For that, you must always provide a [Tenant Name](#tenant-name) both when inserting and retrieving objects. [Read more about multi-tenancy in Weaviate docs](https://docs.weaviate.io/weaviate/manage-collections/multi-tenancy). ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Get Many parameters - **Weaviate Collection**: Enter the name of the Weaviate collection to use. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store.
For example, set this to `10` to get the ten best results. ### Insert Documents parameters - **Weaviate Collection**: Enter the name of the Weaviate collection to use. - **Embedding Batch Size**: The number of documents to embed in a single batch. The default is 200 documents. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Weaviate Collection**: Enter the name of the Weaviate collection to use. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Weaviate Collection**: Enter the name of the Weaviate collection to use. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Include Metadata Whether to include document metadata. You can use this with the [Get Many](#get-many-parameters) and [Retrieve Documents (As Tool for AI Agent)](#retrieve-documents-as-tool-for-ai-agent-parameters) modes. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries. You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ## Node options ### Search Filters Available for the [Get Many](#get-many-parameters), [Retrieve Documents (As Vector Store for Chain/Tool)](#retrieve-documents-as-vector-store-for-chaintool-parameters), and [Retrieve Documents (As Tool for AI Agent)](#retrieve-documents-as-tool-for-ai-agent-parameters) operation modes. When searching for data, use this to match metadata associated with documents. You can learn more about the operators and query structure in [Weaviate's conditional filters documentation](https://docs.weaviate.io/weaviate/api/graphql/filters).
You can use both `AND` and `OR` with different operators. Operators are case insensitive:

```
{
  "OR": [
    { "path": ["source"], "operator": "Equal", "valueString": "source1" },
    { "path": ["source"], "operator": "Equal", "valueString": "source2" }
  ]
}
```

Supported operators:

| Operator | Required Field(s) | Description |
| --- | --- | --- |
| `'equal'` | `valueString` or `valueNumber` | Checks if the property is equal to the given string or number. |
| `'like'` | `valueString` | Checks if the string property matches a pattern (for example, sub-string match). |
| `'containsAny'` | `valueTextArray` (string[]) | Checks if the property contains **any** of the given values. |
| `'containsAll'` | `valueTextArray` (string[]) | Checks if the property contains **all** of the given values. |
| `'greaterThan'` | `valueNumber` | Checks if the property value is greater than the given number. |
| `'lessThan'` | `valueNumber` | Checks if the property value is less than the given number. |
| `'isNull'` | `valueBoolean` (true/false) | Checks if the property is null or not. ([must enable before ingestion](https://docs.weaviate.io/weaviate/manage-collections/collection-operations#set-inverted-index-parameters)) |
| `'withinGeoRange'` | `valueGeoCoordinates` (object with geolocation data) | Filters by proximity to geographic coordinates. |

When inserting data, the document loader sets the metadata. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. ### Metadata Keys You can define which metadata keys you want Weaviate to return in your queries. This can reduce network load, as you will only get properties you have defined.
Returns all properties from the server by default. Available for the [Get Many](#get-many), [Retrieve Documents (As Vector Store for Chain/Tool)](#retrieve-documents-as-vector-store-for-chaintool), and [Retrieve Documents (As Tool for AI Agent)](#retrieve-documents-as-tool-for-ai-agent) operation modes. ### Hybrid: Query Text Provide a query text to combine vector search with a keyword/text search. ### Hybrid: Explain Score Whether to include an explanation of how the final score was fused from the keyword and vector search components. ### Hybrid: Fusion Type Select the fusion type for combining vector and keyword search results. [Learn more about fusion algorithms](https://weaviate.io/learn/knowledgecards/fusion-algorithm). Options: - **Relative Score**: Uses relative score fusion - **Ranked**: Uses ranked fusion ### Hybrid: Auto Cut Limit Limit result groups by detecting sudden jumps in score. [Learn more about autocut](https://docs.weaviate.io/weaviate/api/graphql/additional-operators#autocut). ### Hybrid: Alpha Change the relative weights of the keyword and vector components. 1.0 = pure vector, 0.0 = pure keyword. Default is 0.5. [Learn more about the alpha parameter](https://weaviate.io/learn/knowledgecards/alpha-parameter). ### Hybrid: Query Properties Comma-separated list of properties to include in the query with optionally weighted values, e.g., "question^2,answer". [Learn more about setting weights on property values](https://docs.weaviate.io/weaviate/search/hybrid#set-weights-on-property-values). ### Hybrid: Max Vector Distance Set the maximum allowable distance for the vector search component. ### Tenant Name The specific tenant to store or retrieve documents for. [Learn more about multi-tenancy](https://weaviate.io/learn/knowledgecards/multi-tenancy). Must enable at creation You must pass a tenant name at first ingestion to enable multitenancy for a collection. You can't enable or disable multitenancy after creation. ### Text Key The key in the document that contains the embedded text.
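The **Hybrid: Alpha** option described above blends the two score components like a weighted average. This is a simplified sketch of the idea, not Weaviate's actual fusion implementation:

```python
def hybrid_score(alpha: float, vector_score: float, keyword_score: float) -> float:
    """Blend the vector and keyword components of a hybrid search score.

    alpha=1.0 weights the vector component only; alpha=0.0 weights the
    keyword component only; 0.5 (the default) weights them equally.
    """
    return alpha * vector_score + (1 - alpha) * keyword_score
```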
### Skip Init Checks Whether to [skip initialization checks](https://docs.weaviate.io/weaviate/client-libraries/typescript/notes-best-practices#initial-connection-checks) when instantiating the client. ### Init Timeout Number of seconds to wait before [timing out](https://docs.weaviate.io/weaviate/client-libraries/typescript/notes-best-practices#timeout-values) during initial checks. ### Insert Timeout Number of seconds to wait before [timing out](https://docs.weaviate.io/weaviate/client-libraries/typescript/notes-best-practices#timeout-values) during inserts. ### Query Timeout Number of seconds to wait before [timing out](https://docs.weaviate.io/weaviate/client-libraries/typescript/notes-best-practices#timeout-values) during queries. ### GRPC Proxy A proxy to use for gRPC requests. ### Clear Data Available for the [Insert Documents](#insert-documents) operation mode. Whether to clear the collection or tenant before inserting new data. ## Templates and examples **Build a Weekly AI Trend Alerter with arXiv and Weaviate** by Mary Newhauser [View template details](https://n8n.io/workflows/5817-build-a-weekly-ai-trend-alerter-with-arxiv-and-weaviate/) **Build person OSINT profiles using Humantic AI, Hunter, CourtListener and GPT-5** by Open Paws [View template details](https://n8n.io/workflows/12507-build-person-osint-profiles-using-humantic-ai-hunter-courtlistener-and-gpt-5/) **Research organizations with GPT‑5, Gemini, CourtListener, LegiScan and OSINT web sources** by Open Paws [View template details](https://n8n.io/workflows/12506-research-organizations-with-gpt5-gemini-courtlistener-legiscan-and-osint-web-sources/) [Browse Weaviate Vector Store integration templates](https://n8n.io/integrations/weaviate-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Weaviate documentation](https://js.langchain.com/docs/integrations/vectorstores/weaviate/) for more information about the service. 
Refer to [Weaviate Installation](https://docs.weaviate.io/deploy) for instructions on running a self-hosted Weaviate cluster. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Zep Vector Store node Deprecated This node is deprecated and will be removed in a future version. Use the Zep Vector Store to interact with Zep vector databases. You can insert documents into a vector database, get documents from a vector database, retrieve documents to provide them to a retriever connected to a [chain](../../../../../glossary/#ai-chain), or connect it directly to an [agent](../../../../../glossary/#ai-agent) to use as a [tool](../../../../../glossary/#ai-tool). On this page, you'll find the node parameters for the Zep Vector Store node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/zep/). Examples and templates For usage examples and templates to help you get started, refer to n8n's [Zep Vector Store integrations](https://n8n.io/integrations/zep-vector-store/) page. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node usage patterns You can use the Zep Vector Store node in the following patterns. ### Use as a regular node to insert, update, and retrieve documents You can use the Zep Vector Store as a regular node to insert or get documents.
This pattern places the Zep Vector Store in the regular connection flow without using an agent. You can see an example of this in scenario 1 of [this template](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (the example uses Supabase, but the pattern is the same). ### Connect directly to an AI agent as a tool You can connect the Zep Vector Store node directly to the tool connector of an [AI agent](../n8n-nodes-langchain.agent/) to use a vector store as a resource when answering queries. Here, the connection would be: AI agent (tools connector) -> Zep Vector Store node. ### Use a retriever to fetch documents You can use the [Vector Store Retriever](../../sub-nodes/n8n-nodes-langchain.retrievervectorstore/) node with the Zep Vector Store node to fetch documents from the Zep Vector Store node. This is often used with the [Question and Answer Chain](../n8n-nodes-langchain.chainretrievalqa/) node to fetch documents from the vector store that match the given chat input. An [example of the connection flow](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) (the example uses Pinecone, but the pattern is the same) would be: Question and Answer Chain (Retriever connector) -> Vector Store Retriever (Vector Store connector) -> Zep Vector Store. ### Use the Vector Store Question Answer Tool to answer questions Another pattern uses the [Vector Store Question Answer Tool](../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) to summarize results and answer questions from the Zep Vector Store node. Rather than connecting the Zep Vector Store directly as a tool, this pattern uses a tool specifically designed to summarize data in the vector store.
The [connection flow](https://n8n.io/workflows/2621-ai-agent-to-chat-with-files-in-supabase-storage/) (this example uses Supabase, but the pattern is the same) in this case would look like this: AI agent (tools connector) -> Vector Store Question Answer Tool (Vector Store connector) -> Zep Vector Store. ## Node parameters ### Operation Mode This Vector Store node has four modes: **Get Many**, **Insert Documents**, **Retrieve Documents (As Vector Store for Chain/Tool)**, and **Retrieve Documents (As Tool for AI Agent)**. The mode you select determines the operations you can perform with the node and what inputs and outputs are available. #### Get Many In this mode, you can retrieve multiple documents from your vector database by providing a prompt. The prompt is embedded and used for similarity search. The node returns the documents that are most similar to the prompt with their similarity score. This is useful if you want to retrieve a list of similar documents and pass them to an agent as additional context. #### Insert Documents Use insert documents mode to insert new documents into your vector database. #### Retrieve Documents (as Vector Store for Chain/Tool) Use Retrieve Documents (As Vector Store for Chain/Tool) mode with a vector-store retriever to retrieve documents from a vector database and provide them to the retriever connected to a chain. In this mode you must connect the node to a retriever node or root node. #### Retrieve Documents (as Tool for AI Agent) Use Retrieve Documents (As Tool for AI Agent) mode to use the vector store as a tool resource when answering queries. When formulating responses, the agent uses the vector store when the vector store name and description match the question details. ### Rerank Results Enables [reranking](../../../../../glossary/#ai-reranking). If you enable this option, you must connect a reranking node to the vector store. That node will then rerank the results for queries.
You can use this option with the `Get Many`, `Retrieve Documents (As Vector Store for Chain/Tool)` and `Retrieve Documents (As Tool for AI Agent)` modes. ### Insert Documents parameters - **Collection Name**: Enter the collection name to store the data in. ### Get Many parameters - **Collection Name**: Enter the collection name to retrieve the data from. - **Prompt**: Enter the search query. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ### Retrieve Documents (As Vector Store for Chain/Tool) parameters - **Collection Name**: Enter the collection name to retrieve the data from. ### Retrieve Documents (As Tool for AI Agent) parameters - **Name**: The name of the vector store. - **Description**: Explain to the LLM what this tool does. A good, specific description allows LLMs to produce expected results more often. - **Collection Name**: Enter the collection name to retrieve the data from. - **Limit**: Enter how many results to retrieve from the vector store. For example, set this to `10` to get the ten best results. ## Node options ### Embedding Dimensions Must be the same when embedding the data and when querying it. This sets the size of the array of floats used to represent the semantic meaning of a text document. ### Is Auto Embedded Available in the **Insert Documents** Operation Mode, enabled by default. Disable this to configure your embeddings in Zep instead of in n8n. ### Metadata Filter Available in **Get Many** mode. When searching for data, use this to match with metadata associated with the document. This is an `AND` query. If you specify more than one metadata filter field, all of them must match. When inserting data, the metadata is set using the document loader. Refer to [Default Data Loader](../../sub-nodes/n8n-nodes-langchain.documentdefaultdataloader/) for more information on loading documents. 
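The `AND` semantics described above can be sketched as follows. This is an illustration of the matching behavior, not n8n's actual implementation:

```python
def matches_metadata_filter(metadata: dict, filters: dict) -> bool:
    """AND semantics: every filter field must match the document's metadata."""
    return all(metadata.get(key) == value for key, value in filters.items())
```

A document with metadata `{"source": "faq", "year": 2024}` matches the filter `{"source": "faq"}`, but not `{"source": "faq", "year": 2023}`, because every field must match.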
## Templates and examples [Browse Zep Vector Store integration templates](https://n8n.io/integrations/zep-vector-store/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Zep documentation](https://js.langchain.com/docs/integrations/vectorstores/zep/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # AI Agent node An [AI agent](../../../../../glossary/#ai-agent) is an autonomous system that receives data, makes rational decisions, and acts within its environment to achieve specific goals. The AI agent's environment is everything the agent can access that isn't the agent itself. This agent uses external [tools](../../../../../glossary/#ai-tool) and APIs to perform actions and retrieve information. It can understand the capabilities of different tools and determine which tool to use depending on the task. Connect a tool You must connect at least one tool [sub-node](../../sub-nodes/) to an AI Agent node. Agent type Prior to version 1.82.0, the AI Agent had a setting for working as different agent types. This has now been removed, and all AI Agent nodes work as a `Tools Agent`, which was the recommended and most frequently used setting. If you're working with older versions of the AI Agent in workflows or templates, as long as they were set to 'Tools Agent', they should continue to behave as intended with the updated node.
## Templates and examples **AI agent chat** by n8n Team [View template details](https://n8n.io/workflows/1954-ai-agent-chat/) **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Angie, personal AI assistant with Telegram voice and text** by Derek Cheung [View template details](https://n8n.io/workflows/2462-angie-personal-ai-assistant-with-telegram-voice-and-text/) [Browse AI Agent integration templates](https://n8n.io/integrations/agent/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on agents](https://js.langchain.com/docs/concepts/agents/) for more information about the service. New to AI Agents? Read the [n8n blog introduction to AI agents](https://blog.n8n.io/ai-agents/). View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). # AI Agent node common issues Here are some common errors and issues with the [AI Agent node](../) and steps to resolve or troubleshoot them. ## Internal error: 400 Invalid value for 'content' A full error message might look like this: ``` Internal error Error: 400 Invalid value for 'content': expected a string, got null. ``` This error can occur if the **Prompt** input contains a null value. You might see this in one of two scenarios: 1. When you've set the **Prompt** to **Define below** and have an expression in your **Text** that isn't generating a value. - To resolve, make sure your expressions reference valid fields and that they resolve to valid input rather than null. 1. When you've set the **Prompt** to **Connected Chat Trigger Node** and the incoming data has null values. - To resolve, remove any null values from the `chatInput` field of the input node. 
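For example, a Code node placed before the agent could strip out items with a null `chatInput` before they reach the model. This is a minimal sketch of the idea, simplified to plain dictionaries rather than n8n's item structure:

```python
def drop_null_chat_input(items: list) -> list:
    """Remove items whose chatInput is null so the model never receives null content."""
    return [item for item in items if item.get("chatInput") is not None]
```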
## Error in sub-node Simple Memory This error displays when n8n runs into an issue with the [Simple Memory](../../../sub-nodes/n8n-nodes-langchain.memorybufferwindow/) sub-node. It most often occurs when your workflow or the workflow template you copied uses an older version of the Simple memory node (previously known as "Window Buffer Memory"). Try removing the Simple Memory node from your workflow and re-adding it, which will guarantee you're using the latest version of the node. ## A Chat Model sub-node must be connected error This error displays when n8n tries to execute the node without having a Chat Model connected. To resolve this, click the + Chat Model button at the bottom of your screen when the node is open, or click the Chat Model + connector when the node is closed. n8n will then open a selection of possible Chat Models to pick from. ## No prompt specified error This error occurs when the agent expects to get the prompt from the previous node automatically. Typically, this happens when you're using the [Chat Trigger Node](../../../../core-nodes/n8n-nodes-langchain.chattrigger/). To resolve this issue, find the **Prompt** parameter of the AI Agent node and change it from **Connected Chat Trigger Node** to **Define below**. This allows you to manually build your prompt by referencing output data from other nodes or by adding static text. # Conversational AI Agent node Feature removed n8n removed this functionality in February 2025. The Conversational Agent has human-like conversations. It can maintain context, understand user intent, and provide relevant answers. This agent is typically used for building chatbots, virtual assistants, and customer support systems. The Conversational Agent describes [tools](../../../../../../glossary/#ai-tool) in the system prompt and parses JSON responses for tool calls. If your preferred AI model doesn't support tool calling or you're handling simpler interactions, this agent is a good general option. 
It's more flexible but may be less accurate than the [Tools Agent](../tools-agent/). Refer to [AI Agent](../) for more information on the AI Agent node itself. You can use this agent with the [Chat Trigger](../../../../core-nodes/n8n-nodes-langchain.chattrigger/) node. Attach a memory sub-node so that users can have an ongoing conversation with multiple queries. Memory doesn't persist between sessions. ## Node parameters Configure the Conversational Agent using the following parameters. ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ### Require Specific Output Format This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node: - [Auto-fixing Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserstructured/) ## Node options Refine the Conversational Agent node's behavior using these options: ### Human Message Tell the agent about the tools it can use and add context to the user's input. You must include these expressions and variable: - `{tools}`: A LangChain expression that provides a string of the tools you've connected to the Agent. Provide some context or explanation about who should use the tools and how they should use them. - `{format_instructions}`: A LangChain expression that provides the schema or format from the output parser node you've connected. 
Since the instructions themselves are context, you don't need to provide context for this expression. - `{{input}}`: A LangChain variable containing the user's prompt. This variable populates with the value of the **Prompt** parameter. Provide some context that this is the user's input. Here's an example of how you might use these strings: Example: ``` TOOLS ------ Assistant can ask the user to use tools to look up information that may be helpful in answering the user's original question. The tools the human can use are: {tools} {format_instructions} USER'S INPUT -------------------- Here is the user's input (remember to respond with a markdown code snippet of a JSON blob with a single action, and NOTHING else): {{input}} ``` ### System Message If you'd like to send a message to the agent before the conversation starts, enter the message you'd like to send. Use this option to guide the agent's decision-making. ### Max Iterations Enter the number of times the model should run to try and generate a good answer from the user's prompt. Defaults to `10`. ### Return Intermediate Steps Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off). This could be useful for further refining the agent's behavior based on the steps it took. ### Tracing Metadata Add custom key-value metadata to tracing events for this agent. This is useful for filtering and debugging runs in tracing tools like [LangSmith](../../../../../../advanced-ai/langchain/langsmith/). Entries with empty keys or values are ignored. ## Templates and examples Refer to the main AI Agent node's [Templates and examples](../#templates-and-examples) section. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](../common-issues/). # OpenAI Functions Agent node Use the OpenAI Functions Agent node to use an [OpenAI functions model](https://platform.openai.com/docs/guides/function-calling). 
These are models that detect when a function should be called and respond with the inputs that should be passed to the function. Refer to [AI Agent](../) for more information on the AI Agent node itself. You can use this agent with the [Chat Trigger](../../../../core-nodes/n8n-nodes-langchain.chattrigger/) node. Attach a memory sub-node so that users can have an ongoing conversation with multiple queries. Memory doesn't persist between sessions. OpenAI Chat Model required You must use the [OpenAI Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatopenai/) with this agent. ## Node parameters Configure the OpenAI Functions Agent using the following parameters. ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ### Require Specific Output Format This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node: - [Auto-fixing Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserstructured/) ## Node options Refine the OpenAI Functions Agent node's behavior using these options: ### System Message If you'd like to send a message to the agent before the conversation starts, enter the message you'd like to send. Use this option to guide the agent's decision-making. 
### Max Iterations Enter the number of times the model should run to try and generate a good answer from the user's prompt. Defaults to `10`. ### Return Intermediate Steps Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off). This could be useful for further refining the agent's behavior based on the steps it took. ### Tracing Metadata Add custom key-value metadata to tracing events for this agent. This is useful for filtering and debugging runs in tracing tools like [LangSmith](../../../../../../advanced-ai/langchain/langsmith/). Entries with empty keys or values are ignored. ## Templates and examples Refer to the main AI Agent node's [Templates and examples](../#templates-and-examples) section. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](../common-issues/). # Plan and Execute Agent node The Plan and Execute Agent is like the [ReAct agent](../react-agent/) but with a focus on planning. It first creates a high-level plan to solve the given task and then executes the plan step by step. This agent is most useful for tasks that require a structured approach and careful planning. Refer to [AI Agent](../) for more information on the AI Agent node itself. ## Node parameters Configure the Plan and Execute Agent using the following parameters. ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ### Require Specific Output Format This parameter controls whether you want the node to require a specific output format. 
When turned on, n8n prompts you to connect one of these output parsers to the node: - [Auto-fixing Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserstructured/) ## Node options Refine the Plan and Execute Agent node's behavior using these options: ### Human Message Template Enter a message that n8n will send to the agent during each step execution. Available LangChain expressions: - `{previous_steps}`: Contains information about the previous steps the agent's already completed. - `{current_step}`: Contains information about the current step. - `{agent_scratchpad}`: Information to remember for the next iteration. ### Tracing Metadata Add custom key-value metadata to tracing events for this agent. This is useful for filtering and debugging runs in tracing tools like [LangSmith](../../../../../../advanced-ai/langchain/langsmith/). Entries with empty keys or values are ignored. ## Templates and examples Refer to the main AI Agent node's [Templates and examples](../#templates-and-examples) section. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](../common-issues/). # ReAct AI Agent node Feature removed n8n removed this functionality in February 2025. The ReAct Agent node implements [ReAct](https://react-lm.github.io/) logic. ReAct (reasoning and acting) brings together the reasoning powers of chain-of-thought prompting and action plan generation. The ReAct Agent reasons about a given task, determines the necessary actions, and then executes them. It follows the cycle of reasoning and acting until it completes the task. The ReAct agent can break down complex tasks into smaller sub-tasks, prioritise them, and execute them one after the other. Refer to [AI Agent](../) for more information on the AI Agent node itself. 
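The reason-act cycle described above can be sketched as a loop. This is a toy illustration of the control flow, not LangChain's implementation:

```python
def react_loop(reason, act, is_done, max_iterations=10):
    """Toy reason-act cycle: decide an action, execute it, observe, repeat."""
    observation = None
    for _ in range(max_iterations):
        action = reason(observation)   # reasoning step: pick the next action
        observation = act(action)      # acting step: execute it and observe
        if is_done(observation):
            break
    return observation

# Toy task: keep incrementing a counter until the observation reaches 3.
result = react_loop(
    reason=lambda obs: 0 if obs is None else obs,  # decide to continue from what we saw
    act=lambda action: action + 1,                 # each action increments the counter
    is_done=lambda obs: obs >= 3,
)
```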
No memory The ReAct agent doesn't support memory sub-nodes. This means it can't recall previous prompts or simulate an ongoing conversation. ## Node parameters Configure the ReAct Agent using the following parameters. ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ### Require Specific Output Format This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node: - [Auto-fixing Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserstructured/) ## Node options Use the options to create a message to send to the agent at the start of the conversation. The message type depends on the model you're using: - **Chat models**: These models have the concept of three components interacting (AI, system, and human). They can receive system messages and human messages (prompts). - **Instruct models**: These models don't have the concept of separate AI, system, and human components. They receive one body of text, the instruct message. ### Human Message Template Use this option to extend the user prompt. This is a way for the agent to pass information from one iteration to the next. Available LangChain expressions: - `{input}`: Contains the user prompt. - `{agent_scratchpad}`: Information to remember for the next iteration. 
### Prefix Message Enter text to prefix the tools list at the start of the conversation. You don't need to add the list of tools. LangChain automatically adds the tools list. ### Suffix Message for Chat Model Add text to append after the tools list at the start of the conversation when the agent uses a chat model. You don't need to add the list of tools. LangChain automatically adds the tools list. ### Suffix Message for Regular Model Add text to append after the tools list at the start of the conversation when the agent uses a regular/instruct model. You don't need to add the list of tools. LangChain automatically adds the tools list. ### Return Intermediate Steps Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off). This could be useful for further refining the agent's behavior based on the steps it took. ### Tracing Metadata Add custom key-value metadata to tracing events for this agent. This is useful for filtering and debugging runs in tracing tools like [LangSmith](../../../../../../advanced-ai/langchain/langsmith/). Entries with empty keys or values are ignored. ## Related resources Refer to LangChain's [ReAct Agents](https://js.langchain.com/docs/concepts/agents/) documentation for more information. ## Templates and examples Refer to the main AI Agent node's [Templates and examples](../#templates-and-examples) section. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](../common-issues/). # SQL AI Agent node Feature removed n8n removed this functionality in February 2025. The SQL Agent uses a SQL database as a data source. It can understand natural language questions, convert them into SQL queries, execute the queries, and present the results in a user-friendly format. This agent is valuable for building natural language interfaces to databases. Refer to [AI Agent](../) for more information on the AI Agent node itself. 
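The overall pipeline — natural language question in, SQL generated, query executed, results returned — can be sketched as follows. This is a toy illustration: the real agent uses an LLM to generate the SQL, whereas here the translation step is hardcoded, and the table and data are hypothetical:

```python
import sqlite3

def question_to_sql(question: str) -> str:
    """Hypothetical stand-in for the LLM step that translates a question to SQL."""
    if "how many orders" in question.lower():
        return "SELECT COUNT(*) FROM orders"
    raise ValueError(f"can't translate: {question}")

# A small in-memory database to run the generated query against.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL)")
conn.executemany("INSERT INTO orders (total) VALUES (?)", [(9.5,), (12.0,), (3.25,)])

sql = question_to_sql("How many orders are there?")
(order_count,) = conn.execute(sql).fetchone()
```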
## Node parameters Configure the SQL Agent using the following parameters. ### Data Source Choose the database to use as a data source for the node. Options include: - **MySQL**: Select this option to use a MySQL database. - Also select the **Credential for MySQL**. - **SQLite**: Select this option to use a SQLite database. - You must add a [Read/Write File From Disk](../../../../core-nodes/n8n-nodes-base.readwritefile/) node before the Agent to read your SQLite file. - Also enter the **Input Binary Field** name of your SQLite file coming from the Read/Write File From Disk node. - **Postgres**: Select this option to use a Postgres database. - Also select the **Credential for Postgres**. Postgres and MySQL Agents If you are using [Postgres](../../../../credentials/postgres/) or [MySQL](../../../../credentials/mysql/), this agent doesn't support the credential tunnel options. ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ## Node options Refine the SQL Agent node's behavior using these options: ### Ignored Tables If you'd like the node to ignore any tables from the database, enter a comma-separated list of tables you'd like it to ignore. If left empty, the agent doesn't ignore any tables. ### Include Sample Rows Enter the number of sample rows to include in the prompt to the agent. Default is `3`. Sample rows help the agent understand the schema of the database, but they also increase the number of tokens used. ### Included Tables If you'd only like to include specific tables from the database, enter a comma-separated list of tables to include. 
If left empty, the agent includes all tables. ### Prefix Prompt Enter a message you'd like to send to the agent before the **Prompt** text. This initial message can provide more context and guidance to the agent about what it can and can't do, and how to format the response. n8n fills this field with an example. ### Suffix Prompt Enter a message you'd like to send to the agent after the **Prompt** text. Available LangChain expressions: - `{chatHistory}`: A history of messages in this conversation, useful for maintaining context. - `{input}`: Contains the user prompt. - `{agent_scratchpad}`: Information to remember for the next iteration. n8n fills this field with an example. ### Limit Enter the maximum number of results to return. Default is `10`. ### Tracing Metadata Add custom key-value metadata to tracing events for this agent. This is useful for filtering and debugging runs in tracing tools like [LangSmith](../../../../../../advanced-ai/langchain/langsmith/). Entries with empty keys or values are ignored. ## Templates and examples Refer to the main AI Agent node's [Templates and examples](../#templates-and-examples) section. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](../common-issues/). # Tools AI Agent node The Tools Agent uses external [tools](../../../../../../glossary/#ai-tool) and APIs to perform actions and retrieve information. It can understand the capabilities of different tools and determine which tool to use depending on the task. This agent helps integrate LLMs with various external services and databases. This agent has an enhanced ability to work with tools and can ensure a standard output format. The Tools Agent implements [LangChain's tool calling](https://js.langchain.com/docs/concepts/tool_calling/) interface. This interface describes available tools and their schemas. The agent also has improved output parsing capabilities, as it passes the parser to the model as a formatting tool. 
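At its core, the tool-calling contract means each tool advertises a name, a description, and a parameter schema, and the agent dispatches the model's structured tool call to the matching implementation. A minimal sketch of that contract (the names and shapes here are illustrative, not LangChain's exact types):

```javascript
// Illustrative sketch of a tool-calling contract: each tool advertises a
// name, description, and parameter schema; the agent dispatches the model's
// structured call to the matching implementation.
const tools = {
  calculator: {
    description: 'Evaluate a basic arithmetic operation',
    parameters: { a: 'number', b: 'number', op: 'string' },
    run: ({ a, b, op }) => (op === 'add' ? a + b : a * b),
  },
};

// A structured "tool call" shaped like what a model might emit.
function dispatchToolCall(call) {
  const tool = tools[call.name];
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool.run(call.arguments);
}

const result = dispatchToolCall({
  name: 'calculator',
  arguments: { a: 2, b: 3, op: 'add' },
});
console.log(result); // 5
```

The descriptions and parameter schemas are what the model sees, which is why writing clear tool descriptions matters for tool selection.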
Refer to [AI Agent](../) for more information on the AI Agent node itself. You can use this agent with the [Chat Trigger](../../../../core-nodes/n8n-nodes-langchain.chattrigger/) node. Attach a memory sub-node so that users can have an ongoing conversation with multiple queries. Memory doesn't persist between sessions. This agent supports the following chat models: - [OpenAI Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatopenai/) - [Groq Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatgroq/) - [Mistral Cloud Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatmistralcloud/) - [Anthropic Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatanthropic/) - [Azure OpenAI Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatazureopenai/) The Tools Agent can use the following tools... - [Call n8n Workflow](../../../sub-nodes/n8n-nodes-langchain.toolworkflow/) - [Code](../../../sub-nodes/n8n-nodes-langchain.toolcode/) - [HTTP Request](../../../sub-nodes/n8n-nodes-langchain.toolhttprequest/) - [Action Network](../../../../app-nodes/n8n-nodes-base.actionnetwork/) - [ActiveCampaign](../../../../app-nodes/n8n-nodes-base.activecampaign/) - [Affinity](../../../../app-nodes/n8n-nodes-base.affinity/) - [Agile CRM](../../../../app-nodes/n8n-nodes-base.agilecrm/) - [Airtable](../../../../app-nodes/n8n-nodes-base.airtable/) - [APITemplate.io](../../../../app-nodes/n8n-nodes-base.apitemplateio/) - [Asana](../../../../app-nodes/n8n-nodes-base.asana/) - [AWS Lambda](../../../../app-nodes/n8n-nodes-base.awslambda/) - [AWS S3](../../../../app-nodes/n8n-nodes-base.awss3/) - [AWS SES](../../../../app-nodes/n8n-nodes-base.awsses/) - [AWS Textract](../../../../app-nodes/n8n-nodes-base.awstextract/) - [AWS Transcribe](../../../../app-nodes/n8n-nodes-base.awstranscribe/) - [Baserow](../../../../app-nodes/n8n-nodes-base.baserow/) - [Bubble](../../../../app-nodes/n8n-nodes-base.bubble/) - [Calculator](../../../sub-nodes/n8n-nodes-langchain.toolcalculator/) - 
[ClickUp](../../../../app-nodes/n8n-nodes-base.clickup/) - [CoinGecko](../../../../app-nodes/n8n-nodes-base.coingecko/) - [Compression](../../../../core-nodes/n8n-nodes-base.compression/) - [Crypto](../../../../core-nodes/n8n-nodes-base.crypto/) - [DeepL](../../../../app-nodes/n8n-nodes-base.deepl/) - [DHL](../../../../app-nodes/n8n-nodes-base.dhl/) - [Discord](../../../../app-nodes/n8n-nodes-base.discord/) - [Dropbox](../../../../app-nodes/n8n-nodes-base.dropbox/) - [Elasticsearch](../../../../app-nodes/n8n-nodes-base.elasticsearch/) - [ERPNext](../../../../app-nodes/n8n-nodes-base.erpnext/) - [Facebook Graph API](../../../../app-nodes/n8n-nodes-base.facebookgraphapi/) - [FileMaker](../../../../app-nodes/n8n-nodes-base.filemaker/) - [Ghost](../../../../app-nodes/n8n-nodes-base.ghost/) - [Git](../../../../core-nodes/n8n-nodes-base.git/) - [GitHub](../../../../app-nodes/n8n-nodes-base.github/) - [GitLab](../../../../app-nodes/n8n-nodes-base.gitlab/) - [Gmail](../../../../app-nodes/n8n-nodes-base.gmail/) - [Google Analytics](../../../../app-nodes/n8n-nodes-base.googleanalytics/) - [Google BigQuery](../../../../app-nodes/n8n-nodes-base.googlebigquery/) - [Google Calendar](../../../../app-nodes/n8n-nodes-base.googlecalendar/) - [Google Chat](../../../../app-nodes/n8n-nodes-base.googlechat/) - [Google Cloud Firestore](../../../../app-nodes/n8n-nodes-base.googlecloudfirestore/) - [Google Cloud Realtime Database](../../../../app-nodes/n8n-nodes-base.googlecloudrealtimedatabase/) - [Google Contacts](../../../../app-nodes/n8n-nodes-base.googlecontacts/) - [Google Docs](../../../../app-nodes/n8n-nodes-base.googledocs/) - [Google Drive](../../../../app-nodes/n8n-nodes-base.googledrive/) - [Google Sheets](../../../../app-nodes/n8n-nodes-base.googlesheets/) - [Google Slides](../../../../app-nodes/n8n-nodes-base.googleslides/) - [Google Tasks](../../../../app-nodes/n8n-nodes-base.googletasks/) - [Google Translate](../../../../app-nodes/n8n-nodes-base.googletranslate/) - [Google 
Workspace Admin](../../../../app-nodes/n8n-nodes-base.gsuiteadmin/) - [Gotify](../../../../app-nodes/n8n-nodes-base.gotify/) - [Grafana](../../../../app-nodes/n8n-nodes-base.grafana/) - [GraphQL](../../../../core-nodes/n8n-nodes-base.graphql/) - [Hacker News](../../../../app-nodes/n8n-nodes-base.hackernews/) - [Home Assistant](../../../../app-nodes/n8n-nodes-base.homeassistant/) - [HubSpot](../../../../app-nodes/n8n-nodes-base.hubspot/) - [Jenkins](../../../../app-nodes/n8n-nodes-base.jenkins/) - [Jira Software](../../../../app-nodes/n8n-nodes-base.jira/) - [JWT](../../../../core-nodes/n8n-nodes-base.jwt/) - [Kafka](../../../../app-nodes/n8n-nodes-base.kafka/) - [LDAP](../../../../core-nodes/n8n-nodes-base.ldap/) - [Line](../../../../app-nodes/n8n-nodes-base.line/) - [LinkedIn](../../../../app-nodes/n8n-nodes-base.linkedin/) - [Mailcheck](../../../../app-nodes/n8n-nodes-base.mailcheck/) - [Mailgun](../../../../app-nodes/n8n-nodes-base.mailgun/) - [Mattermost](../../../../app-nodes/n8n-nodes-base.mattermost/) - [Mautic](../../../../app-nodes/n8n-nodes-base.mautic/) - [Medium](../../../../app-nodes/n8n-nodes-base.medium/) - [Microsoft Excel 365](../../../../app-nodes/n8n-nodes-base.microsoftexcel/) - [Microsoft OneDrive](../../../../app-nodes/n8n-nodes-base.microsoftonedrive/) - [Microsoft Outlook](../../../../app-nodes/n8n-nodes-base.microsoftoutlook/) - [Microsoft SQL](../../../../app-nodes/n8n-nodes-base.microsoftsql/) - [Microsoft Teams](../../../../app-nodes/n8n-nodes-base.microsoftteams/) - [Microsoft To Do](../../../../app-nodes/n8n-nodes-base.microsofttodo/) - [Monday.com](../../../../app-nodes/n8n-nodes-base.mondaycom/) - [MongoDB](../../../../app-nodes/n8n-nodes-base.mongodb/) - [MQTT](../../../../app-nodes/n8n-nodes-base.mqtt/) - [MySQL](../../../../app-nodes/n8n-nodes-base.mysql/) - [NASA](../../../../app-nodes/n8n-nodes-base.nasa/) - [Nextcloud](../../../../app-nodes/n8n-nodes-base.nextcloud/) - [NocoDB](../../../../app-nodes/n8n-nodes-base.nocodb/) - 
[Notion](../../../../app-nodes/n8n-nodes-base.notion/) - [Odoo](../../../../app-nodes/n8n-nodes-base.odoo/) - [OpenWeatherMap](../../../../app-nodes/n8n-nodes-base.openweathermap/) - [Pipedrive](../../../../app-nodes/n8n-nodes-base.pipedrive/) - [Postgres](../../../../app-nodes/n8n-nodes-base.postgres/) - [Pushover](../../../../app-nodes/n8n-nodes-base.pushover/) - [QuickBooks Online](../../../../app-nodes/n8n-nodes-base.quickbooks/) - [QuickChart](../../../../app-nodes/n8n-nodes-base.quickchart/) - [RabbitMQ](../../../../app-nodes/n8n-nodes-base.rabbitmq/) - [Reddit](../../../../app-nodes/n8n-nodes-base.reddit/) - [Redis](../../../../app-nodes/n8n-nodes-base.redis/) - [RocketChat](../../../../app-nodes/n8n-nodes-base.rocketchat/) - [S3](../../../../app-nodes/n8n-nodes-base.s3/) - [Salesforce](../../../../app-nodes/n8n-nodes-base.salesforce/) - [Send Email](../../../../core-nodes/n8n-nodes-base.sendemail/) - [SendGrid](../../../../app-nodes/n8n-nodes-base.sendgrid/) - [SerpApi (Google Search)](../../../sub-nodes/n8n-nodes-langchain.toolserpapi/) - [Shopify](../../../../app-nodes/n8n-nodes-base.shopify/) - [Slack](../../../../app-nodes/n8n-nodes-base.slack/) - [Spotify](../../../../app-nodes/n8n-nodes-base.spotify/) - [Stripe](../../../../app-nodes/n8n-nodes-base.stripe/) - [Supabase](../../../../app-nodes/n8n-nodes-base.supabase/) - [Telegram](../../../../app-nodes/n8n-nodes-base.telegram/) - [Todoist](../../../../app-nodes/n8n-nodes-base.todoist/) - [TOTP](../../../../core-nodes/n8n-nodes-base.totp/) - [Trello](../../../../app-nodes/n8n-nodes-base.trello/) - [Twilio](../../../../app-nodes/n8n-nodes-base.twilio/) - [urlscan.io](../../../../app-nodes/n8n-nodes-base.urlscanio/) - [Vector Store](../../../sub-nodes/n8n-nodes-langchain.toolvectorstore/) - [Webflow](../../../../app-nodes/n8n-nodes-base.webflow/) - [Wikipedia](../../../sub-nodes/n8n-nodes-langchain.toolwikipedia/) - [Wolfram|Alpha](../../../sub-nodes/n8n-nodes-langchain.toolwolframalpha/) - 
[WooCommerce](../../../../app-nodes/n8n-nodes-base.woocommerce/) - [Wordpress](../../../../app-nodes/n8n-nodes-base.wordpress/) - [X (Formerly Twitter)](../../../../app-nodes/n8n-nodes-base.twitter/) - [YouTube](../../../../app-nodes/n8n-nodes-base.youtube/) - [Zendesk](../../../../app-nodes/n8n-nodes-base.zendesk/) - [Zoho CRM](../../../../app-nodes/n8n-nodes-base.zohocrm/) - [Zoom](../../../../app-nodes/n8n-nodes-base.zoom/) ## Node parameters Configure the Tools Agent using the following parameters. ### Prompt Select how you want the node to construct the prompt (also known as the user's query or input from the chat). Choose from: - **Take from previous node automatically**: If you select this option, the node expects an input from a previous node called `chatInput`. - **Define below**: If you select this option, provide either static text or an expression for dynamic content to serve as the prompt in the **Prompt (User Message)** field. ### Require Specific Output Format This parameter controls whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of these output parsers to the node: - [Auto-fixing Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserautofixing/) - [Item List Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparseritemlist/) - [Structured Output Parser](../../../sub-nodes/n8n-nodes-langchain.outputparserstructured/) ## Node options Refine the Tools Agent node's behavior using these options: ### System Message If you'd like to send a message to the agent before the conversation starts, enter the message you'd like to send. Use this option to guide the agent's decision-making. ### Max Iterations Enter the number of times the model should run to try and generate a good answer from the user's prompt. Defaults to `10`. ### Return Intermediate Steps Select whether to include intermediate steps the agent took in the final output (turned on) or not (turned off). 
This could be useful for further refining the agent's behavior based on the steps it took. ### Tracing Metadata Add custom key-value metadata to tracing events for this agent. This is useful for filtering and debugging runs in tracing tools like [LangSmith](../../../../../../advanced-ai/langchain/langsmith/). Entries with empty keys or values are ignored. ### Automatically Passthrough Binary Images Use this option to control whether binary images should be automatically passed through to the agent as image type messages (turned on) or not (turned off). ### Enable Streaming When enabled, the AI Agent sends data back to the user in real-time as it generates the answer. This is useful for long-running generations. This is enabled by default. Streaming requirements For streaming to work, your workflow must use a trigger that supports streaming responses, such as the [Chat Trigger](../../../../core-nodes/n8n-nodes-langchain.chattrigger/) or [Webhook](../../../../core-nodes/n8n-nodes-base.webhook/) node with **Response Mode** set to **Streaming**. ## Templates and examples Refer to the main AI Agent node's [Templates and examples](../#templates-and-examples) section. ## Dynamic parameters for tools with `$fromAI()` To learn how to dynamically populate parameters for app node tools, refer to [Let AI specify tool parameters with `$fromAI()`](../../../../../../advanced-ai/examples/using-the-fromai-function/). ## Human review for tool calls You can require human approval before the AI Agent executes specific tools. This is useful for tools that perform sensitive actions like sending messages, modifying records, or deleting data. To add a human review step: 1. Click the tool connector on the AI Agent node. 1. In the Tools Panel, find the **Human review** section. 1. Select your preferred approval channel (Chat, Slack, Telegram, and more) and configure it. 1. Connect the tools that require approval to the human review step. 
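Conceptually, the approval step above is a gate wrapped around tool execution: sensitive tools only run after an approver says yes. A toy simulation of that gate (not n8n's implementation; the `approve` callback stands in for the approval channel):

```javascript
// Toy simulation of a human-review gate around tool execution.
// The approve callback stands in for the approval channel (Chat, Slack,
// Telegram, and so on); this is not n8n's actual implementation.
function gatedToolCall(tool, args, requiresApproval, approve) {
  if (requiresApproval) {
    const approved = approve(`Run ${tool.name}(${JSON.stringify(args)})?`);
    if (!approved) return { status: 'denied' };
  }
  return { status: 'executed', result: tool.run(args) };
}

const deleteRecord = { name: 'deleteRecord', run: ({ id }) => `deleted ${id}` };

// Simulated approval channel that denies everything.
console.log(gatedToolCall(deleteRecord, { id: 7 }, true, () => false).status); // denied
```

Tools that aren't connected to the human review step skip the gate entirely and execute as soon as the model calls them.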
When the AI wants to use a gated tool, the workflow pauses and sends an approval request through your chosen channel. The recipient can approve (tool executes) or deny (action canceled). For detailed setup instructions and best practices, refer to [Human-in-the-loop for AI tool calls](../../../../../../advanced-ai/human-in-the-loop-tools/). ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](../common-issues/). # Question and Answer Chain node Use the Question and Answer Chain node to use a [vector store](../../../../../glossary/#ai-vector-store) as a retriever. On this page, you'll find the node parameters for the Question and Answer Chain node, and links to more resources. ## Node parameters ### Query The question you want to ask. ## Templates and examples **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow** by Derek Cheung [View template details](https://n8n.io/workflows/2183-ai-crew-to-automate-fundamental-stock-analysis-qanda-workflow/) **Advanced AI Demo (Presented at AI Developers #14 meetup)** by Max Tkacz [View template details](https://n8n.io/workflows/2358-advanced-ai-demo-presented-at-ai-developers-14-meetup/) [Browse Question and Answer Chain integration templates](https://n8n.io/integrations/retrieval-qanda-chain/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on retrieval chains](https://js.langchain.com/docs/tutorials/rag/) for examples of how LangChain can use a vector store as a retriever. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues For common errors or issues and suggested resolution steps, refer to [Common Issues](common-issues/). 
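The retrieval idea the chain is built on can be sketched in a few lines: rank stored documents by cosine similarity to the query embedding, then stuff the top matches into the prompt as context. Real embeddings have hundreds or thousands of dimensions; the three-dimensional vectors and documents below are made up for illustration.

```javascript
// Toy sketch of "vector store as retriever": rank documents by cosine
// similarity to the query embedding, then stuff the best matches into a
// prompt. Vectors and documents here are illustrative only.
function cosine(a, b) {
  const dot = a.reduce((sum, x, i) => sum + x * b[i], 0);
  const norm = (v) => Math.sqrt(v.reduce((sum, x) => sum + x * x, 0));
  return dot / (norm(a) * norm(b));
}

function retrieve(store, queryVector, k) {
  return store
    .map((doc) => ({ ...doc, score: cosine(doc.vector, queryVector) }))
    .sort((x, y) => y.score - x.score)
    .slice(0, k);
}

const store = [
  { text: 'n8n is a workflow automation tool.', vector: [0.9, 0.1, 0.0] },
  { text: 'Paris is the capital of France.', vector: [0.0, 0.2, 0.9] },
  { text: 'Workflows connect nodes together.', vector: [0.8, 0.3, 0.1] },
];

const top = retrieve(store, [1, 0, 0], 2);
const context = top.map((doc) => doc.text).join('\n');
console.log(`Answer using this context:\n${context}`);
```

The connected vector store and embeddings model do the heavy lifting in practice; the chain's job is the retrieve-then-answer orchestration shown here.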
# Question and Answer Chain node common issues Here are some common errors and issues with the [Question and Answer Chain node](../) and steps to resolve or troubleshoot them. ## No prompt specified error This error displays when the **Prompt** is empty or invalid. You might see this in one of two scenarios: 1. When you've set the **Prompt** to **Define below** and have an expression in your **Text** that isn't generating a value. - To resolve, enter a valid prompt in the **Text** field. - Make sure any expressions reference valid fields and that they resolve to valid input rather than null. 1. When you've set the **Prompt** to **Connected Chat Trigger Node** and the incoming data has null values. - To resolve, make sure your input contains a `chatInput` field. Add an [Edit Fields (Set)](../../../../core-nodes/n8n-nodes-base.set/) node to edit an incoming field name to `chatInput`. - Remove any null values from the `chatInput` field of the input node. ## A Retriever sub-node must be connected error This error displays when n8n tries to execute the node without having a Retriever connected. To resolve this, click the + Retriever button at the bottom of your screen when the node is open, or click the Retriever + connector when the node isn't open. n8n will then open a selection of possible Retrievers to pick from. ## Can't produce longer responses If you need to generate longer responses than the Question and Answer Chain node produces by default, you can try one or more of the following techniques: - **Connect a more verbose model**: Some AI models produce more terse results than others. Swapping your model for one with a larger context window and more verbose output can increase the word length of your responses. - **Increase the maximum number of tokens**: Many model nodes (for example the [OpenAI Chat Model](../../../sub-nodes/n8n-nodes-langchain.lmchatopenai/#maximum-number-of-tokens)) include a **Maximum Number of Tokens** option. 
You can set this to increase the maximum number of tokens the model can use to produce a response. - **Build larger responses in stages**: For more detailed answers, you may want to construct replies in stages using a variety of AI nodes. You can use AI to split up a single question into multiple prompts and create responses for each. You can then compose a final reply by combining the responses again. Though the details are different, you can find a good example of the general idea in this [template for writing a WordPress post with AI](https://n8n.io/workflows/2187-write-a-wordpress-post-with-ai-starting-from-a-few-keywords/). # Microsoft Agent 365 Trigger node Early preview This is an early preview for building agents with Microsoft Agent 365 and n8n. You need to be part of the [Frontier preview program](https://adoption.microsoft.com/copilot/frontier-program/) to get early access to Microsoft Agent 365. Use the Microsoft Agent 365 Trigger node to receive messages from Microsoft Agent 365 and respond with AI-powered agent capabilities. This node allows n8n to act as the backend for your Agent 365 agents. Credentials You can find authentication information for this node [here](../../../credentials/microsoftagent365/). ## Node connectors The Microsoft Agent 365 Trigger node can connect to the following sub-nodes: - **Model**: Connect a language model (Chat model sub-node) to process incoming messages - **Memory**: Connect a memory sub-node to maintain conversation context. A single n8n workflow powers multiple Agent instances on the Microsoft side, so multiple users will interact with the same workflow. Choose your session ID key carefully to scope conversations to individual Agent instances and prevent conversation history from bleeding between them. 
- **Tool**: Connect tool sub-nodes to give your agent additional capabilities ## Node options ### Enable Microsoft Work IQ Tools for A365 Toggle this option to give your agent access to Microsoft 365 tools through the Model Context Protocol (MCP). Default: Off. When enabled, select one of: - **All**: Enable all available Microsoft MCP tools - **Selected**: Choose specific tools from the list: - Calendar - Mail - SharePoint - Teams - Word - and more ## Getting started We recommend following these resources to set up your Agent 365 integration: 1. [Microsoft Agent 365 developer documentation](https://learn.microsoft.com/en-us/microsoft-agent-365/developer/): Official documentation for building agents with Microsoft Agent 365 1. [Agent 365 CLI Documentation](https://learn.microsoft.com/en-us/microsoft-agent-365/developer/agent-365-cli): Cross-platform command-line tool for deploying and managing Agent 365 applications on Azure ## Related resources Refer to [Microsoft Agent 365 developer documentation](https://learn.microsoft.com/en-us/microsoft-agent-365/developer/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Sub nodes Sub nodes attach to root nodes within a group of cluster nodes. They configure the overall functionality of the cluster. [Cluster nodes](../../../../glossary/#cluster-node-n8n) are node groups that work together to provide functionality in an n8n workflow. Instead of using a single node, you use a [root node](../../../../glossary/#root-node-n8n) and one or more [sub-nodes](../../../../glossary/#sub-node-n8n) that extend the functionality of the node. # Default Data Loader node Use the Default Data Loader node to load binary data files or JSON data for [vector stores](../../../../../glossary/#ai-vector-store) or summarization. On this page, you'll find a list of parameters the Default Data Loader node supports, and links to more resources. 
Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Text Splitting**: Choose from: - **Simple**: Uses the [Recursive Character Text Splitter](../n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/) with a chunk size of 1000 and an overlap of 200. - **Custom**: Allows you to connect a text splitter of your choice. - **Type of Data**: Select **Binary** or **JSON**. - **Mode**: Choose from: - **Load All Input Data**: Use all the node's input data. - **Load Specific Data**: Use [expressions](../../../../../data/expressions/) to define the data you want to load. You can add text as well as expressions. This means you can create a custom document from a mix of text and expressions. - **Data Format**: Displays when you set **Type of Data** to **Binary**. Select the file MIME type for your binary data. Set to **Automatically Detect by MIME Type** if you want n8n to set the data format for you. If you set a specific data format and the incoming file MIME type doesn't match it, the node errors. If you use **Automatically Detect by MIME Type**, the node falls back to text format if it can't match the file MIME type to a supported data format. ## Node options - **Metadata**: Set the metadata that should accompany the document in the vector store. 
This is what you match to using the **Metadata Filter** option when retrieving data using the vector store nodes. ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Scrape and summarize webpages with AI** by n8n Team [View template details](https://n8n.io/workflows/1951-scrape-and-summarize-webpages-with-ai/) **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) [Browse Default Data Loader integration templates](https://n8n.io/integrations/default-data-loader/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on document loaders](https://js.langchain.com/docs/modules/data_connection/document_loaders/integrations/file_loaders/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # GitHub Document Loader node Deprecated This node is deprecated, and will be removed in a future version. Use the GitHub Document Loader node to load data from a GitHub repository for [vector stores](../../../../../glossary/#ai-vector-store) or summarization. On this page, you'll find the node parameters for the GitHub Document Loader node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/github/). This node doesn't support OAuth for authentication. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Text Splitting**: Choose from: - **Simple**: Uses the [Recursive Character Text Splitter](../n8n-nodes-langchain.textsplitterrecursivecharactertextsplitter/) with a chunk size of 1000 and an overlap of 200. - **Custom**: Allows you to connect a text splitter of your choice. - **Repository Link**: Enter the URL of your GitHub repository. - **Branch**: Enter the branch name to use. ## Node options - **Recursive**: Select whether to include sub-folders and files (turned on) or not (turned off). - **Ignore Paths**: Enter directories to ignore. ## Templates and examples [Browse GitHub Document Loader integration templates](https://n8n.io/integrations/github-document-loader/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on document loaders](https://js.langchain.com/docs/modules/data_connection/document_loaders/integrations/file_loaders/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings AWS Bedrock node Use the Embeddings AWS Bedrock node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings AWS Bedrock node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/aws/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. 
You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. Learn more about available models in the [Amazon Bedrock documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/what-is-bedrock.html). ## Templates and examples [Browse Embeddings AWS Bedrock integration templates](https://n8n.io/integrations/embeddings-aws-bedrock/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's AWS Bedrock embeddings documentation](https://js.langchain.com/docs/integrations/platforms/aws/#text-embedding-models) and the [AWS Bedrock documentation](https://docs.aws.amazon.com/bedrock/) for more information about AWS Bedrock. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Azure OpenAI node Use the Embeddings Azure OpenAI node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings Azure OpenAI node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/azureopenai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Model (Deployment) Name**: Select the model (deployment) to use for generating embeddings. - **Batch Size**: Enter the maximum number of documents to send in each request. - **Strip New Lines**: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default. - **Timeout**: Enter the maximum amount of time a request can take in seconds. Set to `-1` for no timeout. ## Templates and examples **Auto-Update Knowledge Base with Drive, LlamaIndex & Azure OpenAI Embeddings** by Khairul Muhtadin [View template details](https://n8n.io/workflows/9174-auto-update-knowledge-base-with-drive-llamaindex-and-azure-openai-embeddings/) **Ask questions about past meetings using voice with OpenAI and Pinecone** by Rahul Joshi [View template details](https://n8n.io/workflows/12757-ask-questions-about-past-meetings-using-voice-with-openai-and-pinecone/) **PDF RAG Agent with Telegram Chat & Auto-Ingestion from Google Drive** by Meelioo [View template details](https://n8n.io/workflows/8860-pdf-rag-agent-with-telegram-chat-and-auto-ingestion-from-google-drive/) [Browse Embeddings Azure OpenAI integration templates](https://n8n.io/integrations/embeddings-azure-openai/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's OpenAI embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/azure_openai/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Cohere node Use the Embeddings Cohere node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. 
On this page, you'll find the node parameters for the Embeddings Cohere node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/cohere/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. Choose from: - **Embed-English-v2.0 (4096 Dimensions)** - **Embed-English-Light-v2.0 (1024 Dimensions)** - **Embed-Multilingual-v2.0 (768 Dimensions)** Learn more about available models in [Cohere's models documentation](https://docs.cohere.com/docs/models). 
## Templates and examples **Automate sales cold calling pipeline with Apify, GPT-4o, and WhatsApp** by Khairul Muhtadin [View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/) **Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG** by Ezema Kingsley Chibuzo [View template details](https://n8n.io/workflows/5589-create-a-multi-modal-telegram-support-bot-with-gpt-4-and-supabase-rag/) **Build a Document QA System with RAG using Milvus, Cohere, and OpenAI for Google Drive** by Aitor | 1Node [View template details](https://n8n.io/workflows/3848-build-a-document-qa-system-with-rag-using-milvus-cohere-and-openai-for-google-drive/) [Browse Embeddings Cohere integration templates](https://n8n.io/integrations/embeddings-cohere/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Langchain's Cohere embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/cohere/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Google Gemini node Use the Embeddings Google Gemini node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings Google Gemini node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/googleai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. 
In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. Learn more about available models in [Google Gemini's models documentation](https://ai.google.dev/models/gemini). ## Templates and examples **RAG Chatbot for Company Documents using Google Drive and Gemini** by Mihai Farcas [View template details](https://n8n.io/workflows/2753-rag-chatbot-for-company-documents-using-google-drive-and-gemini/) **🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant** by Joseph LePage [View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/) **🤖 Create a Documentation Expert Bot with RAG, Gemini, and Supabase** by Lucas Peyrin [View template details](https://n8n.io/workflows/5993-create-a-documentation-expert-bot-with-rag-gemini-and-supabase/) [Browse Embeddings Google Gemini integration templates](https://n8n.io/integrations/embeddings-google-gemini/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Langchain's Google Generative AI embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/google_generativeai) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Google PaLM node Use the Embeddings Google PaLM node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings Google PaLM node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/googleai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. 
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. n8n dynamically loads models from the Google PaLM API and you'll only see the models available to your account. ## Templates and examples **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) **RAG Chatbot for Company Documents using Google Drive and Gemini** by Mihai Farcas [View template details](https://n8n.io/workflows/2753-rag-chatbot-for-company-documents-using-google-drive-and-gemini/) [Browse Embeddings Google PaLM integration templates](https://n8n.io/integrations/embeddings-google-palm/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Langchain's Google PaLM embeddings documentation](https://js.langchain.com/v0.2/docs/integrations/text_embedding/google_palm/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Google Vertex node Use the Embeddings Google Vertex node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings Google Vertex node, and links to more resources. 
Credentials You can find authentication information for this node [here](../../../credentials/google/service-account/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. Learn more about available embedding models in [Google VertexAI embeddings API documentation](https://cloud.google.com/vertex-ai/generative-ai/docs/model-reference/text-embeddings-api). ## Templates and examples **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) **RAG Chatbot for Company Documents using Google Drive and Gemini** by Mihai Farcas [View template details](https://n8n.io/workflows/2753-rag-chatbot-for-company-documents-using-google-drive-and-gemini/) [Browse Embeddings Google Vertex integration templates](https://n8n.io/integrations/embeddings-google-vertex/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Google Generative AI embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/google_generativeai) for more information about the service. 
View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings HuggingFace Inference node Use the Embeddings HuggingFace Inference node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings HuggingFace Inference node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/huggingface/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. Refer to the [Hugging Face models documentation](https://huggingface.co/models?other=embeddings) for available models. ## Node options - **Custom Inference Endpoint**: Enter the URL of your deployed model, hosted by HuggingFace. If you set this, n8n ignores the **Model Name**. Refer to [HuggingFace's guide to inference](https://huggingface.co/inference-endpoints) for more information. 
## Templates and examples [Browse Embeddings HuggingFace Inference integration templates](https://n8n.io/integrations/embeddings-hugging-face-inference/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Langchain's HuggingFace Inference embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/hugging_face_inference/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Lemonade node Use the Embeddings Lemonade node to generate vector embeddings using models hosted and managed by a Lemonade server. This node is useful for workflows that perform semantic search, clustering, similarity matching, or any task that requires numerical vector representations of text. On this page, you'll find a list of operations the Embeddings Lemonade node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/lemonade/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters Configure the node with the following parameters. ### Model The model which will generate the embeddings. Models are loaded and managed through the Lemonade server configured for this node. Select the desired model from the list of available options served by your Lemonade instance. 
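Semantic search and similarity matching, mentioned above, work by comparing the vectors an embeddings node returns. As an illustration only (toy three-dimensional vectors and made-up documents; real models return hundreds or thousands of dimensions), cosine similarity ranks documents against a query like this:

```javascript
// Cosine similarity between two embedding vectors: 1 means the same
// direction, 0 means unrelated, -1 means opposite.
function cosineSimilarity(a, b) {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Toy 3-dimensional "embeddings" -- real models return e.g. 768 or 1024 values.
const query = [0.9, 0.1, 0.0];
const docs = [
  { text: "refund policy", vector: [0.8, 0.2, 0.1] },
  { text: "office hours", vector: [0.1, 0.2, 0.9] },
];

// Rank documents by similarity to the query vector.
const ranked = [...docs].sort(
  (x, y) => cosineSimilarity(query, y.vector) - cosineSimilarity(query, x.vector)
);
console.log(ranked[0].text); // "refund policy"
```

In a workflow you would normally hand the vectors to a vector store node and let it perform this comparison, but the ranking principle is the same.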
## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) [Browse Embeddings Lemonade integration templates](https://n8n.io/integrations/embeddings-lemonade/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Lemonade Server's documentation](https://lemonade-server.ai/docs/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Mistral Cloud node Use the Embeddings Mistral Cloud node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings Mistral Cloud node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/mistral/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. 
Learn more about available models in [Mistral's models documentation](https://docs.mistral.ai/platform/pricing/). ## Node options - **Batch Size**: Enter the maximum number of documents to send in each request. - **Strip New Lines**: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default. ## Templates and examples **Breakdown Documents into Study Notes using Templating MistralAI and Qdrant** by Jimleuk [View template details](https://n8n.io/workflows/2339-breakdown-documents-into-study-notes-using-templating-mistralai-and-qdrant/) **Build a Financial Documents Assistant using Qdrant and Mistral.ai** by Jimleuk [View template details](https://n8n.io/workflows/2335-build-a-financial-documents-assistant-using-qdrant-and-mistralai/) **Build a Tax Code Assistant with Qdrant, Mistral.ai and OpenAI** by Jimleuk [View template details](https://n8n.io/workflows/2341-build-a-tax-code-assistant-with-qdrant-mistralai-and-openai/) [Browse Embeddings Mistral Cloud integration templates](https://n8n.io/integrations/embeddings-mistral-cloud/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Langchain's Mistral embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/mistralai) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Embeddings Ollama node Use the Embeddings Ollama node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings Ollama node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/ollama/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. 
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the embedding. Choose from: - [all-minilm](https://ollama.com/library/all-minilm) (384 Dimensions) - [nomic-embed-text](https://ollama.com/library/nomic-embed-text) (768 Dimensions) Learn more about available models in [Ollama's models documentation](https://ollama.ai/library). ## Templates and examples **Local Chatbot with Retrieval Augmented Generation (RAG)** by Thomas Janssen [View template details](https://n8n.io/workflows/5148-local-chatbot-with-retrieval-augmented-generation-rag/) **Bitrix24 AI-Powered RAG Chatbot for Open Line Channels** by Ferenc Erb [View template details](https://n8n.io/workflows/3094-bitrix24-ai-powered-rag-chatbot-for-open-line-channels/) **Chat with Your Email History using Telegram, Mistral and Pgvector for RAG** by Alfonso Corretti [View template details](https://n8n.io/workflows/3763-chat-with-your-email-history-using-telegram-mistral-and-pgvector-for-rag/) [Browse Embeddings Ollama integration templates](https://n8n.io/integrations/embeddings-ollama/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Langchain's Ollama embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/ollama/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
# Embeddings OpenAI node Use the Embeddings OpenAI node to generate [embeddings](../../../../../glossary/#ai-embedding) for a given text. On this page, you'll find the node parameters for the Embeddings OpenAI node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/openai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Model**: Select the model to use for generating embeddings. - **Base URL**: Enter the URL to send the request to. Use this if you are using a self-hosted OpenAI-like model. - **Batch Size**: Enter the maximum number of documents to send in each request. - **Strip New Lines**: Select whether to remove new line characters from input text (turned on) or not (turned off). n8n enables this by default. - **Timeout**: Enter the maximum amount of time a request can take in seconds. Set to `-1` for no timeout. 
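The **Batch Size** and **Strip New Lines** options above amount to simple pre-processing before each embeddings request. A rough sketch of both steps in plain JavaScript (hypothetical helper names, not n8n internals):

```javascript
// Hypothetical sketch of the Batch Size and Strip New Lines options --
// not n8n's actual implementation.

// Strip New Lines: collapse newline characters so each document is one line.
function stripNewLines(text) {
  return text.replace(/\n/g, " ");
}

// Batch Size: split the documents into chunks of at most `batchSize`,
// one chunk per embeddings request.
function toBatches(documents, batchSize) {
  const batches = [];
  for (let i = 0; i < documents.length; i += batchSize) {
    batches.push(documents.slice(i, i + batchSize));
  }
  return batches;
}

const docs = ["first\ndoc", "second doc", "third doc"].map(stripNewLines);
const batches = toBatches(docs, 2);
// batches -> [["first doc", "second doc"], ["third doc"]]
```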
## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **Chat with PDF docs using AI (quoting sources)** by David Roberts [View template details](https://n8n.io/workflows/2165-chat-with-pdf-docs-using-ai-quoting-sources/) [Browse Embeddings OpenAI integration templates](https://n8n.io/integrations/embeddings-openai/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's OpenAI embeddings documentation](https://js.langchain.com/docs/integrations/text_embedding/openai/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Alibaba Cloud Chat Model node The Alibaba Cloud Chat Model node sends chat prompts to Alibaba Cloud's conversational models, for advanced AI chains and LangChain integrations. Use it to generate conversational responses, integrate model outputs into workflows, or run prompts with custom sampling, retry, and timeout settings. Credentials You can find authentication information for this node [here](../../../credentials/alibaba/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. 
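The sub-node behaviour described above can be modelled in a few lines of JavaScript (a sketch of the behaviour, not n8n's actual resolution code):

```javascript
// Hypothetical model of how n8n resolves the expression {{ $json.name }} --
// not n8n's actual implementation, just the behaviour described above.

const items = [
  { json: { name: "Alice" } },
  { json: { name: "Bob" } },
  { json: { name: "Cara" } },
];

// Most nodes, including root nodes: the expression resolves once per item.
const rootNodeResult = items.map((item) => item.json.name);
// -> ["Alice", "Bob", "Cara"]

// Sub-nodes: the expression always resolves against the first item only.
const subNodeResult = items.map(() => items[0].json.name);
// -> ["Alice", "Alice", "Alice"]
```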
## Operations ### Generate chat response Generate a chat-style response from the selected Alibaba Cloud model. **Parameters** - **Model** (type: *options*, field: `model`): The model that generates the completion. Learn more about available models on Alibaba Cloud: [Alibaba Cloud Model Studio — Models](https://www.alibabacloud.com/help/en/model-studio/getting-started/models). **Options** - **Frequency Penalty** (type: *number*, field: `frequencyPenalty`): Positive values penalize new tokens based on how often they appear so far, decreasing the model's likelihood to repeat the same line verbatim. Default: `0`. - **Maximum Number of Tokens** (type: *number*, field: `maxTokens`): The maximum number of tokens to generate in the completion. The limit depends on the selected model. A value of `-1` uses the model's default limit. Default: `-1`. - **Response Format** (type: *options*, field: `responseFormat`): The output format returned by the node, for example plain text or structured formats. Default: `text`. - **Presence Penalty** (type: *number*, field: `presencePenalty`): Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to discuss new topics. Default: `0`. - **Sampling Temperature** (type: *number*, field: `temperature`): Controls randomness: lower values make the output less random, and values near zero are nearly deterministic. Default: `0.7`. - **Timeout** (type: *number*, field: `timeout`): Maximum time (in milliseconds) allowed for a request before it's aborted. Default: `360000`. - **Max Retries** (type: *number*, field: `maxRetries`): Maximum number of retry attempts for failed requests. Default: `2`. - **Top P** (type: *number*, field: `topP`): Nucleus sampling parameter that controls diversity. A value of `0.5` means half of the probability mass is considered. Adjust **Top P** or **Sampling Temperature**, but not both. Default: `1`. 
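To see why **Top P** narrows the model's choices (and why it's usually adjusted instead of, rather than alongside, **Sampling Temperature**), here is a sketch of nucleus (top-p) filtering over a toy next-token distribution:

```javascript
// Nucleus (top-p) filtering: keep the smallest set of tokens whose
// cumulative probability reaches p, then renormalize. The distribution
// below is made up for illustration, not real model output.
function topPFilter(probs, p) {
  const sorted = Object.entries(probs).sort((a, b) => b[1] - a[1]);
  const kept = [];
  let cumulative = 0;
  for (const [token, prob] of sorted) {
    kept.push([token, prob]);
    cumulative += prob;
    if (cumulative >= p) break;
  }
  const total = kept.reduce((sum, [, prob]) => sum + prob, 0);
  return Object.fromEntries(kept.map(([token, prob]) => [token, prob / total]));
}

const nextToken = { the: 0.5, a: 0.3, cat: 0.15, zebra: 0.05 };

// p = 0.5: only "the" survives; p = 1 keeps the full distribution.
console.log(topPFilter(nextToken, 0.5)); // { the: 1 }
console.log(Object.keys(topPFilter(nextToken, 1)).length); // 4
```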
## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Respond to WhatsApp Messages with AI Like a Pro!** by Jimleuk [View template details](https://n8n.io/workflows/2466-respond-to-whatsapp-messages-with-ai-like-a-pro/) **AI-Powered WhatsApp Chatbot 🤖📲 for Text, Voice, Images & PDFs with memory 🧠** by Davide Boizza [View template details](https://n8n.io/workflows/3586-ai-powered-whatsapp-chatbot-for-text-voice-images-and-pdfs-with-memory/) [Browse Alibaba Cloud Chat Model integration templates](https://n8n.io/integrations/alibaba-cloud-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Alibaba Cloud Model Studio — Models](https://www.alibabacloud.com/help/en/model-studio/getting-started/models) for more information about available models and their capabilities. # Anthropic Chat Model node Use the Anthropic Chat Model node to use Anthropic's Claude family of chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the Anthropic Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/anthropic/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. 
## Node parameters - **Model**: Select the model that generates the completion. Choose from: - **Claude** - **Claude Instant** Learn more in the [Anthropic model documentation](https://docs.anthropic.com/claude/reference/selecting-a-model). ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Top K**: Enter the number of token choices the model uses to generate the next token. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples **Notion AI Assistant Generator** by Max Tkacz [View template details](https://n8n.io/workflows/2415-notion-ai-assistant-generator/) **Gmail AI Email Manager** by Max Mitcham [View template details](https://n8n.io/workflows/4722-gmail-ai-email-manager/) **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) [Browse Anthropic Chat Model integration templates](https://n8n.io/integrations/anthropic-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Anthropic documentation](https://js.langchain.com/docs/integrations/chat/anthropic/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # AWS Bedrock Chat Model node The AWS Bedrock Chat Model node lets you use LLM models hosted on the AWS Bedrock platform. On this page, you'll find the node parameters for the AWS Bedrock Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/aws/). 
Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model that generates the completion. Learn more about available models in the [Amazon Bedrock model documentation](https://docs.aws.amazon.com/bedrock/latest/userguide/models-supported.html). ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ## Proxy limitations This node doesn't support the [`NO_PROXY` environment variable](../../../../../hosting/configuration/environment-variables/deployment/). 
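The **Sampling Temperature** option above follows the standard temperature-scaled softmax idea: scores are divided by the temperature before normalizing, so low values sharpen the distribution (more deterministic) and high values flatten it (more diverse). A toy illustration of the general technique, with made-up scores rather than real model output:

```javascript
// Temperature-scaled softmax over toy scores: divide each score by the
// temperature before exponentiating and normalizing.
function softmaxWithTemperature(scores, temperature) {
  const scaled = scores.map((s) => Math.exp(s / temperature));
  const total = scaled.reduce((sum, v) => sum + v, 0);
  return scaled.map((v) => v / total);
}

const scores = [2.0, 1.0, 0.5];

const cold = softmaxWithTemperature(scores, 0.2); // top option dominates
const hot = softmaxWithTemperature(scores, 5.0);  // probabilities nearly even

console.log(cold[0] > hot[0]); // true
```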
## Templates and examples **💅 AI Agents Generate Content & Automate Posting for Beauty Salon Social Media 📲** by N8ner [View template details](https://n8n.io/workflows/12834-ai-agents-generate-content-and-automate-posting-for-beauty-salon-social-media/) **💾 Generate Blog Posts on Autopilot with GPT‑5, Tavily and WordPress** by N8ner [View template details](https://n8n.io/workflows/12858-generate-blog-posts-on-autopilot-with-gpt5-tavily-and-wordpress/) **Create a Business Model Canvas and infographic image with Gemini** by Ryosuke Mori [View template details](https://n8n.io/workflows/12833-create-a-business-model-canvas-and-infographic-image-with-gemini/) [Browse AWS Bedrock Chat Model integration templates](https://n8n.io/integrations/aws-bedrock-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's AWS Bedrock Chat Model documentation](https://js.langchain.com/docs/integrations/chat/bedrock/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Azure OpenAI Chat Model node Use the Azure OpenAI Chat Model node to use OpenAI's chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the Azure OpenAI Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/azureopenai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the completion. ## Node options - **Frequency Penalty**: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself. - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Response Format**: Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON. - **Presence Penalty**: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Timeout**: Enter the maximum request time in milliseconds. - **Max Retries**: Enter the maximum number of times to retry a request. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Proxy limitations This node doesn't support the [`NO_PROXY` environment variable](../../../../../hosting/configuration/environment-variables/deployment/). 
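**Frequency Penalty** and **Presence Penalty** above differ in how they count repetition: the frequency penalty grows with how often a token has already appeared, while the presence penalty is a flat one-off deduction for any token that has appeared at all. A sketch of that standard logit adjustment (simplified, with toy numbers):

```javascript
// Standard penalty adjustment (simplified):
// logit -= count * frequencyPenalty + (count > 0 ? presencePenalty : 0)
function penalize(logit, count, frequencyPenalty, presencePenalty) {
  return logit - count * frequencyPenalty - (count > 0 ? presencePenalty : 0);
}

// A token already generated 3 times vs. a fresh token, same raw logit:
const repeated = penalize(2.0, 3, 0.5, 0.6); // 2.0 - 3*0.5 - 0.6 ≈ -0.1
const fresh = penalize(2.0, 0, 0.5, 0.6);    // unchanged: 2.0

console.log(repeated < fresh); // true
```

Positive values thus make already-used tokens less likely, which is why higher settings reduce repetition and encourage new topics.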
## Templates and examples **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **Build Your Own Counseling Chatbot on LINE to Support Mental Health Conversations** by lin@davoy.tech [View template details](https://n8n.io/workflows/2975-build-your-own-counseling-chatbot-on-line-to-support-mental-health-conversations/) **CallForge - 05 - Gong.io Call Analysis with Azure AI & CRM Sync** by Angel Menendez [View template details](https://n8n.io/workflows/3035-callforge-05-gongio-call-analysis-with-azure-ai-and-crm-sync/) [Browse Azure OpenAI Chat Model integration templates](https://n8n.io/integrations/azure-openai-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Azure OpenAI documentation](https://js.langchain.com/docs/integrations/chat/azure) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Cohere Chat Model node Use the Cohere Chat Model node to access Cohere's large language models for conversational AI and text generation tasks. On this page, you'll find the node parameters for the Cohere Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/cohere/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item.
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model which will generate the completion. n8n dynamically loads available models from the Cohere API. Learn more in the [Cohere model documentation](https://docs.cohere.com/v2/docs/models#command). ## Node options - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Max Retries**: Enter the maximum number of times to retry a request. ## Templates and examples **Automate sales cold calling pipeline with Apify, GPT-4o, and WhatsApp** by Khairul Muhtadin [View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/) **Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG** by Ezema Kingsley Chibuzo [View template details](https://n8n.io/workflows/5589-create-a-multi-modal-telegram-support-bot-with-gpt-4-and-supabase-rag/) **Build a Document QA System with RAG using Milvus, Cohere, and OpenAI for Google Drive** by Aitor | 1Node [View template details](https://n8n.io/workflows/3848-build-a-document-qa-system-with-rag-using-milvus-cohere-and-openai-for-google-drive/) [Browse Cohere Chat Model integration templates](https://n8n.io/integrations/cohere-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Cohere's API documentation](https://docs.cohere.com/v2/reference/about) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # DeepSeek Chat Model node Use the DeepSeek Chat Model node to use DeepSeek's chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the DeepSeek Chat Model node and links to more resources. 
Credentials You can find authentication information for this node [here](../../../credentials/deepseek/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Model Select the model to use to generate the completion. n8n dynamically loads models from DeepSeek and you'll only see the models available to your account. ## Node options Use these options to further refine the node's behavior. ### Base URL Enter a URL here to override the default URL for the API. ### Frequency Penalty Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself. ### Maximum Number of Tokens Enter the maximum number of tokens used, which sets the completion length. ### Response Format Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON. ### Presence Penalty Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics. ### Sampling Temperature Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ### Timeout Enter the maximum request time in milliseconds. ### Max Retries Enter the maximum number of times to retry a request. 
### Top P Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples **🐋🤖 DeepSeek AI Agent + Telegram + LONG TERM Memory 🧠** by Joseph LePage [View template details](https://n8n.io/workflows/2864-deepseek-ai-agent-telegram-long-term-memory/) **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **AI Research Assistant via Telegram (GPT-4o mini + DeepSeek R1 + SerpAPI)** by Arlin Perez [View template details](https://n8n.io/workflows/5924-ai-research-assistant-via-telegram-gpt-4o-mini-deepseek-r1-serpapi/) [Browse DeepSeek Chat Model integration templates](https://n8n.io/integrations/deepseek-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources As DeepSeek is API-compatible with OpenAI, you can refer to [LangChain's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Google Gemini Chat Model node Use the Google Gemini Chat Model node to use Google's Gemini chat models with conversational agents. On this page, you'll find the node parameters for the Google Gemini Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/googleai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn.
For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the completion. n8n dynamically loads models from the Google Gemini API and you'll only see the models available to your account. ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Top K**: Enter the number of token choices the model uses to generate the next token. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. - **Safety Settings**: Gemini supports adjustable safety settings. Refer to Google's [Gemini API safety settings](https://ai.google.dev/docs/safety_setting_gemini) for information on the available filters and levels. ## Limitations ### No proxy support The Google Gemini Chat Model node uses Google's SDK, which doesn't support proxy configuration. 
If you need to proxy your connection, as a workaround, you can set up a dedicated reverse proxy for Gemini requests and change the **Host** parameter in your [Google Gemini credentials](../../../credentials/googleai/) to point to your proxy address. ## Templates and examples **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI-Powered Social Media Content Generator & Publisher** by Amjid Ali [View template details](https://n8n.io/workflows/2950-ai-powered-social-media-content-generator-and-publisher/) **Build Your First AI Agent** by Lucas Peyrin [View template details](https://n8n.io/workflows/6270-build-your-first-ai-agent/) [Browse Google Gemini Chat Model integration templates](https://n8n.io/integrations/google-gemini-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Google Gemini documentation](https://js.langchain.com/docs/integrations/chat/google_generativeai) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Google Vertex Chat Model node Use the Google Vertex AI Chat Model node to use Google's Vertex AI chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the Google Vertex AI Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/google/service-account/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn.
For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Project ID**: Select the project ID from your Google Cloud account to use. n8n dynamically loads projects from the Google Cloud account, but you can also enter it manually. - **Model Name**: Select the name of the model to use to generate the completion, for example `gemini-1.5-flash-001`, `gemini-1.5-pro-001`, etc. Refer to [Google models](https://cloud.google.com/vertex-ai/generative-ai/docs/learn/models) for a list of available models. ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Thinking Budget**: Controls reasoning tokens for thinking models. Set to `0` to disable automatic thinking. Set to `-1` for dynamic thinking. Leave empty for auto mode. - **Top K**: Enter the number of token choices the model uses to generate the next token. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. - **Safety Settings**: Gemini supports adjustable safety settings. Refer to Google's [Gemini API safety settings](https://ai.google.dev/docs/safety_setting_gemini) for information on the available filters and levels. 
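To make the **Top K** and **Top P** options above more concrete, here is a minimal sketch of how they narrow the pool of candidate tokens before sampling. This is illustrative logic with made-up logit values, not Vertex AI's actual sampler:

```python
import math

def top_k_top_p_filter(logits, k, p):
    """Illustrative only: restrict a token distribution with Top K, then Top P."""
    # Convert raw logits to probabilities (softmax).
    mx = max(logits.values())
    exps = {tok: math.exp(v - mx) for tok, v in logits.items()}
    total = sum(exps.values())
    probs = {tok: e / total for tok, e in exps.items()}
    # Top K: keep only the k most likely tokens.
    ranked = sorted(probs.items(), key=lambda kv: kv[1], reverse=True)[:k]
    # Top P: keep the smallest prefix whose cumulative probability reaches p.
    kept, cum = [], 0.0
    for tok, pr in ranked:
        kept.append(tok)
        cum += pr
        if cum >= p:
            break
    return kept

logits = {"the": 3.0, "a": 2.0, "cat": 0.5, "dog": 0.1}  # hypothetical values
print(top_k_top_p_filter(logits, k=3, p=0.9))  # ['the', 'a']
```

Lowering either value shrinks the candidate pool, which is why low **Top P** values "ignore less probable options."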
## Templates and examples **Extract text from PDF and image using Vertex AI (Gemini) into CSV** by Keith Rumjahn [View template details](https://n8n.io/workflows/2614-extract-text-from-pdf-and-image-using-vertex-ai-gemini-into-csv/) **Automated Stale User Re-Engagement System with Supabase, Google Sheets & Gmail** by iamvaar [View template details](https://n8n.io/workflows/5603-automated-stale-user-re-engagement-system-with-supabase-google-sheets-and-gmail/) **Create Structured Notion Workspaces from Notes & Voice Using Gemini & GPT** by Alex Huy [View template details](https://n8n.io/workflows/7972-create-structured-notion-workspaces-from-notes-and-voice-using-gemini-and-gpt/) [Browse Google Vertex Chat Model integration templates](https://n8n.io/integrations/google-vertex-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Google Vertex AI documentation](https://js.langchain.com/docs/integrations/chat/google_vertex_ai/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Groq Chat Model node Use the Groq Chat Model node to access Groq's large language models for conversational AI and text generation tasks. On this page, you'll find the node parameters for the Groq Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/groq/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model which will generate the completion. n8n dynamically loads available models from the Groq API. Learn more in the [Groq model documentation](https://console.groq.com/docs/models). ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ## Templates and examples **Conversational Interviews with AI Agents and n8n Forms** by Jimleuk [View template details](https://n8n.io/workflows/2566-conversational-interviews-with-ai-agents-and-n8n-forms/) **Telegram chat with PDF** by felipe biava cataneo [View template details](https://n8n.io/workflows/2392-telegram-chat-with-pdf/) **Build an AI-Powered Tech Radar Advisor with SQL DB, RAG, and Routing Agents** by Sean Lon [View template details](https://n8n.io/workflows/3151-build-an-ai-powered-tech-radar-advisor-with-sql-db-rag-and-routing-agents/) [Browse Groq Chat Model integration templates](https://n8n.io/integrations/groq-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Groq's API documentation](https://console.groq.com/docs/quickstart) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Lemonade Chat Model node Use the Lemonade Chat Model node to run chat-capable language models managed by a Lemonade server from within n8n. This node functions as a LangChain-compatible chat model root node and is suitable for chat-style workloads. It lets you select a model hosted on your Lemonade server, and control generation behavior using common sampling and decoding options. 
On this page, you'll find a list of the node parameters, and available options to refine generation. Credentials You can find authentication information for this node [here](../../../credentials/lemonade/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Model The model which will generate the completion. Models are loaded and managed through the Lemonade server. This parameter is required. Select the model name made available by your Lemonade server (for example, a model alias like "gpt-4", or any custom model name exposed by Lemonade). Models are provided by the Lemonade server; if you don't see the model you expect, verify your Lemonade server configuration and credentials. ## Node options Use these options to further refine the node's behavior. ### Sampling Temperature Controls the randomness of the generated text. Lower values make the output more focused and deterministic, while higher values make it more diverse and random. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 0.7 | ### Top P Controls which words the model can choose from when generating text. Lower values progressively remove the least likely options, so the model can only pick from a smaller, higher-confidence pool. 
| Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 1 | ### Frequency Penalty Adjusts the penalty for tokens that have already appeared in the generated text. Positive values discourage repetition, negative values encourage it. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 0 | ### Presence Penalty Adjusts the penalty for tokens based on their presence in the generated text so far. Positive values penalize tokens that have already appeared, encouraging diversity. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 0 | ### Max Tokens to Generate The maximum number of tokens to generate. Set to -1 for no limit. Be cautious when setting this to a large value, as it can lead to long outputs. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | -1 | ### Stop Sequences Comma-separated list of sequences where the model will stop generating text. Use this to define explicit termination strings for responses. | Property | Value | | -------- | ------ | | Type | string | | Required | no | | Default | "" | ## Templates and examples **AI agent chat** by n8n Team [View template details](https://n8n.io/workflows/1954-ai-agent-chat/) **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Angie, personal AI assistant with Telegram voice and text** by Derek Cheung [View template details](https://n8n.io/workflows/2462-angie-personal-ai-assistant-with-telegram-voice-and-text/) [Browse Lemonade Chat Model integration templates](https://n8n.io/integrations/lemonade-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Lemonade Server's documentation](https://lemonade-server.ai/docs/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
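The **Stop Sequences** option in the Lemonade node options above can be illustrated with a small sketch of how a comma-separated stop list truncates generated text. This is a simplified model of the behavior, not the Lemonade server's implementation:

```python
def apply_stop_sequences(text: str, stop_sequences: str) -> str:
    """Cut generated text at the earliest occurrence of any stop sequence.
    Simplified illustration of the Stop Sequences option."""
    stops = [s for s in stop_sequences.split(",") if s]
    cut = len(text)
    for stop in stops:
        idx = text.find(stop)
        if idx != -1:
            cut = min(cut, idx)
    return text[:cut]

result = apply_stop_sequences("Answer: 42 END extra tokens", "END,STOP")
# result == "Answer: 42 "
```

In practice the server stops generating at the stop sequence rather than trimming afterwards, but the visible effect on the output is the same.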
# MiniMax Chat Model node Use the MiniMax Chat Model node to use MiniMax's chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the MiniMax Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/minimax/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model that generates the completion. Refer to [MiniMax's model documentation](https://platform.minimax.io/docs/guides/models-intro) for the available models. ## Node options - **Hide Thinking**: When turned on (default), the node strips `<think>` tags from the model's response. Turn this off to include the model's reasoning in the output. - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Timeout**: Enter the maximum request time in milliseconds. - **Max Retries**: Enter the maximum number of times to retry a request. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options.
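Assuming the model wraps its reasoning in `<think>…</think>` tags (a common convention for reasoning models; the exact tag name here is an assumption, not confirmed by this page), the effect of the **Hide Thinking** option can be sketched as:

```python
import re

def hide_thinking(response: str) -> str:
    """Sketch of the Hide Thinking option: remove <think>...</think>
    blocks (tag name assumed) so only the final answer remains."""
    return re.sub(r"<think>.*?</think>", "", response, flags=re.DOTALL).strip()

print(hide_thinking("<think>Reason step by step...</think>The answer is 42."))
# The answer is 42.
```

Turning the option off skips this stripping step, so the reasoning block appears in the node output alongside the answer.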
## Templates and examples [Browse MiniMax Chat Model integration templates](https://n8n.io/integrations/minimax-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [MiniMax's documentation](https://platform.minimax.io/docs/guides/models-intro) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Mistral Cloud Chat Model node Use the Mistral Cloud Chat Model node to combine Mistral Cloud's chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the Mistral Cloud Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/mistral/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the completion. n8n dynamically loads models from Mistral Cloud and you'll only see the models available to your account. ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. 
- **Timeout**: Enter the maximum request time in milliseconds. - **Max Retries**: Enter the maximum number of times to retry a request. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. - **Enable Safe Mode**: Enable safe mode by injecting a safety prompt at the beginning of the completion. This helps prevent the model from generating offensive content. - **Random Seed**: Enter a seed to use for random sampling. If set, repeated calls generate deterministic results. ## Templates and examples **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **Breakdown Documents into Study Notes using Templating MistralAI and Qdrant** by Jimleuk [View template details](https://n8n.io/workflows/2339-breakdown-documents-into-study-notes-using-templating-mistralai-and-qdrant/) **Build a Financial Documents Assistant using Qdrant and Mistral.ai** by Jimleuk [View template details](https://n8n.io/workflows/2335-build-a-financial-documents-assistant-using-qdrant-and-mistralai/) [Browse Mistral Cloud Chat Model integration templates](https://n8n.io/integrations/mistral-cloud-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Mistral documentation](https://js.langchain.com/docs/integrations/chat/mistral) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Moonshot Kimi Chat Model node Use the Moonshot Kimi Chat Model node to send chat requests to the Kimi chat API and generate conversational responses. Use it when you need an AI chat model in a workflow. For example, you can power assistants, build multi-step AI chains, or produce model-driven content with tunable sampling and token settings.
Credentials You can find authentication information for this node [here](../../../credentials/moonshot/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Operations ### Generate chat response Sends a chat request to the selected Kimi model and returns the model's response. **Parameters** - **Model** (type: options, field: `model`): The model that generates the completion. Default: `kimi-k2.5`. Learn more at [Moonshot Kimi Chat API docs](https://platform.kimi.ai/docs/api/chat). **Options** - **Frequency Penalty** (type: number, field: `frequencyPenalty`): Positive values penalize new tokens based on their existing frequency, so the model repeats less. Default: `0`. - **Maximum number of tokens** (type: number, field: `maxTokens`): The maximum number of tokens to generate in the completion. A value of -1 uses the model default. The token limit depends on the selected model. Default: `-1`. - **Response format** (type: options, field: `responseFormat`): Format of the model response. Default: `text`. - **Presence penalty** (type: number, field: `presencePenalty`): Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. Default: `0`. - **Sampling temperature** (type: number, field: `temperature`): Controls randomness. 
Lower values make outputs less random; near zero the model becomes more deterministic. Default: `0.7`. - **Timeout** (type: number, field: `timeout`): Maximum time a request can take, in milliseconds. Default: `360000` (six minutes). - **Max retries** (type: number, field: `maxRetries`): Maximum number of retries to attempt for failed requests. Default: `2`. - **Top P** (type: number, field: `topP`): Nucleus sampling parameter controlling diversity. A value of `0.5` means the model considers half of the likelihood-weighted options. We recommend changing either **Top P** or **Sampling Temperature**, not both. Default: `1`. ## Templates and examples [Browse Moonshot Kimi Chat Model integration templates](https://n8n.io/integrations/moonshot-kimi-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Moonshot Kimi Chat Model's documentation](https://platform.kimi.ai/docs/api/chat) for more information about the service and available model options. # OpenRouter Chat Model node Use the OpenRouter Chat Model node to use OpenRouter's chat models with conversational agents. On this page, you'll find the node parameters for the OpenRouter Chat Model node and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/openrouter/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item.
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Model Select the model to use to generate the completion. n8n dynamically loads models from OpenRouter and you'll only see the models available to your account. ## Node options Use these options to further refine the node's behavior. ### Frequency Penalty Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself. ### Maximum Number of Tokens Enter the maximum number of tokens used, which sets the completion length. ### Response Format Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON. ### Presence Penalty Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics. ### Sampling Temperature Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ### Timeout Enter the maximum request time in milliseconds. ### Max Retries Enter the maximum number of times to retry a request. ### Top P Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. 
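The **Timeout** and **Max Retries** options above amount to a standard retry loop around the API call. The sketch below is a simplified illustration; `flaky_request` is a hypothetical stand-in for the model request, not an n8n or OpenRouter API:

```python
def call_with_retries(request_fn, max_retries=2, timeout_ms=360000):
    """Retry a failing request up to max_retries extra times; each attempt
    gets the configured timeout. Simplified sketch of the node options."""
    last_err = None
    for _attempt in range(1 + max_retries):
        try:
            return request_fn(timeout=timeout_ms / 1000.0)  # milliseconds -> seconds
        except Exception as err:
            last_err = err
    raise last_err

# Example: a call that fails twice, then succeeds on the third attempt.
attempts = {"n": 0}
def flaky_request(timeout):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TimeoutError("request exceeded %.0f s" % timeout)
    return "completion text"

print(call_with_retries(flaky_request, max_retries=2))  # completion text
```

With **Max Retries** set to 2, a request is attempted at most three times before the node reports the final error.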
## Templates and examples **Automate SEO-Optimized WordPress Posts with AI & Google Sheets** by Davide Boizza [View template details](https://n8n.io/workflows/3085-automate-seo-optimized-wordpress-posts-with-ai-and-google-sheets/) **Personal Life Manager with Telegram, Google Services & Voice-Enabled AI** by Derek Cheung [View template details](https://n8n.io/workflows/8237-personal-life-manager-with-telegram-google-services-and-voice-enabled-ai/) **Publish WordPress Posts to Social Media X, Facebook, LinkedIn, Instagram with AI** by Davide Boizza [View template details](https://n8n.io/workflows/3086-publish-wordpress-posts-to-social-media-x-facebook-linkedin-instagram-with-ai/) [Browse OpenRouter Chat Model integration templates](https://n8n.io/integrations/openrouter-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources As OpenRouter is API-compatible with OpenAI, you can refer to [LangChain's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Vercel AI Gateway Chat Model node Use the Vercel AI Gateway Chat Model node to use AI Gateway chat models with conversational agents. On this page, you'll find the node parameters for the Vercel AI Gateway Chat Model node and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/vercel/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn.
In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Model Select the model to use to generate the completion. n8n dynamically loads models from the AI Gateway and you'll only see the models available to your account. ## Node options Use these options to further refine the node's behavior. ### Frequency Penalty Use this option to control the chance of the model repeating itself. Higher values reduce the chance of the model repeating itself. ### Maximum Number of Tokens Enter the maximum number of tokens used, which sets the completion length. ### Response Format Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON. ### Presence Penalty Use this option to control the chance of the model talking about new topics. Higher values increase the chance of the model talking about new topics. ### Sampling Temperature Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ### Timeout Enter the maximum request time in milliseconds. ### Max Retries Enter the maximum number of times to retry a request. ### Top P Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples [Browse Vercel AI Gateway Chat Model integration templates](https://n8n.io/integrations/vercel-ai-gateway-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources As the Vercel AI Gateway is API-compatible with OpenAI, you can refer to [LangChain's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
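Because OpenRouter and the Vercel AI Gateway are API-compatible with OpenAI, the node options described above correspond to fields in an OpenAI-style chat completions request. The sketch below shows that mapping; the field names follow the OpenAI API, but the option-object shape and model ID are hypothetical placeholders:

```javascript
// Sketch: mapping the documented node options onto an OpenAI-style
// chat completions request body. Illustrative only.
function buildRequestBody(model, messages, opts = {}) {
  const body = {
    model,
    messages,
    temperature: opts.samplingTemperature ?? 1,    // randomness of sampling
    top_p: opts.topP ?? 1,                         // lower → ignore less probable tokens
    frequency_penalty: opts.frequencyPenalty ?? 0, // higher → less repetition
    presence_penalty: opts.presencePenalty ?? 0,   // higher → more new topics
  };
  if (opts.maxTokens !== undefined) {
    body.max_tokens = opts.maxTokens;              // caps completion length
  }
  if (opts.responseFormat === 'JSON') {
    body.response_format = { type: 'json_object' }; // ask for valid JSON output
  }
  return body;
}

const body = buildRequestBody(
  'openai/gpt-4o', // placeholder model ID
  [{ role: 'user', content: 'Hello' }],
  { samplingTemperature: 0.7, maxTokens: 256, responseFormat: 'JSON' }
);
console.log(body.max_tokens); // 256
```

The Timeout and Max Retries options govern the HTTP request itself rather than the request body, so they don't appear in the payload.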
# xAI Grok Chat Model node Use the xAI Grok Chat Model node to access xAI Grok's large language models for conversational AI and text generation tasks. On this page, you'll find the node parameters for the xAI Grok Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/xai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model which will generate the completion. n8n dynamically loads available models from the xAI Grok API. Learn more in the [xAI Grok model documentation](https://docs.x.ai/docs/models). ## Node options - **Frequency Penalty**: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself. - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. Most models have a context length of 2048 tokens with the newest models supporting up to 32,768 tokens. - **Response Format**: Choose **Text** or **JSON**. **JSON** ensures the model returns valid JSON. - **Presence Penalty**: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. 
A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Timeout**: Enter the maximum request time in milliseconds. - **Max Retries**: Enter the maximum number of times to retry a request. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **AI Chatbot Call Center: Demo Call Center (Production-Ready, Part 2)** by ChatPayLabs [View template details](https://n8n.io/workflows/4045-ai-chatbot-call-center-demo-call-center-production-ready-part-2/) **Homey Pro - Smarthouse integration with LLM** by Ole Andre Torjussen [View template details](https://n8n.io/workflows/4058-homey-pro-smarthouse-integration-with-llm/) [Browse xAI Grok Chat Model integration templates](https://n8n.io/integrations/xai-grok-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [xAI Grok's API documentation](https://docs.x.ai/docs/api-reference) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Cohere Model node Use the Cohere Model node to use Cohere's models. On this page, you'll find the node parameters for the Cohere Model node, and links to more resources. This node lacks tools support, so it won't work with the [AI Agent](../../root-nodes/n8n-nodes-langchain.agent/) node. Instead, connect it with the [Basic LLM Chain](../../root-nodes/n8n-nodes-langchain.chainllm/) node. Credentials You can find authentication information for this node [here](../../../credentials/cohere/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. 
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ## Templates and examples **Automate sales cold calling pipeline with Apify, GPT-4o, and WhatsApp** by Khairul Muhtadin [View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/) **Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG** by Ezema Kingsley Chibuzo [View template details](https://n8n.io/workflows/5589-create-a-multi-modal-telegram-support-bot-with-gpt-4-and-supabase-rag/) **Build a Document QA System with RAG using Milvus, Cohere, and OpenAI for Google Drive** by Aitor | 1Node [View template details](https://n8n.io/workflows/3848-build-a-document-qa-system-with-rag-using-milvus-cohere-and-openai-for-google-drive/) [Browse Cohere Model integration templates](https://n8n.io/integrations/cohere-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Cohere documentation](https://js.langchain.com/docs/integrations/llms/cohere/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
# Lemonade Model node Use the Lemonade Model node to generate text completions using language models hosted and managed by a Lemonade server. This node is a simple LangChain-compatible language model root node suitable for text completion tasks within n8n workflows. On this page, you'll find a list of operations the Lemonade Model node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/lemonade/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters Configure the node with the following parameters. ### Model The model which will generate the completion. Models are loaded and managed through the Lemonade server; select the model you want to use from the list provided by the node. ## Node options Use these options to further refine the node's behavior. ### Sampling Temperature Controls the randomness of the generated text. Lower values make the output more focused and deterministic, while higher values make it more diverse and random. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 0.7 | ### Top P Controls which words the model can choose from when generating text. Lower values progressively remove the least likely options, so the model can only pick from a smaller, higher-confidence pool. 
| Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 1 | ### Frequency Penalty Adjusts the penalty for tokens that have already appeared in the generated text. Positive values discourage repetition, negative values encourage it. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 0 | ### Presence Penalty Adjusts the penalty for tokens based on their presence in the generated text so far. Positive values penalize tokens that have already appeared, encouraging diversity. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | 0 | ### Max Tokens to Generate The maximum number of tokens to generate. Set to -1 for no limit. Be cautious when setting this to a large value, as it can lead to very long outputs. | Property | Value | | -------- | ------ | | Type | number | | Required | no | | Default | -1 | ### Stop Sequences Comma-separated list of sequences where the model will stop generating text. | Property | Value | | -------- | ------ | | Type | string | | Required | no | | Default | "" | ## Templates and examples **AI agent chat** by n8n Team [View template details](https://n8n.io/workflows/1954-ai-agent-chat/) **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Angie, personal AI assistant with Telegram voice and text** by Derek Cheung [View template details](https://n8n.io/workflows/2462-angie-personal-ai-assistant-with-telegram-voice-and-text/) [Browse Lemonade Model integration templates](https://n8n.io/integrations/lemonade-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Lemonade Server's documentation](https://lemonade-server.ai/docs/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
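The Lemonade node's option defaults and its comma-separated Stop Sequences format can be sketched as follows. The option names and the parsing helper here are hypothetical illustrations of the documented values, not the node's internals:

```javascript
// Sketch: the documented option defaults for the Lemonade Model node.
const defaults = {
  samplingTemperature: 0.7,
  topP: 1,
  frequencyPenalty: 0,
  presencePenalty: 0,
  maxTokens: -1, // -1 means no limit on generated tokens
  stopSequences: '',
};

// Splitting the comma-separated Stop Sequences string into a list,
// dropping empty entries so the default "" yields no stop sequences.
const parseStopSequences = (value) =>
  value.split(',').map((s) => s.trim()).filter((s) => s.length > 0);

console.log(parseStopSequences('###, END')); // ['###', 'END']
console.log(parseStopSequences(''));         // []
```

With the default empty string, generation only stops at the token limit or when the model emits an end-of-sequence token.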
# Hugging Face Inference Model node Use the Hugging Face Inference Model node to use Hugging Face's models. On this page, you'll find the node parameters for the Hugging Face Inference Model node, and links to more resources. This node lacks tools support, so it won't work with the [AI Agent](../../root-nodes/n8n-nodes-langchain.agent/) node. Instead, connect it with the [Basic LLM Chain](../../root-nodes/n8n-nodes-langchain.chainllm/) node. Credentials You can find authentication information for this node [here](../../../credentials/huggingface/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model to use to generate the completion. ## Node options - **Custom Inference Endpoint**: Enter a custom inference endpoint URL. - **Frequency Penalty**: Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself. - **Maximum Number of Tokens**: Enter the maximum number of tokens used, which sets the completion length. - **Presence Penalty**: Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics. - **Sampling Temperature**: Use this option to control the randomness of the sampling process. 
A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Top K**: Enter the number of token choices the model uses to generate the next token. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples **Use an open-source LLM (via HuggingFace)** by n8n Team [View template details](https://n8n.io/workflows/1980-use-an-open-source-llm-via-huggingface/) **🤖 AI content generation for Auto Service 🚘 Automate your social media📲!** by N8ner [View template details](https://n8n.io/workflows/4600-ai-content-generation-for-auto-service-automate-your-social-media/) **Reduce LLM Costs with Semantic Caching using Redis Vector Store and HuggingFace** by Tihomir Mateev [View template details](https://n8n.io/workflows/10887-reduce-llm-costs-with-semantic-caching-using-redis-vector-store-and-huggingface/) [Browse Hugging Face Inference Model integration templates](https://n8n.io/integrations/hugging-face-inference-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Hugging Face Inference Model documentation](https://js.langchain.com/docs/integrations/llms/huggingface_inference/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Chat Memory Manager node The Chat Memory Manager node manages chat message [memories](../../../../../glossary/#ai-memory) within your workflows. Use this node to load, insert, and delete chat messages in an in-memory [vector store](../../../../../glossary/#ai-vector-store). This node is useful when you: - Can't add a memory node directly. - Need to do more complex memory management, beyond what the memory nodes offer. For example, you can add this node to check the memory size of the Agent node's response, and reduce it if needed. 
- Want to inject messages to the AI that look like user messages, to give the AI more context. On this page, you'll find a list of operations that the Chat Memory Manager node supports, along with links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Operation Mode**: Choose between **Get Many Messages**, **Insert Messages**, and **Delete Messages** operations. - **Insert Mode**: Available in **Insert Messages** mode. Choose from: - **Insert Messages**: Insert messages alongside existing messages. - **Override All Messages**: Replace current memory. - **Delete Mode**: Available in **Delete Messages** mode. Choose from: - **Last N**: Delete the last N messages. - **All Messages**: Delete all messages from memory. - **Chat Messages**: Available in **Insert Messages** mode. Define the chat messages to insert into the memory, including: - **Type Name or ID**: Set the message type. Select one of: - **AI**: Use this for messages from the AI. - **System**: Add a message containing instructions for the AI. - **User**: Use this for messages from the user. This message type is sometimes called the 'human' message in other AI tools and guides. - **Message**: Enter the message contents. - **Hide Message in Chat**: Select whether n8n should display the message to the user in the chat UI (turned off) or not (turned on). 
- **Messages Count**: Available in **Delete Messages** mode when you select **Last N**. Enter the number of latest messages to delete. - **Simplify Output**: Available in **Get Many Messages** mode. Turn on to simplify the output to include only the sender (AI, user, or system) and the text. ## Templates and examples **Chat with OpenAI Assistant (by adding a memory)** by David Roberts [View template details](https://n8n.io/workflows/2098-chat-with-openai-assistant-by-adding-a-memory/) **Personal Life Manager with Telegram, Google Services & Voice-Enabled AI** by Derek Cheung [View template details](https://n8n.io/workflows/8237-personal-life-manager-with-telegram-google-services-and-voice-enabled-ai/) **AI Voice Chat using Webhook, Memory Manager, OpenAI, Google Gemini & ElevenLabs** by Ayoub [View template details](https://n8n.io/workflows/2405-ai-voice-chat-using-webhook-memory-manager-openai-google-gemini-and-elevenlabs/) [Browse Chat Memory Manager integration templates](https://n8n.io/integrations/chat-memory-manager/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Memory documentation](https://langchain-ai.github.io/langgraphjs/concepts/memory/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # MongoDB Chat Memory node Use the MongoDB Chat Memory node to use MongoDB as a [memory](../../../../../glossary/#ai-memory) server for storing chat history. On this page, you'll find a list of operations the MongoDB Chat Memory node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/mongodb/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. 
You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Session Key**: Enter the key to use to store the memory in the workflow data. - **Collection Name**: Enter the name of the collection to store the chat history in. The system will create the collection if it doesn't exist. - **Database Name**: Enter the name of the database to store the chat history in. If not provided, the database from credentials will be used. - **Context Window Length**: Enter the number of previous interactions to consider for context. ## Related resources Refer to [LangChain's MongoDB Chat Message History documentation](https://js.langchain.com/docs/integrations/memory/mongodb) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Single memory instance If you add more than one MongoDB Chat Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. # Motorhead node Deprecated The Motorhead project is no longer maintained. This node is deprecated, and will be removed in a future version. Use the Motorhead node to use Motorhead as a [memory](../../../../../glossary/#ai-memory) server. On this page, you'll find a list of operations the Motorhead node supports, and links to more resources. 
Credentials You can find authentication information for this node [here](../../../credentials/motorhead/). ## Node parameters - **Session ID**: Enter the ID to use to store the memory in the workflow data. ## Node reference Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Templates and examples [Browse Motorhead integration templates](https://n8n.io/integrations/motorhead/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Motorhead documentation](https://js.langchain.com/docs/integrations/memory/motorhead_memory) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Single memory instance If you add more than one Motorhead node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. # Postgres Chat Memory node Use the Postgres Chat Memory node to use Postgres as a [memory](../../../../../glossary/#ai-memory) server for storing chat history. 
On this page, you'll find a list of operations the Postgres Chat Memory node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/postgres/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Session Key**: Enter the key to use to store the memory in the workflow data. - **Table Name**: Enter the name of the table to store the chat history in. The system will create the table if it doesn't exist. - **Context Window Length**: Enter the number of previous interactions to consider for context. ## Related resources Refer to [LangChain's Postgres Chat Message History documentation](https://js.langchain.com/docs/integrations/memory/postgres) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Single memory instance If you add more than one Postgres Chat Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. 
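The single-memory-instance behavior described above comes down to the session key: nodes that use the same key read and write the same history. A minimal sketch, using a hypothetical in-process store rather than n8n's actual storage:

```javascript
// Sketch: why memory nodes sharing a session key share chat history.
// Hypothetical store keyed by session key — illustrative only.
const store = new Map();

function memoryFor(sessionKey) {
  if (!store.has(sessionKey)) store.set(sessionKey, []);
  return store.get(sessionKey);
}

// Two nodes using the same key append to the same history...
memoryFor('chat-a').push({ role: 'user', text: 'Hi' });
memoryFor('chat-a').push({ role: 'ai', text: 'Hello!' });

console.log(memoryFor('chat-a').length); // 2 — same key, shared instance
console.log(memoryFor('chat-b').length); // 0 — different key, separate memory
```

This is also why a destructive operation such as "override all messages" affects every node bound to that session key.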
# Redis Chat Memory node Use the Redis Chat Memory node to use Redis as a [memory](../../../../../glossary/#ai-memory) server. On this page, you'll find a list of operations the Redis Chat Memory node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/redis/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Session Key**: Enter the key to use to store the memory in the workflow data. - **Session Time To Live**: Use this parameter to make the session expire after a given number of seconds. - **Context Window Length**: Enter the number of previous interactions to consider for context. 
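The Session Time To Live option means entries older than the configured number of seconds are treated as expired. A minimal sketch of that semantics with a hypothetical helper (Redis itself handles expiry server-side):

```javascript
// Sketch: session TTL semantics — a session older than ttlSeconds
// counts as expired. Illustrative only, not the node's internals.
function isExpired(createdAtMs, ttlSeconds, nowMs) {
  return nowMs - createdAtMs > ttlSeconds * 1000;
}

console.log(isExpired(0, 60, 30_000)); // false — 30s into a 60s TTL
console.log(isExpired(0, 60, 61_000)); // true  — past the 60s TTL
```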
## Templates and examples **Build your own N8N Workflows MCP Server** by Jimleuk [View template details](https://n8n.io/workflows/3770-build-your-own-n8n-workflows-mcp-server/) **Conversational Interviews with AI Agents and n8n Forms** by Jimleuk [View template details](https://n8n.io/workflows/2566-conversational-interviews-with-ai-agents-and-n8n-forms/) **Telegram AI Bot-to-Human Handoff for Sales Calls** by Jimleuk [View template details](https://n8n.io/workflows/3350-telegram-ai-bot-to-human-handoff-for-sales-calls/) [Browse Redis Chat Memory integration templates](https://n8n.io/integrations/redis-chat-memory/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Redis Chat Memory documentation](https://js.langchain.com/docs/integrations/memory/redis) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Single memory instance If you add more than one Redis Chat Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. # Xata node Use the Xata node to use Xata as a [memory](../../../../../glossary/#ai-memory) server. On this page, you'll find a list of operations the Xata node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/xata/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. 
You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Session ID**: Enter the ID to use to store the memory in the workflow data. - **Context Window Length**: Enter the number of previous interactions to consider for context. ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Scrape and summarize webpages with AI** by n8n Team [View template details](https://n8n.io/workflows/1951-scrape-and-summarize-webpages-with-ai/) **Pulling data from services that n8n doesn’t have a pre-built integration for** by Jonathan [View template details](https://n8n.io/workflows/1748-pulling-data-from-services-that-n8n-doesnt-have-a-pre-built-integration-for/) [Browse Xata integration templates](https://n8n.io/integrations/xata/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Xata documentation](https://js.langchain.com/docs/integrations/memory/xata) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Single memory instance If you add more than one Xata node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. 
# Zep node Deprecated This node is deprecated, and will be removed in a future version. Use the Zep node to use Zep as a [memory](../../../../../glossary/#ai-memory) server. On this page, you'll find a list of operations the Zep node supports, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/zep/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Session ID**: Enter the ID to use to store the memory in the workflow data. 
## Templates and examples **Build a Personalized Shopping Assistant with Zep Memory, GPT-4 and Google Sheets** by InfyOm Technologies [View template details](https://n8n.io/workflows/7363-build-a-personalized-shopping-assistant-with-zep-memory-gpt-4-and-google-sheets/) **Send multi-stage customer project emails from Google Sheets with Zoho ZeptoMail** by Panth1823 [View template details](https://n8n.io/workflows/14994-send-multi-stage-customer-project-emails-from-google-sheets-with-zoho-zeptomail/) **Send personalized cold email follow-ups from Google Sheets with ZeptoMail** by Panth1823 [View template details](https://n8n.io/workflows/14813-send-personalized-cold-email-follow-ups-from-google-sheets-with-zeptomail/) [Browse Zep integration templates](https://n8n.io/integrations/zep/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Zep documentation](https://js.langchain.com/docs/integrations/memory/zep_memory) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Single memory instance If you add more than one Zep node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. # Model Selector The Model Selector node dynamically selects one of the connected language models during workflow execution based on a set of defined conditions. This enables implementing fallback mechanisms for error handling or choosing the optimal model for specific tasks. This page covers node parameters for the Model Selector node and includes links to related resources. 
Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Number of Inputs Specifies the number of input connections available for attaching language models. ### Rules Each rule defines the model to use when specific conditions match. The Model Selector node evaluates rules sequentially, starting from the first input, and stops evaluation as soon as it finds a match. This means that if multiple rules would match, n8n will only use the model defined by the first matching rule. ## Templates and examples **AI Orchestrator: dynamically Selects Models Based on Input Type** by Davide Boizza [View template details](https://n8n.io/workflows/7004-ai-orchestrator-dynamically-selects-models-based-on-input-type/) **Analyze Google Ads search terms with AI and send wastage alerts** by SpaGreen Creative [View template details](https://n8n.io/workflows/14449-analyze-google-ads-search-terms-with-ai-and-send-wastage-alerts/) **Dynamic AI Model Selector with GDPR Compliance via Requesty and Google Sheets** by Stefan [View template details](https://n8n.io/workflows/5862-dynamic-ai-model-selector-with-gdpr-compliance-via-requesty-and-google-sheets/) [Browse Model Selector integration templates](https://n8n.io/integrations/model-selector/), or [search all templates](https://n8n.io/workflows/) ## Related resources View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
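The first-match evaluation described under **Rules** can be sketched in plain JavaScript. The rule conditions and model names below are hypothetical, chosen only to show the ordering semantics:

```javascript
// Hypothetical rules: evaluated in order, the first match wins.
const rules = [
  { matches: text => text.length > 4000, model: "input-1-model" },
  { matches: text => /translate/i.test(text), model: "input-2-model" },
];

function selectModel(text, fallback = "input-3-model") {
  // Stop at the first rule whose condition matches; later rules are ignored.
  const rule = rules.find(r => r.matches(text));
  return rule ? rule.model : fallback;
}

// selectModel("Translate this sentence") → "input-2-model"
// selectModel("hi") → "input-3-model"
```

Note that a very long input containing "translate" would still route to `input-1-model`, because the length rule is evaluated first.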
# Auto-fixing Output Parser node The Auto-fixing Output Parser node wraps another output parser. If the first one fails, it calls out to another LLM to fix any errors. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Templates and examples **Notion AI Assistant Generator** by Max Tkacz [View template details](https://n8n.io/workflows/2415-notion-ai-assistant-generator/) **Proxmox AI Agent with n8n and Generative AI Integration** by Amjid Ali [View template details](https://n8n.io/workflows/2749-proxmox-ai-agent-with-n8n-and-generative-ai-integration/) **Handling Appointment Leads and Follow-up With Twilio, Cal.com and AI** by Jimleuk [View template details](https://n8n.io/workflows/2342-handling-appointment-leads-and-follow-up-with-twilio-calcom-and-ai/) [Browse Auto-fixing Output Parser integration templates](https://n8n.io/integrations/auto-fixing-output-parser/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's output parser documentation](https://js.langchain.com/docs/concepts/output_parsers/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Item List Output Parser node Use the Item List Output Parser node to return a list of items with a specific length and separator. 
Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Number of Items**: Enter the maximum items to return. Set to `-1` for unlimited items. - **Separator**: Select the separator used to split the results into separate items. Defaults to a new line. ## Templates and examples **Breakdown Documents into Study Notes using Templating MistralAI and Qdrant** by Jimleuk [View template details](https://n8n.io/workflows/2339-breakdown-documents-into-study-notes-using-templating-mistralai-and-qdrant/) **Automate Your RFP Process with OpenAI Assistants** by Jimleuk [View template details](https://n8n.io/workflows/2321-automate-your-rfp-process-with-openai-assistants/) **Explore n8n Nodes in a Visual Reference Library** by I versus AI [View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/) [Browse Item List Output Parser integration templates](https://n8n.io/integrations/item-list-output-parser/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's output parser documentation](https://js.langchain.com/docs/concepts/output_parsers) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
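The two options above amount to a split-and-limit step. A rough JavaScript sketch of that behavior, assumed from the option descriptions rather than taken from the node's source:

```javascript
// Split LLM output on a separator, then cap the number of items.
// numberOfItems = -1 means unlimited, matching the node's semantics.
function parseItemList(text, numberOfItems = -1, separator = "\n") {
  const items = text
    .split(separator)
    .map(item => item.trim())
    .filter(item => item.length > 0);
  return numberOfItems === -1 ? items : items.slice(0, numberOfItems);
}

// parseItemList("red\ngreen\nblue", 2) → ["red", "green"]
```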
# Reranker Cohere The Reranker Cohere node allows you to [rerank](../../../../../glossary/#ai-reranking) the resulting chunks from a [vector store](../../../../../glossary/#ai-vector-store). You can connect this node to a vector store. The reranker reorders the list of documents retrieved from a vector store for a given query in order of descending relevance. On this page, you'll find the node parameters for the Reranker Cohere node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/cohere/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Model Choose the reranking model to use. You can find out more about the available models in [Cohere's model documentation](https://docs.cohere.com/docs/models#rerank). 
## Templates and examples **Automate sales cold calling pipeline with Apify, GPT-4o, and WhatsApp** by Khairul Muhtadin [View template details](https://n8n.io/workflows/5449-automate-sales-cold-calling-pipeline-with-apify-gpt-4o-and-whatsapp/) **Create a Multi-Modal Telegram Support Bot with GPT-4 and Supabase RAG** by Ezema Kingsley Chibuzo [View template details](https://n8n.io/workflows/5589-create-a-multi-modal-telegram-support-bot-with-gpt-4-and-supabase-rag/) **Chat with Google Drive documents using OpenAI and Pinecone RAG search** by Pinecone [View template details](https://n8n.io/workflows/11870-chat-with-google-drive-documents-using-openai-and-pinecone-rag-search/) [Browse Reranker Cohere integration templates](https://n8n.io/integrations/reranker-cohere/), or [search all templates](https://n8n.io/workflows/) ## Related resources View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Contextual Compression Retriever node The Contextual Compression Retriever node improves the answers returned from [vector store](../../../../../glossary/#ai-vector-store) document similarity searches by taking into account the context from the query. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. 
## Templates and examples **Generate Contextual YouTube Comments Automatically with GPT-4o** by Yaron Been [View template details](https://n8n.io/workflows/4580-generate-contextual-youtube-comments-automatically-with-gpt-4o/) **Dynamic MCP Server Selection with OpenAI GPT-4.1 and Contextual AI Reranker** by Jinash Rouniyar [View template details](https://n8n.io/workflows/8272-dynamic-mcp-server-selection-with-openai-gpt-41-and-contextual-ai-reranker/) **Generate Contextual Recommendations from Slack using Pinecone** by Rahul Joshi [View template details](https://n8n.io/workflows/6018-generate-contextual-recommendations-from-slack-using-pinecone/) [Browse Contextual Compression Retriever integration templates](https://n8n.io/integrations/contextual-compression-retriever/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's contextual compression retriever documentation](https://js.langchain.com/docs/how_to/contextual_compression/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # MultiQuery Retriever node The MultiQuery Retriever node automates the process of prompt tuning by using an LLM to generate multiple queries from different perspectives for a given user input query. On this page, you'll find the node parameters for the MultiQuery Retriever node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Query Count**: Enter how many different versions of the query to generate. ## Templates and examples [Browse MultiQuery Retriever integration templates](https://n8n.io/integrations/multiquery-retriever/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's retriever conceptual documentation](https://js.langchain.com/docs/concepts/retrievers) and [LangChain's multiquery retriever API documentation](https://v03.api.js.langchain.com/classes/langchain.retrievers_multi_query.MultiQueryRetriever.html) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Vector Store Retriever node Use the Vector Store Retriever node to retrieve documents from a [vector store](../../../../../glossary/#ai-vector-store). On this page, you'll find the node parameters for the Vector Store Retriever node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Limit**: Enter the maximum number of results to return. 
## Templates and examples **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) **AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow** by Derek Cheung [View template details](https://n8n.io/workflows/2183-ai-crew-to-automate-fundamental-stock-analysis-qanda-workflow/) **Advanced AI Demo (Presented at AI Developers #14 meetup)** by Max Tkacz [View template details](https://n8n.io/workflows/2358-advanced-ai-demo-presented-at-ai-developers-14-meetup/) [Browse Vector Store Retriever integration templates](https://n8n.io/integrations/vector-store-retriever/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's vector store retriever documentation](https://js.langchain.com/docs/how_to/vectorstore_retriever/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Workflow Retriever node Use the Workflow Retriever node to retrieve data from an n8n workflow for use in a Retrieval QA Chain or another Retriever node. On this page, you'll find the node parameters for the Workflow Retriever node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Source Tell n8n which workflow to call. 
You can choose either: - **Database** and enter a workflow ID. - **Parameter** and copy in a complete [workflow JSON](../../../../../workflows/export-import/). ### Workflow values Set values to pass to the workflow you're calling. These values appear in the output data of the trigger node in the workflow you call. You can access these values in expressions in the workflow. For example, if you have: - **Workflow Values** with a **Name** of `myCustomValue` - A workflow with an Execute Sub-workflow Trigger node as its trigger The expression to access the value of `myCustomValue` is `{{ $('Execute Sub-workflow Trigger').item.json.myCustomValue }}`. ## Templates and examples **AI Crew to Automate Fundamental Stock Analysis - Q&A Workflow** by Derek Cheung [View template details](https://n8n.io/workflows/2183-ai-crew-to-automate-fundamental-stock-analysis-qanda-workflow/) **Build a PDF Document RAG System with Mistral OCR, Qdrant and Gemini AI** by Davide Boizza [View template details](https://n8n.io/workflows/4400-build-a-pdf-document-rag-system-with-mistral-ocr-qdrant-and-gemini-ai/) **AI: Ask questions about any data source (using the n8n workflow retriever)** by n8n Team [View template details](https://n8n.io/workflows/1958-ai-ask-questions-about-any-data-source-using-the-n8n-workflow-retriever/) [Browse Workflow Retriever integration templates](https://n8n.io/integrations/workflow-retriever/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's general retriever documentation](https://js.langchain.com/docs/concepts/retrievers/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Character Text Splitter node Use the Character Text Splitter node to split document data based on characters. On this page, you'll find the node parameters for the Character Text Splitter node, and links to more resources. 
Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Separator**: Select the separator used to split the document into separate items. - **Chunk Size**: Enter the number of characters in each chunk. - **Chunk Overlap**: Enter how much overlap to have between chunks. ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Scrape and summarize webpages with AI** by n8n Team [View template details](https://n8n.io/workflows/1951-scrape-and-summarize-webpages-with-ai/) **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) [Browse Character Text Splitter integration templates](https://n8n.io/integrations/character-text-splitter/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's text splitter documentation](https://js.langchain.com/docs/concepts/text_splitters) and [LangChain's API documentation for character text splitting](https://v03.api.js.langchain.com/classes/langchain.text_splitter.CharacterTextSplitter.html) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
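The way **Chunk Size** and **Chunk Overlap** interact can be shown with a simplified JavaScript sketch. This is illustrative only; LangChain's splitter also takes the separator into account when choosing cut points:

```javascript
// Fixed-size character chunks, each sharing `chunkOverlap` characters
// with the previous chunk. Assumes chunkOverlap < chunkSize.
function chunkByCharacters(text, chunkSize, chunkOverlap) {
  const chunks = [];
  const step = chunkSize - chunkOverlap; // how far each chunk advances
  for (let start = 0; start < text.length; start += step) {
    chunks.push(text.slice(start, start + chunkSize));
  }
  return chunks;
}

// chunkByCharacters("abcdefghij", 4, 2) → ["abcd", "cdef", "efgh", "ghij", "ij"]
```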
# Recursive Character Text Splitter node The Recursive Character Text Splitter node splits document data recursively, trying to keep paragraphs, then sentences, then words together for as long as possible. On this page, you'll find the node parameters for the Recursive Character Text Splitter node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Chunk Size**: Enter the number of characters in each chunk. - **Chunk Overlap**: Enter how much overlap to have between chunks. 
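The recursive strategy can be sketched as follows. This is a deliberately simplified illustration: it only shows the separator hierarchy (paragraphs, then lines, then words, then characters), while LangChain's real splitter also merges adjacent pieces back up toward the chunk size and applies the overlap:

```javascript
// Try coarse separators first; only fall back to finer ones when a piece
// is still larger than chunkSize.
const SEPARATORS = ["\n\n", "\n", " ", ""];

function recursiveSplit(text, chunkSize, seps = SEPARATORS) {
  if (text.length <= chunkSize) return [text];
  const [sep, ...rest] = seps;
  const parts = sep === "" ? text.split("") : text.split(sep);
  return parts.flatMap(part =>
    part.length > chunkSize && rest.length > 0
      ? recursiveSplit(part, chunkSize, rest)
      : [part]
  );
}

// recursiveSplit("aaa bbb\n\nccc", 4) → ["aaa", "bbb", "ccc"]
```

Because paragraph breaks are tried before spaces, `ccc` survives as a whole piece while the oversized `aaa bbb` paragraph is split further by spaces.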
## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Scrape and summarize webpages with AI** by n8n Team [View template details](https://n8n.io/workflows/1951-scrape-and-summarize-webpages-with-ai/) **Ask questions about a PDF using AI** by David Roberts [View template details](https://n8n.io/workflows/1960-ask-questions-about-a-pdf-using-ai/) [Browse Recursive Character Text Splitter integration templates](https://n8n.io/integrations/recursive-character-text-splitter/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's text splitter documentation](https://js.langchain.com/docs/concepts/text_splitters) and [LangChain's recursively split by character documentation](https://v03.api.js.langchain.com/classes/langchain.text_splitter.RecursiveCharacterTextSplitter.html) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Token Splitter node The Token Splitter node splits a raw text string by first converting the text into BPE tokens, then splitting these tokens into chunks, and finally converting the tokens within each chunk back into text. On this page, you'll find the node parameters for the Token Splitter node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Chunk Size**: Enter the number of tokens in each chunk. - **Chunk Overlap**: Enter how much overlap to have between chunks. ## Templates and examples **🤖 AI Powered RAG Chatbot for Your Docs + Google Drive + Gemini + Qdrant** by Joseph LePage [View template details](https://n8n.io/workflows/2982-ai-powered-rag-chatbot-for-your-docs-google-drive-gemini-qdrant/) **AI Voice Chatbot with ElevenLabs & OpenAI for Customer Service and Restaurants** by Davide Boizza [View template details](https://n8n.io/workflows/2846-ai-voice-chatbot-with-elevenlabs-and-openai-for-customer-service-and-restaurants/) **Complete business WhatsApp AI-Powered RAG Chatbot using OpenAI** by Davide Boizza [View template details](https://n8n.io/workflows/2845-complete-business-whatsapp-ai-powered-rag-chatbot-using-openai/) [Browse Token Splitter integration templates](https://n8n.io/integrations/token-splitter/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's token documentation](https://js.langchain.com/docs/concepts/tokens/) and [LangChain's text splitter documentation](https://js.langchain.com/docs/concepts/text_splitters/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # AI Agent Tool node The AI Agent Tool node allows a root-level [agent](../../../../../glossary/#ai-agent) in your workflow to call other agents as tools to simplify multi-agent orchestration. The [primary agent](../../root-nodes/n8n-nodes-langchain.agent/tools-agent/) can supervise and delegate work to AI Agent Tool nodes that specialize in different tasks and knowledge. This allows you to use multiple agents in a single workflow without the complexity of managing context and variables that sub-workflows require. 
You can nest AI Agent Tool nodes into multiple layers for more complex multi-tiered use cases. On this page, you'll find the node parameters for the AI Agent Tool node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters Configure the AI Agent Tool node using these parameters: - **Description**: Give a description to the LLM of this agent's purpose and scope of responsibility. A good, specific description tells the parent agent when to delegate tasks to this agent for processing. - **Prompt (User Message)**: The prompt to the LLM explaining what actions to perform and what information to return. - **Require Specific Output Format**: Whether you want the node to require a specific output format. When turned on, n8n prompts you to connect one of the output parsers [described on the main agent page](../../root-nodes/n8n-nodes-langchain.agent/tools-agent/#require-specific-output-format). - **Enable Fallback Model**: Whether to enable a fallback model. When enabled, n8n prompts you to connect a backup chat model to use in case the primary model fails or isn't available. ## Node options Refine the AI Agent Tool node's behavior using these options: - **System Message**: A message to send to the agent before the conversation starts. 
- **Max Iterations**: The maximum number of times the model should run to generate a response before stopping. - **Return Intermediate Steps**: Whether to include intermediate steps the agent took in the final output. - **Automatically Passthrough Binary Images**: Whether binary images should be automatically passed through to the agent as image type messages. - **Batch Processing**: Whether to enable the following batch processing options for rate limiting: - **Batch Size**: The number of items to process in parallel. This helps with rate limiting but may impact the log output ordering. - **Delay Between Batches**: The number of milliseconds to wait between batches. ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram** by Dr. Firas [View template details](https://n8n.io/workflows/5338-generate-ai-viral-videos-with-seedance-and-upload-to-tiktok-youtube-and-instagram/) **AI agent that can scrape webpages** by Eduard [View template details](https://n8n.io/workflows/2006-ai-agent-that-can-scrape-webpages/) [Browse AI Agent Tool integration templates](https://n8n.io/integrations/ai-agent-tool/), or [search all templates](https://n8n.io/workflows/) ## Dynamic parameters for tools with `$fromAI()` To learn how to dynamically populate parameters for app node tools, refer to [Let AI specify tool parameters with `$fromAI()`](../../../../../advanced-ai/examples/using-the-fromai-function/). # Calculator node The Calculator node is a [tool](../../../../../glossary/#ai-tool) that allows an [agent](../../../../../glossary/#ai-agent) to run mathematical calculations. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. 
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Templates and examples **Build Your First AI Data Analyst Chatbot** by Solomon [View template details](https://n8n.io/workflows/3050-build-your-first-ai-data-analyst-chatbot/) **Chat with OpenAI Assistant (by adding a memory)** by David Roberts [View template details](https://n8n.io/workflows/2098-chat-with-openai-assistant-by-adding-a-memory/) **AI marketing report (Google Analytics & Ads, Meta Ads), sent via email/Telegram** by Friedemann Schuetz [View template details](https://n8n.io/workflows/2783-ai-marketing-report-google-analytics-and-ads-meta-ads-sent-via-emailtelegram/) [Browse Calculator integration templates](https://n8n.io/integrations/calculator/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Custom Code Tool node Use the Custom Code Tool node to write code that an [agent](../../../../../glossary/#ai-agent) can run. On this page, you'll find the node parameters for the Custom Code Tool node and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. 
You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Description Give your custom code a description. This tells the agent when to use this tool. For example: > Call this tool to get a random color. The input should be a string with comma separated names of colors to exclude. ### Language You can use JavaScript or Python. ### JavaScript / Python box Write the code here. You can access the tool input using `query`. For example, to take the input string and lowercase it: ``` let myString = query; return myString.toLowerCase(); ``` ## Templates and examples **AI: Conversational agent with custom tool written in JavaScript** by n8n Team [View template details](https://n8n.io/workflows/1963-ai-conversational-agent-with-custom-tool-written-in-javascript/) **Custom LangChain agent written in JavaScript** by n8n Team [View template details](https://n8n.io/workflows/1955-custom-langchain-agent-written-in-javascript/) **OpenAI assistant with custom tools** by David Roberts [View template details](https://n8n.io/workflows/2025-openai-assistant-with-custom-tools/) [Browse Custom Code Tool integration templates](https://n8n.io/integrations/code-tool/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. 
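Continuing the random-color description from the example above, a tool body for it might look like the following. This is an illustrative sketch: in the Custom Code Tool node, `query` is supplied by the agent automatically, and you `return` the result directly; it's defined here only so the snippet runs standalone:

```javascript
// Sketch of a tool body for the example description above.
// In the node, `query` is provided by the agent; defined here for demo only.
const query = "red, blue";

const colors = ["red", "green", "blue", "yellow", "purple"];

// Parse the comma-separated exclusion list from the agent's input.
const exclude = query.split(",").map(c => c.trim().toLowerCase());
const allowed = colors.filter(c => !exclude.includes(c));

// In the node you would end with: return allowed[Math.floor(...)];
const result = allowed[Math.floor(Math.random() * allowed.length)];
```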
# HTTP Request Tool node Legacy tool version New instances of the HTTP Request tool node that you add to workflows use the standard [HTTP Request](../../../core-nodes/n8n-nodes-base.httprequest/) node as a tool. This page describes the legacy, standalone HTTP Request tool node. You can identify which tool version is in your workflow by checking if the node has an **Add option** property when you open the node on the canvas. If that property is present, you're using the new version, not the one described on this page. The HTTP Request tool works just like the [HTTP Request](../../../core-nodes/n8n-nodes-base.httprequest/) node, but it's designed to be used with an [AI agent](../../../../../glossary/#ai-agent) as a tool to collect information from a website or API. On this page, you'll find a list of operations the HTTP Request node supports and links to more resources. Credentials Refer to [HTTP Request credentials](../../../credentials/httprequest/) for guidance on setting up authentication. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. 
## Templates and examples [Browse HTTP Request Tool integration templates](https://n8n.io/integrations/http-request-tool/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # MCP Client Tool node The MCP Client Tool node is a [Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) client, allowing you to use the tools exposed by an external MCP server. You can connect the MCP Client Tool node to your models to call external tools with n8n agents. Credentials The MCP Client Tool node supports [Bearer](../../../credentials/httprequest/#using-bearer-auth), generic [header](../../../credentials/httprequest/#using-header-auth), and [OAuth2](../../../credentials/httprequest/#using-oauth2) authentication methods. ## Node parameters Configure the node with the following parameters. - **SSE Endpoint**: The SSE endpoint for the MCP server you want to connect to. - **Authentication**: The authentication method to use with your MCP server. The MCP tool supports [bearer](../../../credentials/httprequest/#using-bearer-auth), generic [header](../../../credentials/httprequest/#using-header-auth), and [OAuth2](../../../credentials/httprequest/#using-oauth2) authentication. Select **None** to attempt to connect without authentication. - **Tools to Include**: Choose which tools you want to expose to the AI Agent: - **All**: Exposes all of the tools provided by the MCP server. - **Selected**: Activates a **Tools to Include** parameter where you can select the tools you want to expose to the AI Agent. - **All Except**: Activates a **Tools to Exclude** parameter where you can select the tools you want to avoid sharing with the AI Agent.
The AI Agent will have access to all of the MCP server's tools that aren't selected. ## Templates and examples **Build an MCP Server with Google Calendar and Custom Functions** by Solomon [View template details](https://n8n.io/workflows/3514-build-an-mcp-server-with-google-calendar-and-custom-functions/) **Build your own N8N Workflows MCP Server** by Jimleuk [View template details](https://n8n.io/workflows/3770-build-your-own-n8n-workflows-mcp-server/) **Build a Personal Assistant with Google Gemini, Gmail and Calendar using MCP** by Aitor | 1Node [View template details](https://n8n.io/workflows/3905-build-a-personal-assistant-with-google-gemini-gmail-and-calendar-using-mcp/) [Browse MCP Client Tool integration templates](https://n8n.io/integrations/mcp-client-tool/), or [search all templates](https://n8n.io/workflows/) ## Related resources n8n also has an [MCP Server Trigger](../../../core-nodes/n8n-nodes-langchain.mcptrigger/) node that allows you to expose n8n tools to external AI Agents. Refer to the [MCP documentation](https://modelcontextprotocol.io/introduction) and [MCP specification](https://modelcontextprotocol.io/specification/) for more details about the protocol, servers, and clients. Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # SearXNG Tool node The SearXNG Tool node allows you to integrate search capabilities into your workflows using SearXNG. SearXNG aggregates results from multiple search engines without tracking you. On this page, you'll find the node options for the SearXNG Tool node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/searxng/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression.
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Number of Results**: The number of results to retrieve. The default is 10. - **Page Number**: The page number of the search results to retrieve. The default is 1. - **Language**: A two-letter [language code](https://en.wikipedia.org/wiki/List_of_ISO_639_language_codes) to filter search results by language. For example: `en` for English, `fr` for French. The default is `en`. - **Safe Search**: Enables or disables filtering explicit content in the search results. Can be None, Moderate, or Strict. The default is None. ## Running a SearXNG instance This node requires running the SearXNG service on the same network as your n8n instance. Ensure your n8n instance has network access to the SearXNG service. This node requires results in JSON format, which isn't enabled in the default SearXNG configuration. To enable JSON output, add `json` to the `search.formats` section of your SearXNG instance's `settings.yml` file: ``` search: # options available for formats: [html, csv, json, rss] formats: - html - json ``` If the `formats` section isn't there, add it. The exact location of the `settings.yml` file depends on how you installed SearXNG. You can find more information in the [SearXNG configuration documentation](https://docs.searxng.org/admin/installation-searxng.html#configuration). The quality and availability of search results depend on the configuration and health of the SearXNG instance you use.
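To see how the node options above map onto an actual request, here's a sketch that builds a SearXNG JSON search URL by hand. The instance address is an assumption (a local instance on port 8888); the query parameters (`q`, `format`, `pageno`, `language`, `safesearch`) follow SearXNG's search API.

```javascript
// Building a SearXNG JSON search URL manually (illustrative only).
// Assumes a local instance at http://localhost:8888 with the `json`
// format enabled in settings.yml as shown above.
const params = new URLSearchParams({
  q: 'workflow automation',
  format: 'json',    // requires `json` in search.formats
  pageno: '2',       // corresponds to the Page Number option
  language: 'en',    // corresponds to the Language option
  safesearch: '1',   // Safe Search: 0 = None, 1 = Moderate, 2 = Strict
});
const url = `http://localhost:8888/search?${params}`;
// Fetching this URL returns an object with a `results` array.
```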
## Templates and examples [Browse SearXNG Tool integration templates](https://n8n.io/integrations/searxng/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [SearXNG's documentation](https://docs.searxng.org/) for more information about the service. You can also view [LangChain's documentation on their SearXNG integration](https://python.langchain.com/docs/integrations/tools/searx_search/). View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # SerpApi (Google Search) node Deprecated This node is deprecated and will be removed in a future version. Use the verified **SerpApi Official** community node instead. The SerpApi node allows an [agent](../../../../../glossary/#ai-agent) in your workflow to call Google's Search API. On this page, you'll find the node parameters for the SerpApi node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/serp/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node options - **Country**: Enter the country code you'd like to use. Refer to [Google GL Parameter: Supported Google Countries](https://serpapi.com/google-countries) for supported countries and country codes. - **Device**: Select the device to use to get the search results.
- **Explicit Array**: Choose whether to force SerpApi to fetch the Google results even if a cached version is already present (turned on) or not (turned off). - **Google Domain**: Enter the Google Domain to use. Refer to [Supported Google Domains](https://serpapi.com/google-domains) for supported domains. - **Language**: Enter the language code you'd like to use. Refer to [Google HL Parameter: Supported Google Languages](https://serpapi.com/google-languages) for supported languages and language codes. ## Templates and examples **AI agent chat** by n8n Team [View template details](https://n8n.io/workflows/1954-ai-agent-chat/) **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI chatbot that can search the web** by n8n Team [View template details](https://n8n.io/workflows/1959-ai-chatbot-that-can-search-the-web/) [Browse SerpApi (Google Search) integration templates](https://n8n.io/integrations/serpapi/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [SerpApi's documentation](https://serpapi.com/search-api) for more information about the service. You can also view [LangChain's documentation on their SerpApi integration](https://js.langchain.com/docs/integrations/tools/serpapi/). View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Think Tool node The Think Tool node is a [tool](../../../../../glossary/#ai-tool) that invites an [agent](../../../../../glossary/#ai-agent) to do some thinking before answering. Essentially, it allows the agent to reflect on the question by talking to itself before providing an answer. In particular, this is useful for complex queries that require a thoughtful response. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression.
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Templates and examples **Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram** by Dr. Firas [View template details](https://n8n.io/workflows/5338-generate-ai-viral-videos-with-seedance-and-upload-to-tiktok-youtube-and-instagram/) **Generate & Auto-post AI Videos to Social Media with Veo3 and Blotato** by Dr. Firas [View template details](https://n8n.io/workflows/5035-generate-and-auto-post-ai-videos-to-social-media-with-veo3-and-blotato/) **Automated Stock Analysis Reports with Technical & News Sentiment using GPT-4o** by Elay Guez [View template details](https://n8n.io/workflows/3790-automated-stock-analysis-reports-with-technical-and-news-sentiment-using-gpt-4o/) [Browse Think Tool integration templates](https://n8n.io/integrations/think-tool/), or [search all templates](https://n8n.io/workflows/) ### Prompting example As an example, say you ask the agent: > *Jack is looking at Anne. Anne is looking at George. Jack is married, George is not, and we don't know if Anne is married. Is a married person looking at an unmarried person?* The agent will do its thinking with the tool: > *We have the following information: 1. Jack is looking at Anne. 2. Anne is looking at George. 3. Jack is married. 4. George is not married. 5. We don't know if Anne is married or not.* > > *We need to determine if a married person is looking at an unmarried person. 
Let's consider both cases for Anne:* > > *Case 1: Anne is married.* > > - *If Anne is married, then she (married) is looking at George (unmarried). In this case, a married person (Anne) is looking at an unmarried person (George).* > > *Case 2: Anne is unmarried.* > > - *If Anne is unmarried, then Jack (married) is looking at Anne (unmarried). In this case, a married person (Jack) is looking at an unmarried person (Anne).* > > *In both scenarios, we find that a married person is looking at an unmarried person. Therefore, the answer to the question is yes.* and then provide the answer: > *Yes, a married person is looking at an unmarried person. In both possible scenarios regarding Anne's marital status, we find that a married individual (either Jack or Anne) is looking at an unmarried individual (either George or Anne).* ## Related resources Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Vector Store Question Answer Tool node The Vector Store Question Answer node is a [tool](../../../../../glossary/#ai-tool) that allows an [agent](../../../../../glossary/#ai-agent) to summarize results and answer questions based on chunks from a [vector store](../../../../../glossary/#ai-vector-store). On this page, you'll find the node parameters for the Vector Store Question Answer node, and links to more resources. Examples and templates For usage examples and templates to help you get started, refer to n8n's [Vector Store Question Answer Tool integrations](https://n8n.io/integrations/vector-store-tool/) page. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results.
You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Description of Data Enter a description of the data in the vector store. ### Limit The maximum number of results to return. ## How n8n populates the tool description n8n uses the node name (select the name to edit) and **Description of Data** parameter to populate the tool description for AI agents using the following format: > Useful for when you need to answer questions about [node name]. Whenever you need information about [Description of Data], you should ALWAYS use this. Input should be a fully formed question. Spaces in the node name are converted to underscores in the tool description. Avoid special characters in node names Using special characters in the node name will cause errors when the agent runs. Use only alphanumeric characters, spaces, dashes, and underscores in node names. ## Related resources View [example workflows and related content](https://n8n.io/integrations/vector-store-tool/) on n8n's website. Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Wikipedia node The Wikipedia node is a [tool](../../../../../glossary/#ai-tool) that allows an [agent](../../../../../glossary/#ai-agent) to search and return information from Wikipedia. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression.
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Templates and examples **Respond to WhatsApp Messages with AI Like a Pro!** by Jimleuk [View template details](https://n8n.io/workflows/2466-respond-to-whatsapp-messages-with-ai-like-a-pro/) **AI chatbot that can search the web** by n8n Team [View template details](https://n8n.io/workflows/1959-ai-chatbot-that-can-search-the-web/) **Write a WordPress post with AI (starting from a few keywords)** by Giulio [View template details](https://n8n.io/workflows/2187-write-a-wordpress-post-with-ai-starting-from-a-few-keywords/) [Browse Wikipedia integration templates](https://n8n.io/integrations/wikipedia/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Wolfram|Alpha tool node Use the Wolfram|Alpha tool node to connect your [agents](../../../../../glossary/#ai-agent) and [chains](../../../../../glossary/#ai-chain) to Wolfram|Alpha's computational intelligence engine. Credentials You can find authentication information for this node [here](../../../credentials/wolframalpha/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. 
Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Templates and examples [Browse Wolfram|Alpha integration templates](https://n8n.io/integrations/wolframoralpha/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [Wolfram|Alpha's documentation](https://products.wolframalpha.com/api) for more information about the service. You can also view [LangChain's documentation on their WolframAlpha Tool](https://js.langchain.com/docs/integrations/tools/wolframalpha/). View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Call n8n Workflow Tool node The Call n8n Workflow Tool node is a [tool](../../../../../glossary/#ai-tool) that allows an [agent](../../../../../glossary/#ai-agent) to run another n8n workflow and fetch its output data. On this page, you'll find the node parameters for the Call n8n Workflow Tool node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Description Enter a description. This tells the agent when to use this tool. For example: > Call this tool to get a random color. The input should be a string with comma separated names of colors to exclude. ### Source Tell n8n which workflow to call. You can choose either: - **Database** to select the workflow from a list or enter a workflow ID. - **Define Below** and copy in a complete [workflow JSON](../../../../../workflows/export-import/). ### Workflow Inputs When using **Database** as the workflow source, once you choose a sub-workflow (and define the **Workflow Input Schema** in the sub-workflow), you can define the **Workflow Inputs**. Select the **Refresh** button to pull in the input fields from the sub-workflow. You can define the workflow input values using any combination of the following options: - providing fixed values - using expressions to reference data from the current workflow - [letting the AI model specify the parameter](../../../../../advanced-ai/examples/using-the-fromai-function/) by selecting the AI button on the right side of the field - using the [`$fromAI()` function](../../../../../advanced-ai/examples/using-the-fromai-function/#use-the-fromai-function) in expressions to control the way the model fills in data and to mix AI-generated input with other custom input To reference data from the current workflow, drag fields from the input panel to the field with the Expressions mode selected. To get started with the `$fromAI()` function, select the "Let the model define this parameter" button on the right side of the field and then use the **X** on the box to revert to user-defined values. The field will change to an expression field pre-populated with the `$fromAI()` expression.
From here, you can customize the expression to add other static or dynamic content, or tweak the `$fromAI()` function parameters. ## Templates and examples **AI agent that can scrape webpages** by Eduard [View template details](https://n8n.io/workflows/2006-ai-agent-that-can-scrape-webpages/) **Build Your First AI Data Analyst Chatbot** by Solomon [View template details](https://n8n.io/workflows/3050-build-your-first-ai-data-analyst-chatbot/) **Create a Branded AI-Powered Website Chatbot** by Wayne Simpson [View template details](https://n8n.io/workflows/2786-create-a-branded-ai-powered-website-chatbot/) [Browse Call n8n Workflow Tool integration templates](https://n8n.io/integrations/workflow-tool/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's documentation on tools](https://langchain-ai.github.io/langgraphjs/how-tos/tool-calling/) for more information about tools in LangChain. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. # Ollama Chat Model node The Ollama Chat Model node allows you to use local Llama 2 models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the Ollama Chat Model node, and links to more resources. Credentials You can find authentication information for this node [here](../../../credentials/ollama/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item.
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model that generates the completion. Choose from: - **Llama2** - **Llama2 13B** - **Llama2 70B** - **Llama2 Uncensored** Refer to the Ollama [Models Library documentation](https://ollama.com/library) for more information about available models. ## Node options - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Top K**: Enter the number of token choices the model uses to generate the next token. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples **Chat with local LLMs using n8n and Ollama** by Mihai Farcas [View template details](https://n8n.io/workflows/2384-chat-with-local-llms-using-n8n-and-ollama/) **🔐🦙🤖 Private & Local Ollama Self-Hosted AI Assistant** by Joseph LePage [View template details](https://n8n.io/workflows/2729-private-and-local-ollama-self-hosted-ai-assistant/) **Auto Categorise Outlook Emails with AI** by Wayne Simpson [View template details](https://n8n.io/workflows/2454-auto-categorise-outlook-emails-with-ai/) [Browse Ollama Chat Model integration templates](https://n8n.io/integrations/ollama-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Ollama Chat Model documentation](https://js.langchain.com/docs/integrations/chat/ollama/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n?
Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # Ollama Chat Model node common issues Here are some common errors and issues with the [Ollama Chat Model node](../) and steps to resolve or troubleshoot them. ## Processing parameters The Ollama Chat Model node is a [sub-node](../../../../../../glossary/#sub-node-n8n). Sub-nodes behave differently than other nodes when processing multiple items using expressions. Most nodes, including [root nodes](../../../../../../glossary/#root-node-n8n), take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression `{{ $json.name }}` always resolves to the first name. ## Can't connect to a remote Ollama instance The Ollama Chat Model node supports Bearer token authentication for connecting to remote Ollama instances behind authenticated proxies (such as Open WebUI). For remote authenticated connections, configure both the remote URL and API key in your Ollama credentials. Follow the [Ollama credentials instructions](../../../../credentials/ollama/) for more information. ## Can't connect to a local Ollama instance when using Docker The Ollama Chat Model node connects to a locally hosted Ollama instance using the base URL defined by [Ollama credentials](../../../../credentials/ollama/). When you run either n8n or Ollama in Docker, you need to configure the network so that n8n can connect to Ollama. Ollama typically listens for connections on `localhost`, the local network address. 
In Docker, by default, each container has its own `localhost` which is only accessible from within the container. If either n8n or Ollama are running in containers, they won't be able to connect over `localhost`. The solution depends on how you're hosting the two components. ### If only Ollama is in Docker If only Ollama is running in Docker, configure Ollama to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). When running the container, [publish the ports](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/) with the `-p` flag. By default, Ollama runs on port 11434, so your Docker command should look like this: ``` docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama ``` When configuring [Ollama credentials](../../../../credentials/ollama/), the `localhost` address should work without a problem (set the **base URL** to `http://localhost:11434`). ### If only n8n is in Docker If only n8n is running in Docker, configure Ollama to listen on all interfaces by binding to `0.0.0.0` on the host. If you are running n8n in Docker on **Linux**, use the `--add-host` flag to map `host.docker.internal` to `host-gateway` when you start the container. For example: ``` docker run -it --rm --add-host host.docker.internal:host-gateway --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n ``` If you are using Docker Desktop, this is automatically configured for you. When configuring [Ollama credentials](../../../../credentials/ollama/), use `host.docker.internal` as the host address instead of `localhost`. For example, to bind to the default port 11434, you could set the base URL to `http://host.docker.internal:11434`. ### If Ollama and n8n are running in separate Docker containers If both n8n and Ollama are running in Docker in separate containers, you can use Docker networking to connect them. 
Configure Ollama to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). When configuring [Ollama credentials](../../../../credentials/ollama/), use the Ollama container's name as the host address instead of `localhost`. For example, if you call the Ollama container `my-ollama` and it listens on the default port 11434, you would set the base URL to `http://my-ollama:11434`. ### If Ollama and n8n are running in the same Docker container If Ollama and n8n are running in the same Docker container, the `localhost` address doesn't need any special configuration. You can configure Ollama to listen on localhost and configure the base URL in the [Ollama credentials in n8n](../../../../credentials/ollama/) to use localhost: `http://localhost:11434`. ## Error: connect ECONNREFUSED ::1:11434 This error occurs when your computer has IPv6 enabled, but Ollama is listening to an IPv4 address. To fix this, change the base URL in your [Ollama credentials](../../../../credentials/ollama/) to connect to `127.0.0.1`, the IPv4-specific local address, instead of the `localhost` alias that can resolve to either IPv4 or IPv6: `http://127.0.0.1:11434`. ## Ollama and HTTP/HTTPS proxies Ollama doesn't support custom HTTP agents in its configuration. This makes it difficult to use Ollama behind custom HTTP/HTTPS proxies. Depending on your proxy configuration, it might not work at all, despite setting the `HTTP_PROXY` or `HTTPS_PROXY` environment variables. Refer to [Ollama's FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy) for more information. # OpenAI Chat Model node Use the OpenAI Chat Model node to use OpenAI's chat models with conversational [agents](../../../../../glossary/#ai-agent). On this page, you'll find the node parameters for the OpenAI Chat Model node and links to more resources. 
Credentials You can find authentication information for this node [here](../../../credentials/openai/). Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters ### Model Select the model to use to generate the completion. n8n dynamically loads models from OpenAI, and you'll only see the models available to your account. ### Use Responses API OpenAI provides two endpoints for generating output from a model: - **Chat Completions**: The Chat Completions API endpoint generates a model response from a list of messages that comprise a conversation. The API requires the user to handle conversation state manually, for example by adding a [Simple Memory](../n8n-nodes-langchain.memorybufferwindow/) sub-node. For new projects, OpenAI recommends using the Responses API. - **Responses**: The Responses API is an agentic loop, allowing the model to call multiple built-in tools within the span of one API request. It also supports persistent conversations by passing a `conversation_id`. Toggle to **Use Responses API** if you want the model to generate output using the Responses API. Otherwise, the OpenAI Chat Model node will default to using the Chat Completions API. Refer to the OpenAI documentation for a [comparison of the Chat Completions and Responses APIs](https://platform.openai.com/docs/guides/migrate-to-responses).
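The state-handling difference between the two endpoints can be sketched with plain objects. No API calls are made here; the request shapes are simplified illustrations (the model name and conversation ID are placeholders), not the exact SDK types.

```javascript
// Chat Completions: the caller owns the conversation state and must
// resend the full message history with every request.
const history = [
  { role: 'user', content: 'Hi' },
  { role: 'assistant', content: 'Hello! How can I help?' },
];
history.push({ role: 'user', content: 'Summarize our chat.' });
const chatCompletionsRequest = { model: 'gpt-4o', messages: history };

// Responses: the server can keep the state; the caller passes a
// conversation ID and only the new input.
const responsesRequest = {
  model: 'gpt-4o',
  conversation_id: 'conv_123', // placeholder ID
  input: 'Summarize our chat.',
};
```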
### Built-in Tools The OpenAI Responses API provides a range of [built-in tools](https://platform.openai.com/docs/guides/tools) to enrich the model's response. Turn on **Use Responses API** if you want the model to have access to the following built-in tools: - **Web Search**: Allows models to search the web for the latest information before generating a response. - **File Search**: Allows models to search a knowledge base of previously uploaded files for relevant information before generating a response. Refer to the [OpenAI documentation](https://platform.openai.com/docs/guides/tools-file-search) for more information. - **Code Interpreter**: Allows models to write and run Python code in a sandboxed environment. Use with AI Agent node Built-in tools are only supported when using the OpenAI Chat Model node in combination with the AI Agent node. Built-in tools aren't available when using the OpenAI Chat Model node in combination with a Basic LLM Chain node, for example. ## Node options Use these options to further refine the node's behavior. The following options are available whether or not you use the Responses API to generate model output. ### Frequency Penalty Use this option to control the chances of the model repeating itself. Higher values reduce the chance of the model repeating itself. ### Maximum Number of Tokens Enter the maximum number of tokens used, which sets the completion length. ### Presence Penalty Use this option to control the chances of the model talking about new topics. Higher values increase the chance of the model talking about new topics. ### Sampling Temperature Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. ### Timeout Enter the maximum request time in milliseconds. ### Max Retries Enter the maximum number of times to retry a request. ### Top P Use this option to set the probability the completion should use.
Use a lower value to ignore less probable options. ## Additional node options (Responses API only) The following additional options are available when you turn on **Use Responses API**. ### Conversation ID Enter the ID of the conversation that this response belongs to. Input items and output items from this response are automatically added to this conversation after this response completes. ### Prompt Cache Key Use this key for caching similar requests to optimize cache hit rates. ### Safety Identifier Apply an identifier to track users who may violate usage policies. ### Service Tier Select the service tier that fits your needs: Auto, Flex, Default, or Priority. ### Metadata A set of key-value pairs for storing structured information. You can attach up to 16 pairs to an object, which is useful for adding custom data that can be used for searching by the API or in the dashboard. ### Top Logprobs Define an integer between 0 and 20 specifying the number of most likely tokens to return at each token position, each with an associated log probability. ### Output Format Choose a response format: Text, JSON Schema, or JSON Object. JSON Schema is recommended if you want to receive data in JSON format. ### Prompt Reference a reusable prompt by its unique ID and version, and fill in its substitutable variables. Prompts are configured through the OpenAI dashboard.
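As a rough guide, these Responses-only options correspond to fields of the underlying Responses API request along these lines (all values are illustrative placeholders; the node assembles the request for you):

```
{
  "conversation": "conv_example",
  "prompt_cache_key": "customer-summaries",
  "safety_identifier": "user-hash-example",
  "service_tier": "auto",
  "metadata": { "team": "support", "env": "test" },
  "top_logprobs": 5,
  "text": { "format": { "type": "json_object" } },
  "prompt": { "id": "pmpt_example", "version": "2", "variables": { "city": "Berlin" } }
}
```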
## Templates and examples **AI agent chat** by n8n Team [View template details](https://n8n.io/workflows/1954-ai-agent-chat/) **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Angie, personal AI assistant with Telegram voice and text** by Derek Cheung [View template details](https://n8n.io/workflows/2462-angie-personal-ai-assistant-with-telegram-voice-and-text/) [Browse OpenAI Chat Model integration templates](https://n8n.io/integrations/openai-chat-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's OpenAI documentation](https://js.langchain.com/docs/integrations/chat/openai/) for more information about the service. Refer to the [OpenAI documentation](https://platform.openai.com/docs/api-reference/responses/create) for more information about the parameters. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). # OpenAI Chat Model node common issues Here are some common errors and issues with the [OpenAI Chat Model node](../) and steps to resolve or troubleshoot them. ## Processing parameters The OpenAI Chat Model node is a [sub-node](../../../../../../glossary/#sub-node-n8n). Sub-nodes behave differently than other nodes when processing multiple items using expressions. Most nodes, including [root nodes](../../../../../../glossary/#root-node-n8n), take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item.
For example, given an input of five name values, the expression `{{ $json.name }}` always resolves to the first name. ## The service is receiving too many requests from you This error displays when you've exceeded [OpenAI's rate limits](https://platform.openai.com/docs/guides/rate-limits). There are two ways to work around this issue: 1. Split your data up into smaller chunks using the [Loop Over Items](../../../../core-nodes/n8n-nodes-base.splitinbatches/) node and add a [Wait](../../../../core-nodes/n8n-nodes-base.wait/) node at the end, set to a wait time long enough to keep you under the rate limit. Copy the code below and paste it into a workflow to use as a template.
```
{
  "nodes": [
    {
      "parameters": {},
      "id": "35d05920-ad75-402a-be3c-3277bff7cc67",
      "name": "When clicking ‘Execute workflow’",
      "type": "n8n-nodes-base.manualTrigger",
      "typeVersion": 1,
      "position": [880, 400]
    },
    {
      "parameters": {
        "batchSize": 500,
        "options": {}
      },
      "id": "ae9baa80-4cf9-4848-8953-22e1b7187bf6",
      "name": "Loop Over Items",
      "type": "n8n-nodes-base.splitInBatches",
      "typeVersion": 3,
      "position": [1120, 420]
    },
    {
      "parameters": {
        "resource": "chat",
        "options": {},
        "requestOptions": {}
      },
      "id": "a519f271-82dc-4f60-8cfd-533dec580acc",
      "name": "OpenAI",
      "type": "n8n-nodes-base.openAi",
      "typeVersion": 1,
      "position": [1380, 440]
    },
    {
      "parameters": {
        "unit": "minutes"
      },
      "id": "562d9da3-2142-49bc-9b8f-71b0af42b449",
      "name": "Wait",
      "type": "n8n-nodes-base.wait",
      "typeVersion": 1,
      "position": [1620, 440],
      "webhookId": "714ab157-96d1-448f-b7f5-677882b92b13"
    }
  ],
  "connections": {
    "When clicking ‘Execute workflow’": {
      "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]]
    },
    "Loop Over Items": {
      "main": [null, [{ "node": "OpenAI", "type": "main", "index": 0 }]]
    },
    "OpenAI": {
      "main": [[{ "node": "Wait", "type": "main", "index": 0 }]]
    },
    "Wait": {
      "main": [[{ "node": "Loop Over Items", "type": "main", "index": 0 }]]
    }
  },
  "pinData": {}
}
```
1.
Use the [HTTP Request](../../../../core-nodes/n8n-nodes-base.httprequest/) node with the built-in batch-limit option against the [OpenAI API](https://platform.openai.com/docs/quickstart) instead of using the OpenAI node. ## Insufficient quota Quota issues There are a number of OpenAI issues surrounding quotas, including failures when quotas have been recently topped up. To avoid these issues, ensure that there is credit in the account and issue a new API key from the [API keys screen](https://platform.openai.com/settings/organization/api-keys). This error displays when your OpenAI account doesn't have enough credits or capacity to fulfill your request. This may mean that your OpenAI trial period has ended, that your account needs more credit, or that you've gone over a usage limit. To troubleshoot this error, on your [OpenAI settings](https://platform.openai.com/settings/organization/billing/overview) page: - Select the correct organization for your API key in the first selector in the upper-left corner. - Select the correct project for your API key in the second selector in the upper-left corner. - Check the organization-level [billing overview](https://platform.openai.com/settings/organization/billing/overview) page to ensure that the organization has enough credit. Double-check that you select the correct organization for this page. - Check the organization-level [usage limits](https://platform.openai.com/settings/organization/limits) page. Double-check that you select the correct organization for this page and scroll to the **Usage limits** section to verify that you haven't exceeded your organization's usage limits. - Check your OpenAI project's usage limits. Double-check that you select the correct project in the second selector in the upper-left corner. Select **Project** > **Limits** to view or change the project limits. - Check that the [OpenAI API](https://status.openai.com/) is operating as expected. 
Balance waiting period After topping up your balance, there may be a delay before your OpenAI account reflects the new balance. In n8n: - check that the [OpenAI credentials](../../../../credentials/openai/) use a valid [OpenAI API key](https://platform.openai.com/api-keys) for the account you've added money to - ensure that you connect the [OpenAI node](../../../../app-nodes/n8n-nodes-langchain.openai/) to the correct [OpenAI credentials](../../../../credentials/openai/) If you find yourself frequently running out of account credits, consider turning on auto recharge in your [OpenAI billing settings](https://platform.openai.com/settings/organization/billing/overview) to automatically reload your account with credits when your balance reaches $0. ## Bad request - please check your parameters This error displays when the request results in an error but n8n wasn't able to interpret the error message from OpenAI. To begin troubleshooting, try running the same operation using the [HTTP Request](../../../../core-nodes/n8n-nodes-base.httprequest/) node, which should provide a more detailed error message. # Ollama Model node The Ollama Model node allows you to use local Llama 2 models. On this page, you'll find the node parameters for the Ollama Model node, and links to more resources. This node lacks tools support, so it won't work with the [AI Agent](../../root-nodes/n8n-nodes-langchain.agent/) node. Instead, connect it with the [Basic LLM Chain](../../root-nodes/n8n-nodes-langchain.chainllm/) node. Credentials You can find authentication information for this node [here](../../../credentials/ollama/). Parameter resolution in sub-nodes Sub-nodes behave differently from other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn.
For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Model**: Select the model that generates the completion. Choose from: - **Llama2** - **Llama2 13B** - **Llama2 70B** - **Llama2 Uncensored** Refer to the Ollama [Models Library documentation](https://ollama.com/library) for more information about available models. ## Node options - **Sampling Temperature**: Use this option to control the randomness of the sampling process. A higher temperature creates more diverse sampling, but increases the risk of hallucinations. - **Top K**: Enter the number of token choices the model uses to generate the next token. - **Top P**: Use this option to set the probability the completion should use. Use a lower value to ignore less probable options. ## Templates and examples **Chat with local LLMs using n8n and Ollama** by Mihai Farcas [View template details](https://n8n.io/workflows/2384-chat-with-local-llms-using-n8n-and-ollama/) **🔐🦙🤖 Private & Local Ollama Self-Hosted AI Assistant** by Joseph LePage [View template details](https://n8n.io/workflows/2729-private-and-local-ollama-self-hosted-ai-assistant/) **Auto Categorise Outlook Emails with AI** by Wayne Simpson [View template details](https://n8n.io/workflows/2454-auto-categorise-outlook-emails-with-ai/) [Browse Ollama Model integration templates](https://n8n.io/integrations/ollama-model/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Ollama documentation](https://js.langchain.com/docs/integrations/llms/ollama/) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation.
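These options map onto Ollama's generation parameters. If you call Ollama's REST API directly, the equivalent request body looks roughly like this (model name and values are illustrative):

```
POST http://localhost:11434/api/generate
{
  "model": "llama2",
  "prompt": "Why is the sky blue?",
  "options": {
    "temperature": 0.7,
    "top_k": 40,
    "top_p": 0.9
  }
}
```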
## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/). ## Self-hosted AI Starter Kit New to working with AI and using self-hosted n8n? Try n8n's [self-hosted AI Starter Kit](../../../../../hosting/starter-kits/ai-starter-kit/) to get started with a proof-of-concept or demo playground using Ollama, Qdrant, and PostgreSQL. # Ollama Model node common issues Here are some common errors and issues with the [Ollama Model node](../) and steps to resolve or troubleshoot them. ## Processing parameters The Ollama Model node is a [sub-node](../../../../../../glossary/#sub-node-n8n). Sub-nodes behave differently than other nodes when processing multiple items using expressions. Most nodes, including [root nodes](../../../../../../glossary/#root-node-n8n), take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression `{{ $json.name }}` always resolves to the first name. ## Can't connect to a remote Ollama instance The Ollama Model node supports Bearer token authentication for connecting to remote Ollama instances behind authenticated proxies (such as Open WebUI). For remote authenticated connections, configure both the remote URL and API key in your Ollama credentials. Follow the [Ollama credentials instructions](../../../../credentials/ollama/) for more information. ## Can't connect to a local Ollama instance when using Docker The Ollama Model node connects to a locally hosted Ollama instance using the base URL defined by [Ollama credentials](../../../../credentials/ollama/). 
When you run either n8n or Ollama in Docker, you need to configure the network so that n8n can connect to Ollama. Ollama typically listens for connections on `localhost`, the local network address. In Docker, by default, each container has its own `localhost` which is only accessible from within the container. If either n8n or Ollama are running in containers, they won't be able to connect over `localhost`. The solution depends on how you're hosting the two components. ### If only Ollama is in Docker If only Ollama is running in Docker, configure Ollama to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). When running the container, [publish the ports](https://docs.docker.com/get-started/docker-concepts/running-containers/publishing-ports/) with the `-p` flag. By default, Ollama runs on port 11434, so your Docker command should look like this: ``` docker run -d -v ollama:/root/.ollama -p 11434:11434 --name ollama ollama/ollama ``` When configuring [Ollama credentials](../../../../credentials/ollama/), the `localhost` address should work without a problem (set the **base URL** to `http://localhost:11434`). ### If only n8n is in Docker If only n8n is running in Docker, configure Ollama to listen on all interfaces by binding to `0.0.0.0` on the host. If you are running n8n in Docker on **Linux**, use the `--add-host` flag to map `host.docker.internal` to `host-gateway` when you start the container. For example: ``` docker run -it --rm --add-host host.docker.internal:host-gateway --name n8n -p 5678:5678 -v n8n_data:/home/node/.n8n docker.n8n.io/n8nio/n8n ``` If you are using Docker Desktop, this is automatically configured for you. When configuring [Ollama credentials](../../../../credentials/ollama/), use `host.docker.internal` as the host address instead of `localhost`. For example, to bind to the default port 11434, you could set the base URL to `http://host.docker.internal:11434`. 
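If you plan to run both n8n and Ollama in Docker, a Docker Compose file along these lines (service and volume names are illustrative) places both containers on one network, so n8n can reach Ollama by service name at `http://ollama:11434`:

```
services:
  n8n:
    image: docker.n8n.io/n8nio/n8n
    ports:
      - "5678:5678"
    volumes:
      - n8n_data:/home/node/.n8n
  ollama:
    image: ollama/ollama
    ports:
      - "11434:11434"
    volumes:
      - ollama:/root/.ollama

volumes:
  n8n_data:
  ollama:
```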
### If Ollama and n8n are running in separate Docker containers If both n8n and Ollama are running in Docker in separate containers, you can use Docker networking to connect them. Configure Ollama to listen on all interfaces by binding to `0.0.0.0` inside of the container (the official images are already configured this way). When configuring [Ollama credentials](../../../../credentials/ollama/), use the Ollama container's name as the host address instead of `localhost`. For example, if you call the Ollama container `my-ollama` and it listens on the default port 11434, you would set the base URL to `http://my-ollama:11434`. ### If Ollama and n8n are running in the same Docker container If Ollama and n8n are running in the same Docker container, the `localhost` address doesn't need any special configuration. You can configure Ollama to listen on localhost and configure the base URL in the [Ollama credentials in n8n](../../../../credentials/ollama/) to use localhost: `http://localhost:11434`. ## Error: connect ECONNREFUSED ::1:11434 This error occurs when your computer has IPv6 enabled, but Ollama is listening to an IPv4 address. To fix this, change the base URL in your [Ollama credentials](../../../../credentials/ollama/) to connect to `127.0.0.1`, the IPv4-specific local address, instead of the `localhost` alias that can resolve to either IPv4 or IPv6: `http://127.0.0.1:11434`. ## Ollama and HTTP/HTTPS proxies Ollama doesn't support custom HTTP agents in its configuration. This makes it difficult to use Ollama behind custom HTTP/HTTPS proxies. Depending on your proxy configuration, it might not work at all, despite setting the `HTTP_PROXY` or `HTTPS_PROXY` environment variables. Refer to [Ollama's FAQ](https://github.com/ollama/ollama/blob/main/docs/faq.md#how-do-i-use-ollama-behind-a-proxy) for more information. # Simple Memory node Use the Simple Memory node to [persist](../../../../../glossary/#ai-memory) chat history in your workflow. 
On this page, you'll find a list of operations the Simple Memory node supports, and links to more resources. Don't use this node if running n8n in queue mode If your n8n instance uses [queue mode](../../../../../hosting/scaling/queue-mode/), this node doesn't work in an active production workflow. This is because n8n can't guarantee that every call to Simple Memory will go to the same worker. Parameter resolution in sub-nodes Sub-nodes behave differently from other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters Configure these parameters to set up the node: - **Session Key**: Enter the key to use to store the memory in the workflow data. - **Context Window Length**: Enter the number of previous interactions to consider for context. ## Templates and examples [Browse Simple Memory integration templates](https://n8n.io/integrations/window-buffer-memory/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's Buffer Window Memory documentation](https://v03.api.js.langchain.com/classes/langchain.memory.BufferWindowMemory.html) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/).
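The sub-node parameter resolution described above can be sketched as follows (an illustrative simulation only, not n8n's actual resolution code):

```javascript
// Illustrative sketch: how `{{ $json.name }}` resolves for five input items.
const items = [
  { name: "Ada" },
  { name: "Ben" },
  { name: "Cleo" },
  { name: "Dev" },
  { name: "Eve" },
];

// Most nodes: the expression resolves once per input item.
const perItem = items.map((item) => item.name);
// ["Ada", "Ben", "Cleo", "Dev", "Eve"]

// Sub-nodes: the expression always resolves against the first item.
const subNode = items.map(() => items[0].name);
// ["Ada", "Ada", "Ada", "Ada", "Ada"]
```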
# Simple Memory node common issues Here are some common errors and issues with the [Simple Memory node](../) and steps to resolve or troubleshoot them. ## Single memory instance If you add more than one Simple Memory node to your workflow, all nodes access the same memory instance by default. Be careful when doing destructive actions that override existing memory contents, such as the override all messages operation in the [Chat Memory Manager](../../n8n-nodes-langchain.memorymanager/) node. If you want more than one memory instance in your workflow, set different session IDs in different memory nodes. ## Managing the Session ID In most cases, the `sessionId` is automatically retrieved from the **On Chat Message** trigger. But you may run into an error with the phrase `No sessionId`. If you have this error, first check the output of your Chat trigger to ensure it includes a `sessionId`. If you're not using the **On Chat Message** trigger, you'll need to manage sessions manually. For testing purposes, you can use a static key like `my_test_session`. If you use this approach, be sure to set up proper session management before publishing the workflow to avoid potential issues in a live environment. # Structured Output Parser node Use the Structured Output Parser node to return fields based on a JSON Schema. On this page, you'll find the node parameters for the Structured Output Parser node, and links to more resources. Parameter resolution in sub-nodes Sub-nodes behave differently to other nodes when processing multiple items using an expression. Most nodes, including root nodes, take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five `name` values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. 
For example, given an input of five `name` values, the expression `{{ $json.name }}` always resolves to the first name. ## Node parameters - **Schema Type**: Define the output structure and validation. You have two options to provide the schema: 1. **Generate from JSON Example**: Input an example JSON object to automatically generate the schema. The node uses the object property types and names. It ignores the actual values. n8n treats every field as mandatory when generating schemas from JSON examples. 1. **Define using JSON Schema**: Manually input the JSON schema. Read the JSON Schema [guides and examples](https://json-schema.org/learn/miscellaneous-examples) for help creating a valid JSON schema. Note that n8n doesn't support references (using `$ref`) in JSON schemas. ## Templates and examples **Generate AI Viral Videos with Seedance and Upload to TikTok, YouTube & Instagram** by Dr. Firas [View template details](https://n8n.io/workflows/5338-generate-ai-viral-videos-with-seedance-and-upload-to-tiktok-youtube-and-instagram/) **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **AI-Powered Social Media Content Generator & Publisher** by Amjid Ali [View template details](https://n8n.io/workflows/2950-ai-powered-social-media-content-generator-and-publisher/) [Browse Structured Output Parser integration templates](https://n8n.io/integrations/structured-output-parser/), or [search all templates](https://n8n.io/workflows/) ## Related resources Refer to [LangChain's output parser documentation](https://js.langchain.com/docs/concepts/output_parsers) for more information about the service. View n8n's [Advanced AI](../../../../../advanced-ai/) documentation. ## Common issues For common questions or issues and suggested solutions, refer to [Common issues](common-issues/).
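To illustrate the two schema options above with a hypothetical structure, here is the same shape expressed as a JSON example (option 1) and as an explicit JSON Schema (option 2); all field names are illustrative:

```
// Generate from JSON Example: property names and types are kept, values are ignored.
{
  "name": "Jane",
  "age": 30,
  "interests": ["hiking"]
}

// Equivalent JSON Schema (n8n treats every field as required when generating from an example):
{
  "type": "object",
  "properties": {
    "name": { "type": "string" },
    "age": { "type": "number" },
    "interests": { "type": "array", "items": { "type": "string" } }
  },
  "required": ["name", "age", "interests"]
}
```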
# Structured Output Parser node common issues Here are some common errors and issues with the [Structured Output Parser node](../) and steps to resolve or troubleshoot them. ## Processing parameters The Structured Output Parser node is a [sub-node](../../../../../../glossary/#sub-node-n8n). Sub-nodes behave differently than other nodes when processing multiple items using expressions. Most nodes, including [root nodes](../../../../../../glossary/#root-node-n8n), take any number of items as input, process these items, and output the results. You can use expressions to refer to input items, and the node resolves the expression for each item in turn. For example, given an input of five name values, the expression `{{ $json.name }}` resolves to each name in turn. In sub-nodes, the expression always resolves to the first item. For example, given an input of five name values, the expression `{{ $json.name }}` always resolves to the first name. ## Adding the structured output parser node to AI nodes You can attach output parser nodes to select [AI root nodes](../../../root-nodes/). To add the Structured Output Parser to a node, enable the **Require Specific Output Format** option in the AI root node you wish to format. Once the option is enabled, a new **output parser** attachment point is displayed. Click the **output parser** attachment point to connect the Structured Output Parser node. ## Using the structured output parser to format intermediary steps The Structured Output Parser node structures the final output from AI agents. It's not intended to structure intermediary output to pass to other AI tools or stages. To request a specific format for intermediary output, include the response structure in the **System Message** for the **AI Agent**. The message can include either a schema or example response for the agent to use as a template for its results.
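For example, a System Message along these lines (wording and field names are illustrative) asks the agent to use a fixed shape for its intermediary results:

```
You are a research assistant. When you summarize a source for the next step,
always reply with JSON in this shape:

{
  "title": "string",
  "summary": "string",
  "relevance": 1
}
```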
## Structuring output from agents Structured output parsing is often not reliable when working with [agents](../../../root-nodes/n8n-nodes-langchain.agent/). If your workflow uses agents, n8n recommends using a separate [LLM-chain](../../../root-nodes/n8n-nodes-langchain.chainllm/) to receive the data from the agent and parse it. This leads to better, more consistent results than parsing directly in the agent workflow. # Core nodes library This section provides information about n8n's core [nodes](../../../glossary/#node-n8n). # Activation Trigger node The Activation Trigger node runs when n8n or a workflow fires an event. Warning n8n has deprecated the Activation Trigger node and replaced it with two new nodes: the [n8n Trigger node](../n8n-nodes-base.n8ntrigger/) and the [Workflow Trigger node](../n8n-nodes-base.workflowtrigger/). For more details, check out the entry in the [breaking changes](https://github.com/n8n-io/n8n/blob/master/packages/cli/BREAKING-CHANGES.md#01170) page. Keep in mind If you want to use the Activation Trigger node for a workflow, add the node to the workflow. You don't have to create a separate workflow. The Activation Trigger node fires for the workflow it's added to. You can use the Activation Trigger node to trigger a workflow that sends notifications about its own state. ## Node parameters - Events - **Activation**: Run when the workflow gets published - **Start**: Run when n8n starts or restarts - **Update**: Run when the workflow gets saved while it's active ## Templates and examples [Browse Activation Trigger integration templates](https://n8n.io/integrations/activation-trigger/), or [search all templates](https://n8n.io/workflows/) # Aggregate Use the Aggregate node to take separate items, or portions of them, and group them together into individual items.
## Node parameters To begin using the node, select the **Aggregate** you'd like to use: - [**Individual Fields**](#individual-fields): Aggregate individual fields separately. - [**All Item Data**](#all-item-data): Aggregate all item data into a single list. ### Individual Fields - **Input Field Name**: Enter the name of the field in the input data to aggregate together. - **Rename Field**: This toggle controls whether to give the field a different name in the aggregated output data. Turn this on to add a different field name. If you're aggregating multiple fields, you must provide new output field names. You can't leave multiple fields undefined. - **Output Field Name**: This field is displayed when you turn on **Rename Field**. Enter the field name for the aggregated output data. Refer to [Node options](#node-options) for more configuration options. ### All Item Data - **Put Output in Field**: Enter the name of the field to output the data in. - **Include**: Select which fields to include in the output. Choose from: - **All fields**: The output includes data from all fields with no further parameters. - **Specified Fields**: If you select this option, enter a comma-separated list of fields the output should include data from in the **Fields To Include** parameter. The output will include only the fields in this list. - **All Fields Except**: If you select this option, enter a comma-separated list of fields the output should exclude data from in the **Fields To Exclude** parameter. The output will include all fields not in this list. Refer to [Node options](#node-options) for more configuration options. ## Node options You can further configure this node using these **Options**: - **Disable Dot Notation**: The node displays this toggle when you select the **Individual Fields** Aggregate. It controls whether to disallow referencing child fields using `parent.child` in the field name (turned on), or allow it (turned off, default). 
- **Merge Lists**: The node displays this toggle when you select the **Individual Fields** Aggregate. Turn it on if the field to aggregate is a list and you want to output a single flat list rather than a list of lists. - **Include Binaries**: The node displays this toggle for both Aggregate types. Turn it on if you want to include binary data from the input in the new output. - **Keep Missing And Null Values**: The node displays this toggle when you select the **Individual Fields** Aggregate. Turn it on to add a null (empty) entry in the output list when there is a null or missing value in the input. If turned off, the output ignores null or empty values. ## Templates and examples **✨🤖Automate Multi-Platform Social Media Content Creation with AI** by Joseph LePage [View template details](https://n8n.io/workflows/3066-automate-multi-platform-social-media-content-creation-with-ai/) **Scrape business emails from Google Maps without the use of any third party APIs** by Akram Kadri [View template details](https://n8n.io/workflows/2567-scrape-business-emails-from-google-maps-without-the-use-of-any-third-party-apis/) **Build Your First AI Data Analyst Chatbot** by Solomon [View template details](https://n8n.io/workflows/3050-build-your-first-ai-data-analyst-chatbot/) [Browse Aggregate integration templates](https://n8n.io/integrations/aggregate/), or [search all templates](https://n8n.io/workflows/) ## Related resources Learn more about [data structure and data flow](../../../../data/) in n8n workflows. # AI Transform Use the AI Transform node to generate code snippets based on your prompt. The AI is context-aware, understanding the workflow’s nodes and their data types. Feature availability Available only on [Cloud plans](../../../../manage-cloud/overview/). ## Node parameters ### Instructions Enter your prompt for the AI and click the **Generate code** button to automatically populate the **Transformation Code**. 
For example, you can specify how you want to process or categorize your data. Refer to [Writing good prompts](../../../../code/ai-code/#writing-good-prompts) for more information. The prompt should be in plain English and under 500 characters. ### Transformation Code The code snippet generated by the node is read-only. To edit this code, adjust your prompt in **Instructions** or copy and paste it into a [Code](../n8n-nodes-base.code/) node. ## Templates and examples **Customer Support WhatsApp Bot with Google Docs Knowledge Base and Gemini AI** by Tharwat Mohamed [View template details](https://n8n.io/workflows/4966-customer-support-whatsapp-bot-with-google-docs-knowledge-base-and-gemini-ai/) **Explore n8n Nodes in a Visual Reference Library** by I versus AI [View template details](https://n8n.io/workflows/3891-explore-n8n-nodes-in-a-visual-reference-library/) **Parse Gmail Inbox and Transform into Todoist tasks with Solve Propositions** by Łukasz [View template details](https://n8n.io/workflows/3507-parse-gmail-inbox-and-transform-into-todoist-tasks-with-solve-propositions/) [Browse AI Transform integration templates](https://n8n.io/integrations/ai-transform/), or [search all templates](https://n8n.io/workflows/) # Compare Datasets The Compare Datasets node helps you compare data from two input streams. ## Node parameters 1. Decide which fields to compare. In **Input A Field**, enter the name of the field you want to use from input stream A. In **Input B Field**, enter the name of the field you want to use from input stream B. 1. **Optional**: You can compare by multiple fields. Select **Add Fields to Match** to set up more comparisons. 1. Choose how to handle differences between the datasets. In **When There Are Differences**, select one of the following: - **Use Input A Version** to treat input stream A as the source of truth. - **Use Input B Version** to treat input stream B as the source of truth. 
- **Use a Mix of Versions** to use different inputs for different fields. - Use **Prefer** to select either **Input A Version** or **Input B Version** as the main source of truth. - In **For Everything Except**, enter the input fields that are exceptions and should pull from the other input source. To add multiple input fields, enter a comma-separated list. - **Include Both Versions** to include both input streams in the output, which may make the structure more complex. 1. Decide whether to use **Fuzzy Compare**. When turned on, the comparison tolerates small type differences when comparing fields. For example, the number 3 and the string `3` are treated as the same with **Fuzzy Compare** turned on, but wouldn't be treated the same with it turned off. ## Understand item comparison Item comparison is a two-stage process: 1. n8n checks if the values of the fields you selected to compare match across both inputs. 1. If the fields to compare match, n8n then compares all fields within the items to determine if the items are the same or different. ## Node options Use the node **Options** to refine your comparison or tweak comparison behavior. ### Fields to Skip Comparing Enter field names that you want to ignore in the comparison. For example, if you compare the two datasets below using `person.language` as the **Fields to Match**, n8n returns them as different. If you add `person.name` to **Fields to Skip Comparing**, n8n returns them as matching.

```
// Input 1
[
  { "person": { "name": "Stefan", "language": "de" } },
  { "person": { "name": "Jim", "language": "en" } },
  { "person": { "name": "Hans", "language": "de" } }
]

// Input 2
[
  { "person": { "name": "Sara", "language": "de" } },
  { "person": { "name": "Jane", "language": "en" } },
  { "person": { "name": "Harriet", "language": "de" } }
]
```

### Disable Dot Notation Whether to disallow referencing child fields using `parent.child` in the field name (turned on) or allow it (turned off, default).
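As an illustration of what dot notation does, here is a minimal sketch of field resolution. The helper name and logic are hypothetical (this is not n8n's implementation): with dot notation allowed, a name like `person.language` walks through nested objects; with **Disable Dot Notation** turned on, the node looks up a literal key instead.

```javascript
// Hypothetical sketch of dot-notation field resolution (not n8n's actual code).
function resolveField(obj, fieldName, dotNotationDisabled) {
  if (dotNotationDisabled) return obj[fieldName]; // literal key lookup only
  // Walk nested objects: "person.language" -> obj.person.language
  return fieldName
    .split(".")
    .reduce((cur, key) => (cur == null ? undefined : cur[key]), obj);
}

const item = { person: { name: "Stefan", language: "de" } };
console.log(resolveField(item, "person.language", false)); // "de"
console.log(resolveField(item, "person.language", true));  // undefined
```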
### Multiple Matches Choose how to handle duplicate data. The default is **Include All Matches**. You can choose **Include First Match Only**. For example, given these two datasets:

```
// Input 1
[
  { "fruit": { "type": "apple", "color": "red" } },
  { "fruit": { "type": "apple", "color": "red" } },
  { "fruit": { "type": "banana", "color": "yellow" } }
]

// Input 2
[
  { "fruit": { "type": "apple", "color": "red" } },
  { "fruit": { "type": "apple", "color": "red" } },
  { "fruit": { "type": "banana", "color": "yellow" } }
]
```

n8n returns three items in the **Same Branch** tab. The data is the same in both branches. If you select **Include First Match Only**, n8n returns two items in the **Same Branch** tab. The data is the same in both branches, but n8n only returns the first occurrence of the matching "apple" items. ## Understand the output There are four output options: - **In A only Branch**: Contains data that occurs only in the first input. - **Same Branch**: Contains data that's the same in both inputs. - **Different Branch**: Contains data that's different between inputs. - **In B only Branch**: Contains data that occurs only in the second input. ## Templates and examples **Intelligent Email Organization with AI-Powered Content Classification for Gmail** by Niranjan G [View template details](https://n8n.io/workflows/4557-intelligent-email-organization-with-ai-powered-content-classification-for-gmail/) **Two way sync Pipedrive and MySQL** by n8n Team [View template details](https://n8n.io/workflows/1822-two-way-sync-pipedrive-and-mysql/) **Realtime Notion Todoist 2-way sync with Redis** by Mario [View template details](https://n8n.io/workflows/2772-realtime-notion-todoist-2-way-sync-with-redis/) [Browse Compare Datasets integration templates](https://n8n.io/integrations/compare-datasets/), or [search all templates](https://n8n.io/workflows/) # Compression Use the Compression node to compress and decompress files. Supports Zip and Gzip formats.
## Node parameters This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). The node parameters depend on which **Operation** you select. Choose to: - **Compress**: Create a compressed file from your input data. - **Decompress**: Decompress an existing compressed file. Refer to the sections below for parameters specific to each **Operation**. ### Compress - **Input Binary Field(s)**: Enter the name of the fields in the input data that contain the binary files you want to compress. To compress more than one file, use a comma-separated list. - **Output Format**: Choose whether to format the compressed output as **Zip** or **Gzip**. - **File Name**: Enter the name of the zip file the node creates. - **Put Output File in Field**: Enter the name of the field in the output data to contain the file. ### Decompress - **Input Binary Field(s)**: Enter the name of the fields in the input data that contain the binary files you want to decompress. To decompress more than one file, use a comma-separated list. - **Output Prefix**: Enter a prefix to add to the output file name.
## Templates and examples **Talk to your SQLite database with a LangChain AI Agent 🧠💬** by Yulia [View template details](https://n8n.io/workflows/2292-talk-to-your-sqlite-database-with-a-langchain-ai-agent/) **Transcribing Bank Statements To Markdown Using Gemini Vision AI** by Jimleuk [View template details](https://n8n.io/workflows/2421-transcribing-bank-statements-to-markdown-using-gemini-vision-ai/) **Build a Tax Code Assistant with Qdrant, Mistral.ai and OpenAI** by Jimleuk [View template details](https://n8n.io/workflows/2341-build-a-tax-code-assistant-with-qdrant-mistralai-and-openai/) [Browse Compression integration templates](https://n8n.io/integrations/compression/), or [search all templates](https://n8n.io/workflows/) # Convert to File Use the Convert to File node to take input data and output it as a file. This converts the input JSON data into a binary format. Extract From File To extract data from a file and convert it to JSON, use the [Extract from File](../n8n-nodes-base.extractfromfile/) node. ## Operations - [**Convert to CSV**](#convert-to-csv) - [**Convert to HTML**](#convert-to-html) - [**Convert to ICS**](#convert-to-ics) - [**Convert to JSON**](#convert-to-json) - [**Convert to ODS**](#convert-to-ods) - [**Convert to RTF**](#convert-to-rtf) - [**Convert to Text File**](#convert-to-text-file) - [**Convert to XLS**](#convert-to-xls) - [**Convert to XLSX**](#convert-to-xlsx) - [**Move Base64 String to File**](#move-base64-string-to-file) Node parameters and options depend on the operation you select. ### Convert to CSV Configure the node for this operation with the **Put Output File in Field** parameter. Enter the name of the field in the output data to contain the file. #### Convert to CSV options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - If the first row of the file contains header names, turn on the **Header Row** option. 
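As an illustration of the **Convert to CSV** operation above, here is a minimal sketch of flattening JSON items into CSV text with a header row. The helper is hypothetical (the node's own implementation also handles quoting, encoding, and binary output):

```javascript
// Hypothetical sketch: flatten JSON items into CSV text with a header row.
function itemsToCsv(items) {
  if (items.length === 0) return "";
  const headers = Object.keys(items[0]);
  const rows = items.map((item) =>
    headers.map((h) => String(item[h] ?? "")).join(",")
  );
  return [headers.join(","), ...rows].join("\n");
}

const csv = itemsToCsv([
  { name: "Stefan", language: "de" },
  { name: "Jim", language: "en" },
]);
console.log(csv);
// name,language
// Stefan,de
// Jim,en
```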
### Convert to HTML Configure the node for this operation with the **Put Output File in Field** parameter. Enter the name of the field in the output data to contain the file. #### Convert to HTML options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - If the first row of the file contains header names, turn on the **Header Row** option. ### Convert to ICS - **Put Output File in Field**. Enter the name of the field in the output data to contain the file. - **Event Title**: Enter the title for the event. - **Start**: Enter the date and time the event will start. All-day events ignore the time. - **End**: Enter the date and time the event will end. All-day events ignore the time. If unset, the node uses the start date. - **All Day**: Select whether the event is an all day event (turned on) or not (turned off). #### Convert to ICS options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **Attendees**: Use this option to add attendees to the event. For each attendee, add: - **Name** - **Email** - **RSVP**: Select whether the attendee needs to confirm attendance (turned on) or doesn't (turned off). - **Busy Status**: Use this option to set the busy status for Microsoft applications like Outlook. Choose from: - **Busy** - **Tentative** - **Calendar Name**: For Apple and Microsoft calendars, enter the [calendar name](https://learn.microsoft.com/en-us/openspecs/exchange_server_protocols/ms-oxcical/1da58449-b97e-46bd-b018-a1ce576f3e6d) for the event. - **Description**: Enter an event description. - **Geolocation**: Enter the **Latitude** and **Longitude** for the event's location. - **Location**: Enter the event's intended venue/location. - **Recurrence Rule**: Enter a rule to define the repeat pattern of the event (RRULE). 
Generate rules using the [iCalendar.org RRULE Tool](https://icalendar.org/rrule-tool.html). - **Organizer**: Enter the organizer's **Name** and **Email**. - **Sequence**: If you're sending an update for an event with the same universally unique ID (UID), enter the revision sequence number. - **Status**: Set the status of the event. Choose from: - **Confirmed** - **Cancelled** - **Tentative** - **UID**: Enter a universally unique ID (UID) for the event. The UID should be globally unique. The node automatically generates a UID if you don't enter one. - **URL**: Enter a URL associated with the event. - **Use Workflow Timezone**: Whether to use UTC time zone (turned off) or the workflow's timezone (turned on). Set the workflow's timezone in the [Workflow Settings](../../../../workflows/settings/). ### Convert to JSON Choose the best output **Mode** for your needs from these options: - **All Items to One File**: Send all input items to a single file. - **Each Item to Separate File**: Create a file for every input item. #### Convert to JSON options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **Format**: Choose whether to format the JSON for easier reading (turned on) or not (turned off). - **Encoding**: Choose the character set to use to encode the data. The default is **utf8**. ### Convert to ODS Configure the node for this operation with the **Put Output File in Field** parameter. Enter the name of the field in the output data to contain the file. #### Convert to ODS options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **Compression**: Choose whether to compress and reduce the file's output size. - **Header Row**: Turn on if the first row of the file contains header names. - **Sheet Name**: Enter the Sheet Name to create in the spreadsheet. 
### Convert to RTF Configure the node for this operation with the **Put Output File in Field** parameter. Enter the name of the field in the output data to contain the file. #### Convert to RTF options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - If the first row of the file contains header names, turn on the **Header Row** option. ### Convert to Text File Enter the name of the **Text Input Field** that contains a string to convert to a file. Use dot-notation for deep fields, for example `level1.level2.currentKey`. #### Convert to Text File options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **Encoding**: Choose the character set to use to encode the data. The default is **utf8**. ### Convert to XLS Configure the node for this operation with the **Put Output File in Field** parameter. Enter the name of the field in the output data to contain the file. #### Convert to XLS options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **Header Row**: Turn on if the first row of the file contains header names. - **Sheet Name**: Enter the Sheet Name to create in the spreadsheet. ### Convert to XLSX Configure the node for this operation with the **Put Output File in Field** parameter. Enter the name of the field in the output data to contain the file. #### Convert to XLSX options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **Compression**: Choose whether to compress and reduce the file's output size. - **Header Row**: Turn on if the first row of the file contains header names. - **Sheet Name**: Enter the Sheet Name to create in the spreadsheet.
### Move Base64 String to File Enter the name of the **Base64 Input Field** that contains the Base64 string to convert to a file. Use dot-notation for deep fields, for example `level1.level2.currentKey`. #### Move Base64 String to File options You can also configure this operation with these **Options**: - **File Name**: Enter the file name for the generated output file. - **MIME Type**: Enter the MIME type of the output file. Refer to [Common MIME types](https://developer.mozilla.org/en-US/docs/Web/HTTP/Basics_of_HTTP/MIME_types/Common_types) for a list of common MIME types and the file extensions they relate to. ## Templates and examples **Automated Web Scraping: email a CSV, save to Google Sheets & Microsoft Excel** by Mihai Farcas [View template details](https://n8n.io/workflows/2275-automated-web-scraping-email-a-csv-save-to-google-sheets-and-microsoft-excel/) **🤖 Telegram Messaging Agent for Text/Audio/Images** by Joseph LePage [View template details](https://n8n.io/workflows/2751-telegram-messaging-agent-for-textaudioimages/) **Ultimate Scraper Workflow for n8n** by Pablo [View template details](https://n8n.io/workflows/2431-ultimate-scraper-workflow-for-n8n/) [Browse Convert to File integration templates](https://n8n.io/integrations/convert-to-file/), or [search all templates](https://n8n.io/workflows/) # Crypto Use the Crypto node to encrypt data in workflows. ## Actions - [**Generate** a random string](#generate-parameters) - [**Hash** a text or file](#hash-parameters) in a specified format - [**Hmac** a text or file](#hmac-parameters) in a specified format - [**Sign** a string](#sign-parameters) using a private key ## Node parameters This node can be used as an AI tool This node can be used to enhance the capabilities of an AI agent. When used in this way, many parameters can be set automatically, or with information directed by AI - find out more in the [AI tool parameters documentation](../../../../advanced-ai/examples/using-the-fromai-function/). 
Node parameters depend on the action you select. ### Generate parameters - **Property Name**: Enter the name of the property to write the random string to. - **Type**: Select the encoding type to use to generate the string. Choose from: - **ASCII** - **BASE64** - **HEX** - **UUID** ### Hash parameters - **Type**: Select the hash type to use. Choose from: - **MD5** - **SHA256** - **SHA3-256** - **SHA3-384** - **SHA3-512** - **SHA384** - **SHA512** - **Binary File**: Turn this parameter on if the data you want to hash is from a binary file. - **Value**: If you turn off **Binary File**, enter the value you want to hash. - **Binary Property Name**: If you turn on **Binary File**, enter the name of the binary property that contains the data you want to hash. - **Property Name**: Enter the name of the property you want to write the hash to. - **Encoding**: Select the encoding type to use. Choose from: - **BASE64** - **HEX** ### Hmac parameters - **Binary File**: Turn this parameter on if the data you want to encrypt is from a binary file. - **Value**: If you turn off **Binary File**, enter the value you want to encrypt. - **Binary Property Name**: If you turn on **Binary File**, enter the name of the binary property that contains the data you want to encrypt. - **Type**: Select the hash type to use. Choose from: - **MD5** - **SHA256** - **SHA3-256** - **SHA3-384** - **SHA3-512** - **SHA384** - **SHA512** - **Property Name**: Enter the name of the property you want to write the hash to. - **Secret**: Enter the secret or secret key used to generate the HMAC. - **Encoding**: Select the encoding type to use. Choose from: - **BASE64** - **HEX** ### Sign parameters - **Value**: Enter the value you want to sign. - **Property Name**: Enter the name of the property you want to write the signed value to. - **Algorithm Name or ID**: Choose an algorithm name from the list or specify an ID using an [expression](../../../../data/expressions/).
- **Encoding**: Select the encoding type to use. Choose from: - **BASE64** - **HEX** - **Private Key**: Enter a private key to use when signing the string. ## Templates and examples **Conversational Interviews with AI Agents and n8n Forms** by Jimleuk [View template details](https://n8n.io/workflows/2566-conversational-interviews-with-ai-agents-and-n8n-forms/) **Analyze Crypto Markets with the AI-Powered CoinMarketCap Data Analyst** by Don Jayamaha Jr [View template details](https://n8n.io/workflows/3425-analyze-crypto-markets-with-the-ai-powered-coinmarketcap-data-analyst/) **Send a ChatGPT email reply and save responses to Google Sheets** by n8n Team [View template details](https://n8n.io/workflows/1898-send-a-chatgpt-email-reply-and-save-responses-to-google-sheets/) [Browse Crypto integration templates](https://n8n.io/integrations/crypto/), or [search all templates](https://n8n.io/workflows/) # Date & Time The Date & Time node manipulates date and time data and converts it to different formats. Timezone settings The node relies on the timezone setting. n8n uses either: 1. The workflow timezone, if set. Refer to [Workflow settings](../../../../workflows/settings/) for more information. 1. The n8n instance timezone, if the workflow timezone isn't set. The default is `America/New_York` for self-hosted instances. n8n Cloud tries to detect the instance owner's timezone when they sign up, falling back to GMT as the default. Self-hosted users can change the instance setting using [Environment variables](../../../../hosting/configuration/environment-variables/timezone-localization/). Cloud admins can change the instance timezone in the [Admin dashboard](../../../../manage-cloud/set-cloud-timezone/). Date and time in other nodes You can work with date and time in the Code node, and in expressions in any node. n8n supports Luxon to help work with date and time in JavaScript.
Refer to [Date and time with Luxon](../../../../data/specific-data-types/luxon/) for more information. ## Operations - **Add to a Date**: Add a specified amount of time to a date. - **Extract Part of a Date**: Extract part of a date, such as the year, month, or day. - **Format a Date**: Transform a date's format to a new format using preset options or a custom expression. - **Get Current Date**: Get the current date and choose whether to include the current time or not. Useful for triggering other flows and conditional logic. - **Get Time Between Dates**: Calculate the amount of time in specific units between two dates. - **Round a Date**: Round a date up or down to the nearest unit of your choice, such as month, day, or hour. - **Subtract From a Date**: Subtract a specified amount of time from a date. Refer to the sections below for parameters and options specific to each operation. ## Add to a Date Configure the node for this operation using these parameters: - **Date to Add To**: Enter the date you want to change. - **Time Unit to Add**: Select the time unit for the **Duration** parameter. - **Duration**: Enter the number of time units to add to the date. - **Output Field Name**: Enter the name of the field to output the new date to. ### Add to a Date options This operation has one option: **Include Input Fields**. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the **Output Field Name** and its contents are output. ## Extract Part of a Date Configure the node for this operation using these parameters: - **Date**: Enter the date you want to extract part of. - **Part**: Select the part of the date you want to extract. Choose from: - **Year** - **Month** - **Week** - **Day** - **Hour** - **Minute** - **Second** - **Output Field Name**: Enter the name of the field to output the extracted date part to. ### Extract Part of a Date options This operation has one option: **Include Input Fields**.
If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the **Output Field Name** and its contents are output. ## Format a Date Configure the node for this operation using these parameters: - **Date**: Enter the date you want to format. - **Format**: Select the format you want to change the date to. Choose from: - **Custom Format**: Enter your own custom format using Luxon's [special tokens](https://moment.github.io/luxon/#/formatting?id=table-of-tokens). Tokens are case-sensitive. - **MM/DD/YYYY**: For `4 September 1986`, this formats the date as `09/04/1986`. - **YYYY/MM/DD**: For `4 September 1986`, this formats the date as `1986/09/04`. - **MMMM DD YYYY**: For `4 September 1986`, this formats the date as `September 04 1986`. - **MM-DD-YYYY**: For `4 September 1986`, this formats the date as `09-04-1986`. - **YYYY-MM-DD**: For `4 September 1986`, this formats the date as `1986-09-04`. - **Output Field Name**: Enter the name of the field to output the formatted date to. ### Format a Date options This operation includes these options: - **Include Input Fields**: If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the **Output Field Name** and its contents are output. - **From Date Format**: If the node isn't recognizing the **Date** format correctly, enter the format for that **Date** here so the node can process it properly. Use Luxon's [special tokens](https://moment.github.io/luxon/#/formatting?id=table-of-tokens) to enter the format. Tokens are case-sensitive. - **Use Workflow Timezone**: Whether to use the input's time zone (turned off) or the workflow's timezone (turned on). ## Get Current Date Configure the node for this operation using these parameters: - **Include Current Time**: Choose whether to include the current time (turned on) or to set the time to midnight (turned off).
- **Output Field Name**: Enter the name of the field to output the current date to. ### Get Current Date options This operation includes these options: - **Include Input Fields**: If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the **Output Field Name** and its contents are output. - **Timezone**: Set the timezone to use. If left blank, the node uses the n8n instance's timezone. +00:00 timezone Use `GMT` for +00:00 timezone. ## Get Time Between Dates Configure the node for this operation using these parameters: - **Start Date**: Enter the earlier date you want to compare. - **End Date**: Enter the later date you want to compare. - **Units**: Select the units you want to calculate the time between. You can include multiple units. Choose from: - **Year** - **Month** - **Week** - **Day** - **Hour** - **Minute** - **Second** - **Millisecond** - **Output Field Name**: Enter the name of the field to output the calculated time between to. ### Get Time Between Dates options The Get Time Between Dates operation includes the **Include Input Fields** option as well as an **Output as ISO String** option. If you leave this option off, each unit you selected will return its own time difference calculation, for example:

```
timeDifference
  years: 1
  months: 3
  days: 13
```

If you turn on the **Output as ISO String** option, the node formats the output as a single ISO duration string, for example: `P1Y3M13D`. ISO duration format displays a duration as `P<n>Y<n>M<n>DT<n>H<n>M<n>S`. `<n>` is the number for the unit after it. - P = period (duration). It begins all ISO duration strings. - Y = years - M = months - W = weeks - D = days - T = delineator between dates and times, used to avoid confusion between months and minutes - H = hours - M = minutes - S = seconds Milliseconds don't get their own unit, but instead are decimal seconds. For example, 2.1 milliseconds is `0.0021S`.
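The mapping from unit values to an ISO duration string can be sketched as follows. The helper is hypothetical (the node uses Luxon internally); it follows the `P<n>Y<n>M<n>DT<n>H<n>M<n>S` pattern described above:

```javascript
// Hypothetical sketch: build an ISO 8601 duration string like "P1Y3M13D"
// from unit values. Date units go before "T", time units after it.
function toIsoDuration({ years = 0, months = 0, days = 0, hours = 0, minutes = 0, seconds = 0 }) {
  let date = "";
  if (years) date += `${years}Y`;
  if (months) date += `${months}M`;
  if (days) date += `${days}D`;
  let time = "";
  if (hours) time += `${hours}H`;
  if (minutes) time += `${minutes}M`;
  if (seconds) time += `${seconds}S`;
  return "P" + date + (time ? "T" + time : "");
}

console.log(toIsoDuration({ years: 1, months: 3, days: 13 })); // "P1Y3M13D"
console.log(toIsoDuration({ minutes: 90 })); // "PT90M"
```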
## Round a Date Configure the node for this operation using these parameters: - **Date**: Enter the date you'd like to round. - **Mode**: Choose whether to **Round Down** or **Round Up**. - **To Nearest**: Select the unit you'd like to round to. Choose from: - **Year** - **Month** - **Week** - **Day** - **Hour** - **Minute** - **Second** - **Output Field Name**: Enter the name of the field to output the rounded date to. ### Round a Date options This operation has one option: **Include Input Fields**. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the **Output Field Name** and its contents are output. ## Subtract From a Date Configure the node for this operation using these parameters: - **Date to Subtract From**: Enter the date you'd like to subtract from. - **Time Unit to Subtract**: Select the unit for the **Duration** amount you want to subtract. - **Duration**: Enter the amount of the time units you want to subtract from the **Date to Subtract From**. - **Output Field Name**: Enter the name of the field to output the new date to. ### Subtract From a Date options This operation has one option: **Include Input Fields**. If you'd like to include all of the input fields in the output, turn this option on. If turned off, only the **Output Field Name** and its contents are output.
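A plain-JavaScript sketch of the kind of arithmetic **Add to a Date** and **Subtract From a Date** perform. The node itself uses Luxon and respects workflow timezones; this hypothetical helper is only illustrative:

```javascript
// Hypothetical sketch: subtract a number of days from an ISO date string.
// Uses UTC throughout to avoid local-timezone surprises.
function subtractDays(isoDate, days) {
  const d = new Date(isoDate);
  d.setUTCDate(d.getUTCDate() - days);
  return d.toISOString().slice(0, 10); // YYYY-MM-DD
}

console.log(subtractDays("1986-09-04", 7)); // "1986-08-28"
```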
## Templates and examples **Working with dates and times** by Jonathan [View template details](https://n8n.io/workflows/1744-working-with-dates-and-times/) **Create an RSS feed based on a website's content** by Tom [View template details](https://n8n.io/workflows/1418-create-an-rss-feed-based-on-a-websites-content/) **Customer Support WhatsApp Bot with Google Docs Knowledge Base and Gemini AI** by Tharwat Mohamed [View template details](https://n8n.io/workflows/4966-customer-support-whatsapp-bot-with-google-docs-knowledge-base-and-gemini-ai/) [Browse Date & Time integration templates](https://n8n.io/integrations/date-and-time/), or [search all templates](https://n8n.io/workflows/) ## Related resources The Date & Time node uses [Luxon](https://moment.github.io/luxon). You can also use Luxon in the [Code](../../../../code/code-node/) node and [expressions](../../../../data/expressions/). Refer to [Date and time with Luxon](../../../../data/specific-data-types/luxon/) for more information. ### Supported date formats n8n supports all date formats [supported by Luxon](https://moment.github.io/luxon/#/formatting?id=table-of-tokens). Tokens are case-sensitive. # Debug Helper Use the Debug Helper node to trigger different error types or generate random datasets to help test n8n workflows. ## Operations Define the operation by selecting the **Category**: - **Do Nothing**: Don't do anything. - [**Throw Error**](#throw-error): Throw an error with the specified type and message. - [**Out Of Memory**](#out-of-memory): Generate a specific memory size to simulate being out of memory. - [**Generate Random Data**](#generate-random-data): Generate some random data in a selected format. ## Node parameters The node parameters depend on the **Category** selected. The **Do Nothing** Category has no other parameters. ### Throw Error - **Error Type**: Select the type of error to throw. 
Choose from: - **NodeApiError** - **NodeOperationError** - **Error** - **Error Message**: Enter the error message to throw. ### Out Of Memory The Out of Memory Category adds one parameter, the **Memory Size to Generate**. Enter the approximate amount of memory to generate. ### Generate Random Data - **Data Type**: Choose the type of random data you'd like to generate. Options include: - **Address** - **Coordinates** - **Credit Card** - **Email** - **IPv4** - **IPv6** - **MAC** - **Nanoids**: If you select this data type, you'll also need to enter: - **Nanoid Alphabet**: The alphabet the generator will use to generate the nanoids. - **Nanoid Length**: The length of each nanoid. - **URL** - **User Data** - **UUID** - **Version** - **Seed**: If you'd like to generate the data using a specific seed, enter it here. This ensures the data gets generated consistently. If you'd rather use random data generation, leave this field empty. - **Number of Items to Generate**: Enter the number of random items you'd like to generate. - **Output as Single Array**: Whether to generate the data as a single array (turned on) or multiple items (turned off). ## Templates and examples **Build an MCP Server with Google Calendar and Custom Functions** by Solomon [View template details](https://n8n.io/workflows/3514-build-an-mcp-server-with-google-calendar-and-custom-functions/) **Test Webhooks in n8n Without Changing WEBHOOK_URL (PostBin & BambooHR Example)** by Ludwig [View template details](https://n8n.io/workflows/2869-test-webhooks-in-n8n-without-changing-webhookurl-postbin-and-bamboohr-example/) **Extract Domain and verify email syntax on the go** by Zacharia Kimotho [View template details](https://n8n.io/workflows/2239-extract-domain-and-verify-email-syntax-on-the-go/) [Browse Debug Helper integration templates](https://n8n.io/integrations/debughelper/), or [search all templates](https://n8n.io/workflows/) # Edit Image Use the Edit Image node to manipulate and edit images. 
Dependencies 1. If you aren't running n8n on Docker, you need to install [GraphicsMagick](http://www.graphicsmagick.org/README.html). 1. You need to use a node such as the [Read/Write Files from Disk](../n8n-nodes-base.readwritefile/) node or the [HTTP Request](../n8n-nodes-base.httprequest/) node to pass the image file as a data property to the Edit Image node. ## Operations - Add a **Blur** to the image to reduce sharpness - Add a **Border** to the image - **Composite** an image on top of another image - **Create** a new image - **Crop** the image - **Draw** on an image - **Get Information** about the image - **Multi Step** perform multiple operations on the image - **Resize**: Change the size of the image - **Rotate** the image - **Shear** image along the X or Y axis - Add **Text** to the image - Make a color in the image **Transparent** ## Node parameters The parameters for this node depend on the operation you select. ### Blur parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Blur**: Enter a number to set how strong the blur should be, between 0 and 1000. Higher numbers create blurrier images. - **Sigma**: Enter a number to set the sigma for the blur, between 0 and 1000. Higher numbers create blurrier images. Refer to [Node options](#node-options) for optional configuration options. ### Border parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Border Width**: Enter the width of the border. - **Border Height**: Enter the height of the border. - **Border Color**: Set the color for the border. You can either enter a hex or select the color swatch to open a color picker. Refer to [Node options](#node-options) for optional configuration options. ### Composite parameters - **Property Name**: Enter the name of the binary property that stores the image data. This image is your base image.
- **Composite Image Property**: Enter the name of the binary property that stores the image to composite on top of the **Property Name** image. - **Operator**: Select the composite operator, which determines how the composite works. Options include: - **Add** - **Atop** - **Bumpmap** - **Copy** - **Copy Black** - **Copy Blue** - **Copy Cyan** - **Copy Green** - **Copy Magenta** - **Copy Opacity** - **Copy Red** - **Copy Yellow** - **Difference** - **Divide** - **In** - **Minus** - **Multiply** - **Out** - **Over** - **Plus** - **Subtract** - **Xor** - **Position X**: Enter the x axis position (horizontal) of the composite image. - **Position Y**: Enter the y axis position (vertical) of the composite image. Refer to [Node options](#node-options) for optional configuration options. ### Create parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Background Color**: Set the background color for the image. You can either enter a hex or select the color swatch to open a color picker. - **Image Width**: Enter the width of the image. - **Image Height**: Enter the height of the image. Refer to [Node options](#node-options) for optional configuration options. ### Crop parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Width**: Enter the width you'd like to crop to. - **Height**: Enter the height you'd like to crop to. - **Position X**: Enter the x axis position (horizontal) to start the crop from. - **Position Y**: Enter the y axis position (vertical) to start the crop from. Refer to [Node options](#node-options) for optional configuration options. ### Draw parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Primitive**: Select the primitive shape to draw. Choose from: - **Circle** - **Line** - **Rectangle** - **Color**: Set the color for the primitive. You can either enter a hex or select the color swatch to open a color picker.
- **Start Position X**: Enter the x axis position (horizontal) to start drawing from. - **Start Position Y**: Enter the y axis position (vertical) to start drawing from. - **End Position X**: Enter the x axis position (horizontal) to stop drawing at. - **End Position Y**: Enter the y axis position (vertical) to stop drawing at. - **Corner Radius**: Enter a number to set the corner radius. Adding a corner radius will round the corners of the drawn primitive. Refer to [Node options](#node-options) for optional configuration options. ### Get Information parameters For this operation, you only need to add the **Property Name** of the binary property that stores the image data. Refer to [Node options](#node-options) for optional configuration options. ### Multi Step parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Operations**: Add the operations you want the multi step operation to perform. You can use any of the other operations. Refer to [Node options](#node-options) for optional configuration options. ### Resize parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Width**: Enter the new width you'd like for the image. - **Height**: Enter the new height you'd like for the image. - **Option**: Select how you'd like to resize the image. Choose from: - **Ignore Aspect Ratio**: Ignore the aspect ratio and resize to the exact height and width you've entered. - **Maximum Area**: The height and width you've entered are the maximum area/size for the image. The image maintains its aspect ratio and won't be larger than the height and/or width you've entered. - **Minimum Area**: The height and width you've entered are the minimum area/size for the image. The image maintains its aspect ratio and won't be smaller than the height and/or width you've entered. - **Only if Larger**: Resize the image only if it's larger than the width and height you entered.
The image maintains its aspect ratio. - **Only if Smaller**: Resize the image only if it's smaller than the width and height you entered. The image maintains its aspect ratio. - **Percent**: Resize the image using the width and height as percentages of the original image. Refer to [Node options](#node-options) for optional configuration options. ### Rotate parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Rotate**: Enter the number of degrees to rotate the image, from -360 to 360. - **Background Color**: Set the background color for the image. You can either enter a hex or select the color swatch to open a color picker. This color fills the empty background that appears when you rotate the image by an amount that isn't a multiple of 90 degrees. If the **Rotate** value is a multiple of 90 degrees, the background color isn't used. Refer to [Node options](#node-options) for optional configuration options. ### Shear parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Degrees X**: Enter the number of degrees to shear from the x axis. - **Degrees Y**: Enter the number of degrees to shear from the y axis. Refer to [Node options](#node-options) for optional configuration options. ### Text parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Text**: Enter the text you'd like to write on the image. - **Font Size**: Select the font size for the text. - **Font Color**: Set the font color. You can either enter a hex or select the color swatch to open a color picker. - **Position X**: Enter the x axis position (horizontal) to begin the text at. - **Position Y**: Enter the y axis position (vertical) to begin the text at. - **Max Line Length**: Enter the maximum number of characters in a line before adding a line break. Refer to [Node options](#node-options) for optional configuration options.
### Transparent parameters - **Property Name**: Enter the name of the binary property that stores the image data. - **Color**: Set the color to make transparent. You can either enter a hex or select the color swatch to open a color picker. Refer to [Node options](#node-options) for optional configuration options. ## Node options - **File Name**: Enter the filename of the output file. - **Format**: Enter the image format of the output file. Choose from: - **bmp** - **gif** - **jpeg** - **png** - **tiff** - **WebP** The **Text** operation also includes the option for **Font Name or ID**. Select the text font from the dropdown or specify an ID using an [expression](../../../../data/expressions/). ## Templates and examples **Flux AI Image Generator** by Max Tkacz [View template details](https://n8n.io/workflows/2417-flux-ai-image-generator/) **Generate Instagram Content from Top Trends with AI Image Generation** by mustafa kendigüzel [View template details](https://n8n.io/workflows/2803-generate-instagram-content-from-top-trends-with-ai-image-generation/) **AI-Powered WhatsApp Chatbot 🤖📲 for Text, Voice, Images & PDFs with memory 🧠** by Davide Boizza [View template details](https://n8n.io/workflows/3586-ai-powered-whatsapp-chatbot-for-text-voice-images-and-pdfs-with-memory/) [Browse Edit Image integration templates](https://n8n.io/integrations/edit-image/), or [search all templates](https://n8n.io/workflows/) # Email Trigger (IMAP) node Use the IMAP Email node to receive emails using an IMAP email server. This node is a trigger node. Credential You can find authentication information for this node [here](../../credentials/imap/). ## Operations - Receive an email ## Node parameters Configure the node using the following parameters. ### Credential to connect with Select or create an [IMAP credential](../../credentials/imap/) to connect to the server with. ### Mailbox Name Enter the mailbox from which you want to receive emails. 
### Action Choose whether you want an email marked as read when n8n receives it. **None** will leave it marked unread. **Mark as Read** will mark it as read. ### Download Attachments This toggle controls whether to download email attachments (turned on) or not (turned off). Only set this if necessary, since it increases processing. ### Format Choose the format to return the message in from these options: - **RAW**: This format returns the full email message data with body content in the raw field as a base64url encoded string. It doesn't use the payload field. - **Resolved**: This format returns the full email with all data resolved and attachments saved as binary data. - **Simple**: This format returns the full email. Don't use it if you want to gather inline attachments. ## Node options You can further configure the node using these **Options**. ### Custom Email Rules Enter custom email fetching rules to determine which emails the node fetches. Refer to [node-imap's search function criteria](https://github.com/mscdex/node-imap) for more information. ### Force Reconnect Every Minutes Set an interval in minutes to force reconnection. 
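The **Custom Email Rules** field accepts search criteria in node-imap's array syntax. A hedged example, where the sender address and date are placeholders rather than defaults:

```javascript
// Hypothetical value for the Custom Email Rules field, using node-imap's
// search criteria syntax: unread messages from a given sender, received
// since a given date. Adjust the values to your mailbox.
const customRules = [
  'UNSEEN',
  ['FROM', 'alerts@example.com'],
  ['SINCE', 'May 1, 2025'],
];

// The field expects the criteria as a JSON string:
console.log(JSON.stringify(customRules));
```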
## Templates and examples **Effortless Email Management with AI-Powered Summarization & Review** by Davide Boizza [View template details](https://n8n.io/workflows/2862-effortless-email-management-with-ai-powered-summarization-and-review/) **AI Email Analyzer: Process PDFs, Images & Save to Google Drive + Telegram** by Davide Boizza [View template details](https://n8n.io/workflows/3169-ai-email-analyzer-process-pdfs-images-and-save-to-google-drive-telegram/) **A Very Simple "Human in the Loop" Email Response System Using AI and IMAP** by Davide Boizza [View template details](https://n8n.io/workflows/2907-a-very-simple-human-in-the-loop-email-response-system-using-ai-and-imap/) [Browse Email Trigger (IMAP) integration templates](https://n8n.io/integrations/email-trigger-imap/), or [search all templates](https://n8n.io/workflows/) # Error Trigger node You can use the Error Trigger node to create error workflows. When another linked workflow fails, this node gets details about the failed workflow and the errors, and runs the error workflow. ## Usage 1. Create a new workflow, with the Error Trigger as the first node. 1. Give the workflow a name, for example `Error Handler`. 1. Select **Save**. 1. In the workflow where you want to use this error workflow: 1. Select **Options** > **Settings**. 1. In **Error workflow**, select the workflow you just created. For example, if you used the name Error Handler, select **Error Handler**. 1. Select **Save**. Now, when this workflow errors, the related error workflow runs. Note the following: - If a workflow uses the Error Trigger node, you don't have to activate the workflow. - If a workflow contains the Error Trigger node, by default, the workflow uses itself as the error workflow. - You can't test error workflows when running workflows manually. The Error Trigger only runs when an automatic workflow errors.
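In the error workflow itself, a Code node can turn the incoming error payload into a notification message. A minimal sketch, assuming the default error data shape documented in the Error data section (the `buildAlert` helper is illustrative, not part of n8n):

```javascript
// Illustrative helper (not n8n code): build a one-line alert from the
// fields of the default error payload.
function buildAlert(item) {
  const { execution = {}, workflow = {} } = item;
  const reason = execution.error ? execution.error.message : 'unknown error';
  const link = execution.url ? ` (${execution.url})` : '';
  return `Workflow "${workflow.name}" failed at node "${execution.lastNodeExecuted}": ${reason}${link}`;
}

// Example payload shaped like the documented error data:
const alert = buildAlert({
  execution: {
    error: { message: 'Example Error Message' },
    lastNodeExecuted: 'Node With Error',
    url: 'https://n8n.example.com/execution/231',
  },
  workflow: { id: '1', name: 'Example Workflow' },
});
```

You could then pass `alert` on to a Slack, email, or other messaging node.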
## Templates and examples [Browse Error Trigger integration templates](https://n8n.io/integrations/error-trigger/), or [search all templates](https://n8n.io/workflows/) ## Related resources You can use the [Stop And Error](../n8n-nodes-base.stopanderror/) node to send custom messages to the Error Trigger. Read more about [Error workflows](../../../../flow-logic/error-handling/) in n8n workflows. ## Error data The default error data received by the Error Trigger is: ``` [ { "execution": { "id": "231", "url": "https://n8n.example.com/execution/231", "retryOf": "34", "error": { "message": "Example Error Message", "stack": "Stacktrace" }, "lastNodeExecuted": "Node With Error", "mode": "manual" }, "workflow": { "id": "1", "name": "Example Workflow" } } ] ``` All information is always present, except: - `execution.id`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute. - `execution.url`: requires the execution to be saved in the database. Not present if the error is in the trigger node of the main workflow, as the workflow doesn't execute. - `execution.retryOf`: only present when the execution is a retry of a failed execution. If the error is caused by the trigger node of the main workflow, rather than a later stage, the data sent to the error workflow is different. There's less information in `execution{}` and more in `trigger{}`: ``` { "trigger": { "error": { "context": {}, "name": "WorkflowActivationError", "cause": { "message": "", "stack": "" }, "timestamp": 1654609328787, "message": "", "node": { . . . } }, "mode": "trigger" }, "workflow": { "id": "", "name": "" } } ``` # Evaluation node The Evaluation node performs various operations related to [evaluations](../../../../advanced-ai/evaluations/overview/) to validate your AI workflow reliability. 
Use the Evaluation node in these scenarios: - To conditionally execute logic based on whether the workflow is under evaluation - To write evaluation outcomes back to a data table or Google Sheet dataset - To log scoring metrics for your evaluation performance to n8n's evaluations tab Credentials for Google Sheets The Evaluation node's **Set Outputs** operation records evaluation results to data tables or Google Sheets. To use Google Sheets as a recording location, configure a [Google Sheets credential](../../credentials/google/). ## Operations The Evaluation node offers the following operations: - [**Set Outputs**](#set-outputs): Write the results of an evaluation back to a data table or Google Sheet dataset. - [**Set Metrics**](#set-metrics): Record metrics scoring the evaluation performance to n8n's **Evaluations** tab. - [**Check If Evaluating**](#check-if-evaluating): Branches the workflow execution logic depending on whether the current execution is an evaluation. The parameters and options available depend on the operation you select. ### Set Outputs The **Set Outputs** operation has the following parameters: - **Source:** Select the location to which you want to output the evaluation results. Default value is **Data table**. Source settings differ depending on **Source** selection. - When **Source** is **Data table**: - **Data table:** Select a data table by name or ID. - When **Source** is **Google Sheets**: - **Credential to connect with**: Create or select an existing [Google Sheets credential](../../credentials/google/). - **Document Containing Dataset**: Choose the spreadsheet document you want to write the evaluation results to. Usually this is the same document you select in the [Evaluation Trigger](../n8n-nodes-base.evaluationtrigger/) node. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet Containing Dataset**: Choose the sheet you want to write the evaluation results to. Usually this is the same sheet you select in the [Evaluation Trigger](../n8n-nodes-base.evaluationtrigger/) node. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. You define the items to write to the data table or Google Sheet in the **Outputs** section. For each output, you set the following: - **Name**: The Google Sheet column name to write the evaluation results to. - **Value**: The value to write to the Google Sheet. ### Set Metrics The **Set Metrics** operation includes a **Metrics to Return** section where you define the metrics to record and track for your evaluations. You can see the metric results in your workflow's **Evaluations** tab. For each metric you wish to record, you set the following details: - **Name**: The name to use for the metric. - **Value**: The numeric value to record. Once you run your evaluation, you can drag and drop values from previous nodes here. Metric values must be numeric. ### Check If Evaluating The **Check If Evaluating** operation doesn't have any parameters. This operation provides branching output connectors so that you can conditionally execute logic depending on whether the current execution is an evaluation or not.
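Because **Set Metrics** only accepts numeric values, you typically compute a score first, for example in a Code node. A hedged sketch of a simple exact-match score (the function name and normalization are illustrative):

```javascript
// Illustrative scoring helper: compare the dataset's expected answer
// with the actual workflow output, normalizing case and whitespace.
// Returns 1 for a match and 0 otherwise -- a numeric value suitable
// for a Metrics to Return entry.
function exactMatch(expected, actual) {
  const norm = (s) => String(s).trim().toLowerCase();
  return norm(expected) === norm(actual) ? 1 : 0;
}
```

For example, `exactMatch('Paris ', 'paris')` returns `1`, while `exactMatch('Paris', 'Lyon')` returns `0`.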
## Templates and examples **AI Automated HR Workflow for CV Analysis and Candidate Evaluation** by Davide Boizza [View template details](https://n8n.io/workflows/2860-ai-automated-hr-workflow-for-cv-analysis-and-candidate-evaluation/) **HR Job Posting and Evaluation with AI** by Francis Njenga [View template details](https://n8n.io/workflows/2773-hr-job-posting-and-evaluation-with-ai/) **AI-Powered Candidate Screening and Evaluation Workflow using OpenAI and Airtable** by Billy Christi [View template details](https://n8n.io/workflows/4481-ai-powered-candidate-screening-and-evaluation-workflow-using-openai-and-airtable/) [Browse Evaluation integration templates](https://n8n.io/integrations/evaluation/), or [search all templates](https://n8n.io/workflows/) ## Related resources To learn more about n8n evaluations, check out the [evaluations documentation](../../../../advanced-ai/evaluations/overview/). n8n provides a trigger node for evaluations. You can find the node docs [here](../n8n-nodes-base.evaluationtrigger/). For common questions or issues and suggested solutions, refer to the evaluations [tips and common issues](../../../../advanced-ai/evaluations/tips-and-common-issues/) page. # Evaluation Trigger node Use the Evaluation Trigger node when setting up [evaluations](../../../../advanced-ai/evaluations/overview/) to validate your AI workflow reliability. During evaluation, the Evaluation Trigger node reads your evaluation dataset from a data table or Google Sheet, sending the items through the workflow one at a time, in sequence. On this page, you'll find the Evaluation Trigger node parameters and options. Credentials for Google Sheets The Evaluation Trigger node uses data tables or Google Sheets to store the test dataset. To use Google Sheets as a dataset source, configure a [Google Sheets credential](../../credentials/google/). ## Parameters - **Source:** Select the location containing your evaluation dataset. Default value is **Data table**.
Source settings differ depending on **Source** selection. - When **Source** is **Data table**: - **Data table:** Select a data table by name or ID. - **Limit Rows**: Whether to limit the number of rows in the data table to process. Default state is `off`. - **Max Rows to Process**: When **Limit Rows** is enabled, the maximum number of rows to read and process during the evaluation. Default value is 10. - **Filter Rows:** Whether to filter rows in the data table to process. Default state is `off`. - When **Source** is **Google Sheets**: - **Credential to connect with**: Create or select an existing [Google Sheets credential](../../credentials/google/). - **Document Containing Dataset**: Choose the spreadsheet document with the sheet containing your test dataset. - Select **From list** to choose the spreadsheet title from the dropdown list, **By URL** to enter the URL of the spreadsheet, or **By ID** to enter the `spreadsheetId`. - You can find the `spreadsheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/spreadsheetId/edit#gid=0`. - **Sheet Containing Dataset**: Choose the sheet containing your test dataset. - Select **From list** to choose the sheet title from the dropdown list, **By URL** to enter the URL of the sheet, **By ID** to enter the `sheetId`, or **By Name** to enter the sheet title. - You can find the `sheetId` in a Google Sheets URL: `https://docs.google.com/spreadsheets/d/aBC-123_xYz/edit#gid=sheetId`. - **Limit Rows**: Whether to limit the number of rows in the sheet to process. - **Max Rows to Process**: When **Limit Rows** is enabled, the maximum number of rows to read and process during the evaluation. - **Filters:** Filter the evaluation dataset based on column values. - **Column**: Choose a sheet column you want to filter by. Select **From list** to choose the column name from the dropdown list, or **By ID** to specify an ID using an [expression](../../../../data/expressions/).
- **Value**: The column value you want to filter by. The evaluation will only process rows with the given value for the selected column. ## Templates and examples **AI Automated HR Workflow for CV Analysis and Candidate Evaluation** by Davide Boizza [View template details](https://n8n.io/workflows/2860-ai-automated-hr-workflow-for-cv-analysis-and-candidate-evaluation/) **HR Job Posting and Evaluation with AI** by Francis Njenga [View template details](https://n8n.io/workflows/2773-hr-job-posting-and-evaluation-with-ai/) **AI-Powered Candidate Screening and Evaluation Workflow using OpenAI and Airtable** by Billy Christi [View template details](https://n8n.io/workflows/4481-ai-powered-candidate-screening-and-evaluation-workflow-using-openai-and-airtable/) [Browse Evaluation Trigger integration templates](https://n8n.io/integrations/evaluation-trigger/), or [search all templates](https://n8n.io/workflows/) ## Related resources To learn more about n8n evaluations, check out the [evaluations documentation](../../../../advanced-ai/evaluations/overview/). n8n provides an app node for evaluations. You can find the node docs [here](../n8n-nodes-base.evaluation/). For common questions or issues and suggested solutions, refer to the evaluations [tips and common issues](../../../../advanced-ai/evaluations/tips-and-common-issues/) page. # Execute Sub-workflow Use the Execute Sub-workflow node to run a different workflow on the host machine that runs n8n. ## Node parameters ### Source Select where the node should get the sub-workflow's information from: - **Database**: Select this option to load the workflow from the database by ID. You must also enter either: - **From list**: Select the workflow from a list of workflows available to your account. - **Workflow ID**: Enter the ID for the workflow. The URL of the workflow contains the ID after `/workflow/`.
For example, if the URL of a workflow is `https://my-n8n-acct.app.n8n.cloud/workflow/abCDE1f6gHiJKL7`, the **Workflow ID** is `abCDE1f6gHiJKL7`. - **Local File**: Select this option to load the workflow from a locally saved JSON file. You must also enter: - **Workflow Path**: Enter the path to the local JSON workflow file you want the node to execute. - **Parameter**: Select this option to load the workflow from a parameter. You must also enter: - **Workflow JSON**: Enter the JSON code you want the node to execute. - **URL**: Select this option to load the workflow from a URL. You must also enter: - **Workflow URL**: Enter the URL you want to load the workflow from. ### Workflow Inputs If you select a sub-workflow using the **database** and **From list** options, the sub-workflow's input items will automatically display, ready for you to fill in or map values. You can optionally remove requested input items, in which case the sub-workflow receives `null` as the item's value. You can also enable **Attempt to convert types** to try to automatically convert data to the sub-workflow item's requested type. Input items won't appear if the sub-workflow's Workflow Input Trigger node uses the "Accept all data" input data mode. ### Mode Use this parameter to control the execution mode for the node. Choose from these options: - **Run once with all items**: Pass all input items into a single execution of the node. - **Run once for each item**: Execute the node once for each input item in turn. ## Node options This node includes one option: **Wait for Sub-Workflow Completion**. This lets you control whether the main workflow should wait for the sub-workflow's completion before moving on to the next step (turned on) or whether the main workflow should continue without waiting (turned off). 
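As noted under **Source**, the workflow ID is the trailing segment of the editor URL. A small sketch of extracting it (the helper function is illustrative):

```javascript
// Pull the workflow ID out of an n8n editor URL. The ID is whatever
// follows `/workflow/`.
function workflowIdFromUrl(url) {
  const match = url.match(/\/workflow\/([A-Za-z0-9]+)/);
  return match ? match[1] : null;
}

console.log(workflowIdFromUrl(
  'https://my-n8n-acct.app.n8n.cloud/workflow/abCDE1f6gHiJKL7',
)); // → abCDE1f6gHiJKL7
```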
## Templates and examples **Scrape business emails from Google Maps without the use of any third party APIs** by Akram Kadri [View template details](https://n8n.io/workflows/2567-scrape-business-emails-from-google-maps-without-the-use-of-any-third-party-apis/) **Back Up Your n8n Workflows To Github** by Jonathan [View template details](https://n8n.io/workflows/1534-back-up-your-n8n-workflows-to-github/) **Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3** by Jimleuk [View template details](https://n8n.io/workflows/2878-host-your-own-ai-deep-research-agent-with-n8n-apify-and-openai-o3/) [Browse Execute Sub-workflow integration templates](https://n8n.io/integrations/execute-workflow/), or [search all templates](https://n8n.io/workflows/) ## Set up and use a sub-workflow This section walks through setting up both the parent workflow and sub-workflow. ### Create the sub-workflow 1. Create a new workflow. Create sub-workflows from existing workflows You can optionally create a sub-workflow directly from an existing parent workflow using the [Execute Sub-workflow](./) node. In the node, select the **Database** and **From list** options and select **Create a sub-workflow** in the list. You can also extract selected nodes directly using [Sub-workflow conversion](../../../../workflows/subworkflow-conversion/) in the context menu. 1. **Optional**: configure which workflows can call the sub-workflow: 1. Select the **Options** menu > **Settings**. n8n opens the **Workflow settings** modal. 1. Change the **This workflow can be called by** setting. Refer to [Workflow settings](../../../../workflows/settings/) for more information on configuring your workflows. 1. Add the **Execute Sub-workflow** trigger node (if you are searching under trigger nodes, this is also titled **When Executed by Another Workflow**). 1. 
Set the **Input data mode** to choose how you will define the sub-workflow's input data: - **Define using fields below**: Choose this mode to define individual input names and data types that the calling workflow needs to provide. The [Execute Sub-workflow node](./) or [Call n8n Workflow Tool node](../../cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) in the calling workflow will automatically pull in the fields defined here. - **Define using JSON example**: Choose this mode to provide an example JSON object that demonstrates the expected input items and their types. - **Accept all data**: Choose this mode to accept all data unconditionally. The sub-workflow won't define any required input items. This sub-workflow must handle any input inconsistencies or missing values. 1. Add other nodes as needed to build your sub-workflow functionality. 1. Save the sub-workflow. Sub-workflow mustn't contain errors If there are errors in the sub-workflow, the parent workflow can't trigger it. Load data into sub-workflow before building This requires the ability to [load data from previous executions](../../../../workflows/executions/debug/), which is available on n8n Cloud and registered Community plans. If you want to load data into your sub-workflow to use while building it: 1. Create the sub-workflow and add the **Execute Sub-workflow Trigger**. 1. Set the node's **Input data mode** to **Accept all data** or define the input items using fields or JSON if they're already known. 1. In the sub-workflow [settings](../../../../workflows/settings/), set **Save successful production executions** to **Save**. 1. Skip ahead to setting up the parent workflow, and run it. 1. Follow the steps to [load data from previous executions](../../../../workflows/executions/debug/). 1. Adjust the **Input data mode** to match the input sent by the parent workflow if necessary. 
You can now pin example data in the trigger node, enabling you to work with real data while configuring the rest of the workflow. ### Call the sub-workflow 1. Open the workflow where you want to call the sub-workflow. 1. Add the **Execute Sub-workflow** node. 1. In the **Execute Sub-workflow** node, set the sub-workflow you want to call. You can choose to call the workflow by ID, load a workflow from a local file, add workflow JSON as a parameter in the node, or target a workflow by URL. Find your workflow ID Your sub-workflow's ID is the alphanumeric string at the end of its URL. 1. Fill in the required input items defined by the sub-workflow. 1. Save your workflow. When your workflow executes, it will send data to the sub-workflow, and run it. You can follow the execution flow from the parent workflow to the sub-workflow by opening the Execute Sub-workflow node and selecting the **View sub-execution** link. Likewise, the sub-workflow's execution contains a link back to the parent workflow's execution to navigate in the other direction. ## How data passes between workflows As an example, imagine you have an Execute Sub-workflow node in **Workflow A**. The Execute Sub-workflow node calls another workflow called **Workflow B**: 1. The Execute Sub-workflow node passes the data to the Execute Sub-workflow Trigger node (titled "When Executed by Another Workflow" in the canvas) of **Workflow B**. 1. The last node of **Workflow B** sends the data back to the Execute Sub-workflow node in **Workflow A**. # Execute Sub-workflow Trigger node Use this node to start a workflow in response to another workflow. It should be the first node in the workflow. n8n allows you to call workflows from other workflows. This is useful if you want to: - Reuse a workflow: for example, you could have multiple workflows pulling and processing data from different sources, then have all those workflows call a single workflow that generates a report. - Break large workflows into smaller components.
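The data this trigger exchanges with the calling workflow uses n8n's standard item structure: an array of items, each with a `json` key. A hedged illustration, with invented field names:

```javascript
// Illustrative only: n8n passes data between workflows as an array of
// items, each with a `json` key (and optionally `binary`). The calling
// workflow's Execute Sub-workflow node sends items like these to the
// sub-workflow's trigger, and the sub-workflow's last node returns
// items of the same shape.
const itemsSentToSubWorkflow = [
  { json: { customerId: 1, email: 'a@example.com' } },
  { json: { customerId: 2, email: 'b@example.com' } },
];

// A sub-workflow Code node might transform and return them like this:
const itemsReturned = itemsSentToSubWorkflow.map((item) => ({
  json: { ...item.json, processed: true },
}));
```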
## Usage This node runs in response to a call from the [Execute Sub-workflow](../n8n-nodes-base.executeworkflow/) or [Call n8n Workflow Tool](../../cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) nodes. ### Create the sub-workflow 1. Create a new workflow. Create sub-workflows from existing workflows You can optionally create a sub-workflow directly from an existing parent workflow using the [Execute Sub-workflow](../n8n-nodes-base.executeworkflow/) node. In the node, select the **Database** and **From list** options and select **Create a sub-workflow** in the list. You can also extract selected nodes directly using [Sub-workflow conversion](../../../../workflows/subworkflow-conversion/) in the context menu. 1. **Optional**: configure which workflows can call the sub-workflow: 1. Select the **Options** menu > **Settings**. n8n opens the **Workflow settings** modal. 1. Change the **This workflow can be called by** setting. Refer to [Workflow settings](../../../../workflows/settings/) for more information on configuring your workflows. 1. Add the **Execute Sub-workflow** trigger node (if you are searching under trigger nodes, this is also titled **When Executed by Another Workflow**). 1. Set the **Input data mode** to choose how you will define the sub-workflow's input data: - **Define using fields below**: Choose this mode to define individual input names and data types that the calling workflow needs to provide. The [Execute Sub-workflow node](../n8n-nodes-base.executeworkflow/) or [Call n8n Workflow Tool node](../../cluster-nodes/sub-nodes/n8n-nodes-langchain.toolworkflow/) in the calling workflow will automatically pull in the fields defined here. - **Define using JSON example**: Choose this mode to provide an example JSON object that demonstrates the expected input items and their types. - **Accept all data**: Choose this mode to accept all data unconditionally. The sub-workflow won't define any required input items. 
This sub-workflow must handle any input inconsistencies or missing values. 1. Add other nodes as needed to build your sub-workflow functionality. 1. Save the sub-workflow. Sub-workflow mustn't contain errors If there are errors in the sub-workflow, the parent workflow can't trigger it. Load data into sub-workflow before building This requires the ability to [load data from previous executions](../../../../workflows/executions/debug/), which is available on n8n Cloud and registered Community plans. If you want to load data into your sub-workflow to use while building it: 1. Create the sub-workflow and add the **Execute Sub-workflow Trigger**. 1. Set the node's **Input data mode** to **Accept all data** or define the input items using fields or JSON if they're already known. 1. In the sub-workflow [settings](../../../../workflows/settings/), set **Save successful production executions** to **Save**. 1. Skip ahead to setting up the parent workflow, and run it. 1. Follow the steps to [load data from previous executions](../../../../workflows/executions/debug/). 1. Adjust the **Input data mode** to match the input sent by the parent workflow if necessary. You can now pin example data in the trigger node, enabling you to work with real data while configuring the rest of the workflow. ### Call the sub-workflow 1. Open the workflow where you want to call the sub-workflow. 1. Add the **Execute Sub-workflow** node. 1. In the **Execute Sub-workflow** node, set the sub-workflow you want to call. You can choose to call the workflow by ID, load a workflow from a local file, add workflow JSON as a parameter in the node, or target a workflow by URL. Find your workflow ID Your sub-workflow's ID is the alphanumeric string at the end of its URL. 1. Fill in the required input items defined by the sub-workflow. 1. Save your workflow. When your workflow executes, it will send data to the sub-workflow, and run it. 
You can follow the execution flow from the parent workflow to the sub-workflow by opening the Execute Sub-workflow node and selecting the **View sub-execution** link. Likewise, the sub-workflow's execution contains a link back to the parent workflow's execution so you can navigate in the other direction. ## Templates and examples [Browse Execute Sub-workflow Trigger integration templates](https://n8n.io/integrations/execute-workflow-trigger/), or [search all templates](https://n8n.io/workflows/) ## How data passes between workflows As an example, imagine you have an Execute Sub-workflow node in **Workflow A**. The Execute Sub-workflow node calls another workflow called **Workflow B**: 1. The Execute Sub-workflow node passes the data to the Execute Sub-workflow Trigger node (titled **When Executed by Another Workflow** on the canvas) of **Workflow B**. 1. The last node of **Workflow B** sends the data back to the Execute Sub-workflow node in **Workflow A**. # Execution Data Use this node to save metadata for workflow executions. You can then search by this data in the **Executions** list. You can retrieve custom execution data during workflow execution using the Code node. Refer to [Custom executions data](../../../../workflows/executions/custom-executions-data/) for more information. Feature availability Custom executions data is available on: - Cloud: Pro, Enterprise - Self-Hosted: Enterprise, registered Community ## Operations - Save Execution Data for Search ## Data to Save Add a **Saved Field** for each key/value pair of metadata you'd like to save. ## Limitations The Execution Data node has the following restrictions when storing execution metadata: - `key`: limited to 50 characters - `value`: limited to 512 characters If either the `key` or `value` exceeds these limits, n8n truncates it to the maximum length and outputs a log entry.
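The truncation behavior described in the limitations above can be sketched in plain JavaScript. `truncateMetadata` is an illustrative helper modeling the documented limits, not an n8n API:

```javascript
// Illustrative model of the Execution Data limits described above;
// truncateMetadata is NOT an n8n API, just a sketch of the behavior.
const KEY_LIMIT = 50;    // max characters in a metadata key
const VALUE_LIMIT = 512; // max characters in a metadata value

function truncateMetadata(key, value) {
  return {
    key: String(key).slice(0, KEY_LIMIT),
    value: String(value).slice(0, VALUE_LIMIT),
  };
}

const entry = truncateMetadata('order-status', 'x'.repeat(600));
console.log(entry.key);          // → "order-status" (12 chars, unchanged)
console.log(entry.value.length); // → 512
```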
## Templates and examples **Host Your Own AI Deep Research Agent with n8n, Apify and OpenAI o3** by Jimleuk [View template details](https://n8n.io/workflows/2878-host-your-own-ai-deep-research-agent-with-n8n-apify-and-openai-o3/) **API Schema Extractor** by Polina Medvedieva [View template details](https://n8n.io/workflows/2658-api-schema-extractor/) **Realtime Notion Todoist 2-way sync with Redis** by Mario [View template details](https://n8n.io/workflows/2772-realtime-notion-todoist-2-way-sync-with-redis/) [Browse Execution Data integration templates](https://n8n.io/integrations/execution-data/), or [search all templates](https://n8n.io/workflows/) # Extract From File A common pattern in n8n workflows is to receive a file, either from an [HTTP Request node](../n8n-nodes-base.httprequest/) (for files you are fetching from a website), a [Webhook node](../n8n-nodes-base.webhook/) (for files that are sent to your workflow from elsewhere), or from a local source. Data obtained in this way is often in a binary format, for example a spreadsheet or PDF. The Extract From File node extracts data from a binary format file and converts it to JSON, which can then be easily manipulated by the rest of your workflow. To convert JSON back into a binary file type, see the [Convert to File](../n8n-nodes-base.converttofile/) node. ## Operations Use the **Operations** drop-down to select the format of the source file to extract data from. - **Extract From CSV**: The "Comma Separated Values" file type is commonly used for tabulated data. - **Extract From HTML**: Extract fields from standard web page HTML format files. - **Extract From JSON**: Extract JSON data from a binary file. - **Extract From ICS**: Extract fields from iCalendar format files. - **Extract From ODS**: Extract fields from ODS spreadsheet files. - **Extract From PDF**: Extract fields from Portable Document Format files. - **Extract From RTF**: Extract fields from Rich Text Format files.
- **Extract From Text File**: Extract fields from a standard text file format. - **Extract From XLS**: Extract fields from a Microsoft Excel file (older format). - **Extract From XLSX**: Extract fields from a Microsoft Excel file. - **Move File to Base64 String**: Convert binary data to a text-friendly [base64](https://datatracker.ietf.org/doc/html/rfc4648#section-4) format. ## Example workflow In this example, a Webhook node is used to trigger the workflow. When a CSV file is sent to the webhook address, the file data is output and received by the Extract From File node. [View workflow file](/_workflows/integrations/builtin/core-nodes/n8n-nodes-base.extractfromfile/webhook-example.json) Set to the 'Extract from CSV' operation, the node then outputs the data as a series of JSON 'row' objects:

```
{
  "row": {
    "0": "apple",
    "1": "1",
    "2": "2",
    "3": "3"
  }
}
...
```

Receiving files with a webhook In the Webhook node, select **Add Options**, choose **Raw body**, and enable the setting so that the node outputs the binary file the Extract From File node expects. ## Node parameters ### Input Binary Field Enter the name of the field from the node input data that contains the binary file. The default is 'data'. ### Destination Output Field Enter the name of the field in the node output that will contain the extracted data.
This parameter is only available for these operations: - Extract From JSON - Extract From ICS - Extract From Text File - Move File to Base64 String ## Templates and examples **Building Your First WhatsApp Chatbot** by Jimleuk [View template details](https://n8n.io/workflows/2465-building-your-first-whatsapp-chatbot/) **Extract text from a PDF file** by amudhan [View template details](https://n8n.io/workflows/585-extract-text-from-a-pdf-file/) **Scrape and store data from multiple website pages** by Miquel Colomer [View template details](https://n8n.io/workflows/1073-scrape-and-store-data-from-multiple-website-pages/) [Browse Extract From File integration templates](https://n8n.io/integrations/extract-from-file/), or [search all templates](https://n8n.io/workflows/) # Filter Filter items based on a condition. If an item meets the condition, the Filter node passes it on to the next node in its output. If an item doesn't meet the condition, the Filter node omits it from its output. ## Node parameters Create comparison **Conditions** to define your filter. - Use the data type dropdown to select the data type and comparison operation type for your condition. For example, to filter for dates after a particular date, select **Date & Time > is after**. - The fields and values to enter into the condition change based on the data type and comparison you select. Refer to [Available data type comparisons](#available-data-type-comparisons) for a full list of all comparisons by data type. Select **Add condition** to create more conditions. ### Combining conditions You can choose to keep items: - When they meet all conditions: Create two or more conditions and select **AND** in the dropdown between them. - When they meet any of the conditions: Create two or more conditions and select **OR** in the dropdown between them. You can't create a mix of AND and OR rules.
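As a rough illustration of how combining conditions works (a sketch, not n8n's actual implementation), each condition acts as a predicate over an item, and **AND**/**OR** determines how the results combine. The conditions and items below are invented for the example:

```javascript
// Each condition is a predicate over an item; AND keeps an item only
// when every condition holds, OR when at least one holds.
const conditions = [
  (item) => item.status === 'active', // String: is equal to
  (item) => item.score > 10,          // Number: is greater than
];

function keepItem(item, combinator) {
  return combinator === 'AND'
    ? conditions.every((test) => test(item))
    : conditions.some((test) => test(item));
}

const items = [
  { status: 'active', score: 15 },
  { status: 'active', score: 5 },
  { status: 'inactive', score: 20 },
];

console.log(items.filter((item) => keepItem(item, 'AND')).length); // → 1
console.log(items.filter((item) => keepItem(item, 'OR')).length);  // → 3
```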
## Node options - **Ignore Case**: Whether to ignore letter case (turned on) or be case sensitive (turned off). - **Less Strict Type Validation**: Whether you want n8n to attempt to convert value types based on the operator you choose (turned on) or not (turned off). Turn this on when facing a "wrong type:" error in your node. ## Templates and examples **Scrape business emails from Google Maps without the use of any third party APIs** by Akram Kadri [View template details](https://n8n.io/workflows/2567-scrape-business-emails-from-google-maps-without-the-use-of-any-third-party-apis/) **Build Your First AI Data Analyst Chatbot** by Solomon [View template details](https://n8n.io/workflows/3050-build-your-first-ai-data-analyst-chatbot/) **Generate Leads with Google Maps** by Alex Kim [View template details](https://n8n.io/workflows/2605-generate-leads-with-google-maps/) [Browse Filter integration templates](https://n8n.io/integrations/filter/), or [search all templates](https://n8n.io/workflows/) ## Available data type comparisons ### String String data type supports these comparisons: - exists - does not exist - is empty - is not empty - is equal to - is not equal to - contains - does not contain - starts with - does not start with - ends with - does not end with - matches regex - does not match regex ### Number Number data type supports these comparisons: - exists - does not exist - is empty - is not empty - is equal to - is not equal to - is greater than - is less than - is greater than or equal to - is less than or equal to ### Date & Time Date & Time data type supports these comparisons: - exists - does not exist - is empty - is not empty - is equal to - is not equal to - is after - is before - is after or equal to - is before or equal to ### Boolean Boolean data type supports these comparisons: - exists - does not exist - is empty - is not empty - is true - is false - is equal to - is not equal to ### Array Array data type supports these comparisons: - exists - 
does not exist - is empty - is not empty - contains - does not contain - length equal to - length not equal to - length greater than - length less than - length greater than or equal to - length less than or equal to ### Object Object data type supports these comparisons: - exists - does not exist - is empty - is not empty # n8n Form node Use the n8n Form node to create user-facing forms with multiple steps. You can add other nodes with custom logic between the form steps to process user input. You must start the workflow with the [n8n Form Trigger node](../n8n-nodes-base.formtrigger/). [View workflow file](/_workflows/integrations/builtin/core-nodes/n8n-nodes-base.form/mutually-exclusive-branching.json) ## Setting up the node ### Set default selections with query parameters You can set the initial values for fields by using [query parameters](https://en.wikipedia.org/wiki/Query_string#Web_forms) with the initial URL provided by the [n8n Form Trigger node](../n8n-nodes-base.formtrigger/). Every page in the form receives the same query parameters sent to the n8n Form Trigger node URL. Only for production Query parameters are only available when using the form in production mode. n8n won't populate field values from query parameters in testing mode. When using query parameters, [percent-encode](https://en.wikipedia.org/wiki/Percent-encoding) any field names or values that use special characters. This ensures n8n uses the initial values for the given fields. You can use tools like [URL Encode/Decode](https://www.url-encode-decode.com/) to format your query parameters using percent-encoding.
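You can also build prefilled URLs programmatically, for example in a Code node. A minimal sketch using JavaScript's built-in `encodeURIComponent`; the base URL and field names are illustrative:

```javascript
// Build a prefilled form URL with percent-encoded query parameters.
// Base URL and field names are illustrative, not a real form.
const baseUrl = 'https://my-account.n8n.cloud/form/my-form';
const fields = { name: 'Jane Doe', email: 'jane.doe@example.com' };

const query = Object.entries(fields)
  .map(([key, value]) => `${encodeURIComponent(key)}=${encodeURIComponent(value)}`)
  .join('&');

console.log(`${baseUrl}?${query}`);
// → https://my-account.n8n.cloud/form/my-form?name=Jane%20Doe&email=jane.doe%40example.com
```

Note that `encodeURIComponent` encodes a space as `%20`; `URLSearchParams` would instead emit `+`, which form query parsers also generally accept.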
As an example, imagine you have a form with the following properties: - Production URL: `https://my-account.n8n.cloud/form/my-form` - Fields: - `name`: `Jane Doe` - `email`: `jane.doe@example.com` With query parameters and percent-encoding, you could use the following URL to set initial field values to the data above:

```
https://my-account.n8n.cloud/form/my-form?email=jane.doe%40example.com&name=Jane%20Doe
```

Here, percent-encoding replaces the at-symbol (`@`) with the string `%40` and the space character with the string `%20`. This sets the initial value for these fields no matter which page of the form they appear on. ### Displaying custom HTML You can display custom HTML on your form by adding a **Custom HTML** field to your form. This provides an **HTML** box where you can insert arbitrary HTML code to display as part of the form page. You can use the HTML field to enrich your form page by including things like links, images, videos, and more. n8n will render the content with the rest of the form fields in the normal document flow. Because custom HTML content is read-only, these fields aren't included in the form output data by default. To include the raw HTML content in the node output, provide a name for the data using the **Element Name** field. The HTML field doesn't support `