n8n Self-Hosting Guide

Note: Self-hosting comes with technical requirements. If you do not have much experience managing servers, consider using n8n Cloud instead. Self-hosting, however, gives you more control and room for customization, so it can be worthwhile if you have the necessary skills.
If you are considering self-hosting, watch the video below and follow along. (The video is in Korean.)

Technical Requirements

1. Server and container management: requires knowledge of server and container setup and configuration.
2. Resource management: requires the ability to manage and scale application resources.
3. Security: requires an understanding of server and application security.
4. n8n configuration: requires knowledge of n8n's own settings.

Get started

Installation

Cloning the repository
Open a terminal, then copy and run the commands below:
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
Running n8n using Docker Compose
For Nvidia GPU users
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile gpu-nvidia up
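Before starting the GPU profile, it can help to confirm that the host driver and Docker's GPU support are working. A minimal sketch, assuming the NVIDIA driver and the NVIDIA Container Toolkit are already installed:

```shell
# Confirm the host GPU driver is working (prints GPU name and driver version)
nvidia-smi

# Confirm Docker has the NVIDIA runtime registered
docker info | grep -i runtime
```

If `nvidia-smi` fails or the NVIDIA runtime is missing from the Docker output, fix the driver or toolkit installation before running the `gpu-nvidia` profile.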
Mac/Apple Silicon Users
If you have a Mac with Apple Silicon (M1 or later), you unfortunately cannot expose the GPU to a Docker instance. In that case, you have two options:
1. Run the starter kit entirely on your CPU, as in the "For all other users" section below.
2. For faster inference, run Ollama natively on your Mac and connect it to the n8n instance.
To run Ollama on your Mac, see the installation instructions on the Ollama homepage, then run the starter kit as follows:
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose up
If you followed the quick start setup below, change your Ollama credentials to use http://host.docker.internal:11434/ as the host.
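To confirm that the containerized n8n can actually reach the native Ollama instance, a quick check like the following may help. This is a sketch, assuming Ollama listens on its default port 11434 and that the n8n image ships a BusyBox `wget`:

```shell
# List Ollama's installed models from inside the n8n container;
# a JSON response means host.docker.internal resolves and Ollama is reachable.
docker compose exec n8n wget -qO- http://host.docker.internal:11434/api/tags
```

If this returns a connection error, check that Ollama is running on the Mac host and that the credential host is set exactly as above.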
For all other users
git clone https://github.com/n8n-io/self-hosted-ai-starter-kit.git
cd self-hosted-ai-starter-kit
docker compose --profile cpu up

⚡️ Quick Start and How to Use

The core of the self-hosted AI starter kit is a pre-configured Docker Compose file with network and storage settings, which minimizes the need for additional installation. After completing the installation steps above, simply follow the steps below to get started.
1. Open http://localhost:5678/ in your browser and set up n8n. You only need to do this once.
2. Select Test Workflow to start running the workflow.
3. If this is your first time running the workflow, you may need to wait for Ollama to finish downloading Llama 3.1. You can check the progress in the Docker console logs.
To open n8n again at any time, visit http://localhost:5678/ in your browser.
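To watch the model download mentioned in the steps above, you can tail the Ollama service's logs. A sketch, assuming the Compose service is named `ollama` (check the service names in the kit's `docker-compose.yml` if this differs):

```shell
# Stream logs from the Ollama service to watch the model pull progress
docker compose logs -f ollama
```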
With your n8n instance you get access to over 400 integrations, plus basic and advanced AI nodes such as AI Agents, Text Classifiers, and Information Extractors. Don't forget to use the Ollama nodes for language models and Qdrant for vector stores to keep everything local.
Note
This starter kit is designed to help you get started with your own self-hosted AI workflow. While it is not fully optimized for production environments, it combines powerful components that work well for proof-of-concept projects. You can customize it to suit your specific needs.

👓 Recommended Resources

n8n is full of useful content to help you get started with AI concepts and nodes quickly. If you run into any issues, head to the support center.

🎥 Video Walkthrough

🛍️ More AI Templates

For more AI workflow ideas, visit the official n8n AI template gallery. Select the Use Workflow button on any workflow to import it automatically into your local n8n instance.

Learning AI Key Concepts

Local AI Templates

Tips and Tricks

Local file access

The self-hosted AI starter kit creates a shared folder (by default in the same directory) and mounts it into the n8n container, so n8n can access files on disk. Inside the n8n container, this folder is located at /data/shared; use this path in nodes that interact with the local file system.
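A quick way to confirm the mount works is to drop a file into the shared folder on the host and read it back from inside the container. A sketch, assuming the default host folder is named `shared` and the stack is running:

```shell
# Write a file on the host side of the mount
echo "hello from the host" > shared/test.txt

# Read it back from inside the n8n container
docker compose exec n8n cat /data/shared/test.txt
```

If the second command prints the file's contents, any n8n node pointed at /data/shared will see the same files.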

Nodes that interact with the local file system include, for example, Read/Write Files from Disk and Execute Command.

📜 License

This project is licensed under the Apache License 2.0. See the license file for details.

💬 Support

Join the conversation on the n8n forum:
Share your work: show off what you've built with n8n and inspire others in the community.
Ask a question: whether you're just starting out or a seasoned pro, the community and team are ready to help with any challenge.
Suggest an idea: have an idea for a feature or improvement? Let us know! We always want to hear what you'd like to see next.