Operator, the command center is built. The foundation is solid, the walls are impenetrable, and the gates are guarded by silent, automated sentries. We established this fortress across four intensive phases:
- Part 1: So You Want a Digital Kingdom? We surveyed the land and secured our name.
- Part 2: Hardening the Rig & Going Dark. We forged our operational identity and vanished from scanners.
- Part 3: Building the Operations Wing. We established a private tunnel and deployed our off-grid console.
- Part 4: Fortifying the Gates with Nginx & Fail2ban. We installed the master gatekeeper and its automated guard.
The fortress is complete, but it is an empty castle. It’s time to install the engine.
In this guide, we will deploy our first operational service: n8n (pronounced “n-eight-n”). Think of n8n
as the digital nervous system for your command center—a powerful workflow automation engine that can connect different applications, APIs, and services. Down the road, we’ll even use it to automate the monitoring of our own server’s log files.
While this step is entirely optional, it serves as the perfect blueprint for deploying any containerized service. If you have another application in mind, you can adapt these principles to launch it. At this point, your server is a secure launchpad, ready for any mission you assign.
Phase 1: Mission Prep – Staging the Territories
Every operation needs a secure place for its data and equipment. We will not store anything inside the temporary containers; instead, we’ll create permanent directories on the host system. This ensures our data persists even if a service is restarted or updated.
To keep our command center organized for future expansion, we’ll create a logical directory structure. Service data will live under /srv/services, and our deployment blueprints (Docker Compose files) will live under /srv/compose-files.
Let’s stake out the territory for n8n
and its required database, PostgreSQL.
# Create directories for persistent service data
sudo mkdir -p /srv/services/n8n
sudo mkdir -p /srv/services/postgres
# Create a directory for our n8n deployment blueprint
sudo mkdir -p /srv/compose-files/n8n
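A quick, optional precaution: the official n8n image runs as the unprivileged node user (typically UID/GID 1000), so a root-owned data directory can trigger permission errors when the container first writes to it. If that happens later, handing the directory to that UID usually resolves it (adjust the IDs if your setup differs).
# Optional: the n8n container typically runs as UID/GID 1000 ("node").
# If n8n reports permission errors on its data directory, give it ownership:
sudo chown -R 1000:1000 /srv/services/n8n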
With our staging areas prepared, we can now draft the mission plan.
Phase 2: The Blueprint – Assembling the Docker Compose File
We won’t install software manually. We’ll provide our containerization platform (Docker) with a single, elegant blueprint that defines everything needed for the mission. This is a Docker Compose file. It tells Docker what services to run, what networks to connect them to, and where to store their data.
But first, for these services to communicate securely without exposing ports to each other, we must establish a private, internal communication channel—a dedicated Docker network. This ensures our database and application can talk to each other without ever touching the outside world.
# Create a dedicated, internal network for our containers
sudo docker network create internal-net
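If you want to confirm the channel exists before building on it, Docker can list and inspect the network:
# Optional sanity check: the new network should appear in the list
sudo docker network ls
# Inspect its details (driver, subnet, attached containers)
sudo docker network inspect internal-net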
Next, navigate to the blueprint directory and create the file.
cd /srv/compose-files/n8n
sudo nano docker-compose.yml
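Before editing, it helps to generate the secrets you are about to paste in. One straightforward way, assuming openssl is available (it is on virtually every modern distro), is:
# Generate a strong database password (32 hex characters)
openssl rand -hex 16
# Generate a separate 32-character encryption key for n8n
openssl rand -hex 16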
Paste the following mission directive into the file. Read the comments carefully—you must replace the placeholder credentials with your own secure, randomly generated secrets.
version: "3.8"

services:
  postgres:
    image: postgres:15
    container_name: postgres
    restart: always
    environment:
      # !!! CHANGE THESE TO YOUR OWN SECURE, RANDOM PASSWORDS !!!
      - POSTGRES_USER=n8n_operator
      - POSTGRES_PASSWORD=YOUR_VERY_SECURE_POSTGRES_PASSWORD_HERE
      - POSTGRES_DB=n8n_db
    volumes:
      # Mounts the persistent data directory we created
      - /srv/services/postgres:/var/lib/postgresql/data
    networks:
      - internal-net

  n8n:
    image: n8nio/n8n:latest
    container_name: n8n
    restart: always
    ports:
      # CRITICAL: Binds the port ONLY to localhost. Nginx will handle public access.
      - "127.0.0.1:5678:5678"
    environment:
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=postgres
      - DB_POSTGRESDB_PORT=5432
      - DB_POSTGRESDB_DATABASE=n8n_db
      - DB_POSTGRESDB_USER=n8n_operator
      # !!! USE THE SAME PASSWORD YOU SET FOR POSTGRES ABOVE !!!
      - DB_POSTGRESDB_PASSWORD=YOUR_VERY_SECURE_POSTGRES_PASSWORD_HERE
      # !!! CREATE YOUR OWN SECURE, 32-CHARACTER RANDOM STRING FOR THIS !!!
      - N8N_ENCRYPTION_KEY=YOUR_SUPER_SECRET_32_CHAR_ENCRYPTION_KEY
      # Set this to the public URL you will use.
      # Replace YOURDOMAIN.COM with the domain you bought in Part 1.
      - WEBHOOK_URL=https://n8n.YOURDOMAIN.com/
    volumes:
      # Mounts the persistent data directory for n8n
      - /srv/services/n8n:/home/node/.n8n
      # Read-only mounts so n8n can later monitor the server's own log files
      - /var/log/nginx:/mnt/nginxlogs:ro
      - /var/log:/mnt/logs:ro
    networks:
      - internal-net
    depends_on:
      - postgres

networks:
  internal-net:
    external: true
This blueprint defines two services that will communicate over the secure internal-net we just created. The only published port, n8n’s 5678, is bound to 127.0.0.1, and Postgres publishes no ports at all, so neither container is reachable from the outside world. Only our gatekeeper, Nginx, will be able to talk to them.
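If you want Docker to double-check the blueprint before launch, Compose can render and validate it; this is optional, but it catches indentation and typo mistakes early:
# Optional: validate the blueprint (prints the resolved configuration or an error)
cd /srv/compose-files/n8n
sudo docker compose config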
Phase 3: The Public Gateway – Nginx Configuration
With the internal engine designed, we must now instruct our gatekeeper (Nginx) on how to direct traffic to it. We need to create a new server configuration that listens for requests to n8n.YOURDOMAIN.com
and proxies them to the internal n8n
service.
Recall that in Part 1, you registered a domain name, and in Part 4, you pointed a wildcard DNS record (*) to your server’s IP. This is where that pays off.
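If you’d like to confirm the record is doing its job, a quick lookup should return your server’s public IP (dig ships with the dnsutils or bind-utils package on most distros):
# Confirm the subdomain resolves to your server's IP
dig +short n8n.YOURDOMAIN.COM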
Create a new Nginx configuration file for your n8n
service.
# Remember to replace 'YOURDOMAIN.COM' with your actual domain
sudo nano /etc/nginx/sites-available/n8n.YOURDOMAIN.COM.conf
Paste in the following configuration. This file contains two parts: a block for HTTP traffic that will be used by Certbot for validation and to redirect users to the secure site, and a skeleton block for HTTPS that Certbot will automatically complete for us.
# Replace n8n.YOURDOMAIN.COM with your actual subdomain and domain
server {
    listen 80;
    server_name n8n.YOURDOMAIN.COM;

    # This location is for Certbot to perform its validation
    location /.well-known/acme-challenge/ {
        root /var/www/html;
    }

    # Redirect all other HTTP traffic to HTTPS
    location / {
        return 301 https://$host$request_uri;
    }
}

server {
    listen 443 ssl http2;
    server_name n8n.YOURDOMAIN.COM;

    # Certbot will automatically populate this section with SSL settings.

    # Proxy all traffic to the internal n8n service
    location / {
        proxy_pass http://127.0.0.1:5678;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
    }
}
Now, enable this new site configuration and test Nginx to make sure the syntax is correct. (If the test complains that no certificate is defined for the new SSL listener, that’s because the HTTPS block is still a skeleton; it will pass once Certbot writes the SSL settings in Phase 4.)
# Enable the site by creating a symlink
sudo ln -s /etc/nginx/sites-available/n8n.YOURDOMAIN.COM.conf /etc/nginx/sites-enabled/
# Test the configuration
sudo nginx -t
# If successful, reload Nginx
sudo systemctl reload nginx
The gateway is now aware of the new route, but the channel is not yet secure.
Phase 4: Execution & Securing the Channel
It’s time. Launch the operation.
Step 1: Launch the Service
From your blueprint directory, give the command to bring the services online.
cd /srv/compose-files/n8n
sudo docker compose up -d
Docker will pull the required images and launch the containers in the background. Your n8n
engine is now running silently within the server.
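You can confirm both containers came up cleanly and watch the engine’s startup output:
# Check that both containers are running
sudo docker compose ps
# Follow the n8n logs (Ctrl+C to stop following)
sudo docker compose logs -f n8n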
Step 2: Fortify the Channel with SSL
The final step is to secure the public channel with a valid SSL certificate from Let’s Encrypt. We’ll use the Certbot agent we installed previously.
# Run Certbot, telling it to configure Nginx for your domain
sudo certbot --nginx -d n8n.YOURDOMAIN.COM
Certbot will communicate with Let’s Encrypt, perform the validation using the port 80 block we created, and retrieve your certificate. It will then automatically edit your Nginx configuration file, filling in all the necessary SSL parameters to fully secure your site. It’s a beautiful piece of automation.
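Certbot’s package normally installs a timer that renews certificates automatically; if you want peace of mind, you can rehearse a renewal without touching your live certificate:
# Optional: simulate a renewal to confirm auto-renewal will work
sudo certbot renew --dry-run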
Step 3: Mission Accomplished
The work is done. Open your web browser and navigate to your secure domain: https://n8n.YOURDOMAIN.com
You should be greeted by the n8n
setup screen. You have successfully deployed a fully functional, containerized, and securely proxied application.
You now hold the blueprint for deploying any service you desire. The command center is no longer just a fortress; it’s a factory.