I have set up a web server on a VPS to host several personal websites and a Nextcloud instance. The web server relies on Docker Compose and nginx (I described the implementation in detail in a previous blog post). It thus uses a compose file to define containers, and nginx configuration files to define virtual hosts that manage traffic going to each website. I had created one repo for each website, storing its source code, and one repo for the web server, storing the compose file and nginx configs.
One thing bothered me with this setup: there was no clean separation between the web server configuration and the website configurations. The web server repo contained lots of information that was actually specific to each website (nginx configs, bind-mounts to the website contents). So when working on a website, if I needed to touch its bind-mounts or nginx config, I also had to update the web server repo.
I wanted to improve this by moving as many configuration elements as possible within each website repo and leaving only common configuration bits in the web server repo. To achieve this, I intended to add secondary nginx containers for each website, and have the main nginx container only redirect traffic to the secondary containers.
Implementation
For clarity, I am going to describe a minimal reproducible example where I’m simply adding an nginx layer for one plain static website (imaginatively called mywebsite1).
Of course this approach makes more sense for multi-site setups.
Initial setup
The file structure initially looked something like this:
├── webserver
│   ├── compose.yaml
│   └── nginx
│       └── mywebsite1.conf
└── mywebsite1
    └── index.html
The compose file for the main web server looked like this:
services:
  nginx:
    image: nginx:1.28-alpine
    ports:
      - 80:80
      - 443:443
    restart: always
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - /home/myuser/mywebsite1:/usr/share/nginx/mywebsite1:ro
      - ./certbot/www/:/var/www/certbot/:ro
      - ./certbot/conf/:/etc/nginx/ssl/:ro
  certbot:
    image: certbot/certbot:v4.1.1
    volumes:
      - ./certbot/www/:/var/www/certbot/:rw
      - ./certbot/conf/:/etc/letsencrypt/:rw
    depends_on:
      - nginx
And the nginx config looked like this:
server {
    listen 80;
    listen [::]:80;
    server_tokens off;
    server_name mywebsite1.com www.mywebsite1.com;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    # redirect everything else to HTTPS (placing the return inside a
    # location block keeps the ACME challenge location reachable over HTTP;
    # a server-level return would run before location matching)
    location / {
        return 301 https://mywebsite1.com$request_uri;
    }
}
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    http2 on;
    server_tokens off;
    server_name www.mywebsite1.com;
    ssl_certificate /etc/nginx/ssl/live/mywebsite1.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/mywebsite1.com/privkey.pem;
    return 301 https://mywebsite1.com$request_uri;
}
server {
    listen 443 ssl default_server;
    listen [::]:443 ssl default_server;
    http2 on;
    server_tokens off;
    server_name mywebsite1.com;
    ssl_certificate /etc/nginx/ssl/live/mywebsite1.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/mywebsite1.com/privkey.pem;
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
    }
    location / {
        root /usr/share/nginx/mywebsite1;
    }
}

Adding a website-specific nginx container
To add an nginx container that would serve this specific website, and have the main web server redirect traffic to the new container, I applied the following:
1/ Define a Docker network in the main compose file to allow communication between containers.
services:
  nginx:
    networks:
      - webserver-net
    ...
networks:
  webserver-net:
    driver: bridge
2/ Remove the bind-mount to the website contents in the main compose file.
3/ In the main nginx file’s final location / block, replace the root directive with a proxy_pass directive, to forward traffic to another address (the new container’s address).
- The resolver directive is also required, to tell nginx to rely on the Docker DNS resolver to resolve the secondary container's address.
- I also specified proxy headers to forward the complete request context (IP, port, scheme and such; I copied these directives off of the Nextcloud-AIO nginx config example and won't describe them here).
...
location / {
    resolver 127.0.0.11 ipv6=off;
    proxy_pass http://mywebsite1_nginx:80$request_uri;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Scheme $scheme;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header Host $host;
    proxy_set_header Early-Data $ssl_early_data;
}
4/ Add a new compose file for the secondary (website-specific) nginx container.
- The secondary compose file includes the Docker network created in step 1, and the bind-mount to the website contents.
- This new structure makes it possible to use relative paths, because the secondary compose file lives right next to the website contents.
services:
  nginx:
    image: nginx:1.28-alpine
    restart: always
    container_name: mywebsite1_nginx
    volumes:
      - ./nginx:/etc/nginx/conf.d:ro
      - ./mywebsite1:/usr/share/nginx/mywebsite1:ro
    networks:
      - webserver-net
networks:
  webserver-net:
    external: true
    name: webserver_webserver-net
5/ Add a website-specific config for the secondary nginx container.
server {
    listen 80;
    listen [::]:80;
    server_tokens off;
    server_name mywebsite1.com;
    location / {
        root /usr/share/nginx/mywebsite1;
    }
}
After restarting the main nginx container (with a complete docker compose down/up cycle) and starting the secondary nginx container, the stack is up and running, with the website now served by its own nginx instance.
Centralizing common nginx config bits
You might note that the main web server’s nginx config file is still quite long.
But apart from the directives that specify the domain, it contains only configuration bits that are common to all websites.
So I centralized these common pieces to shorten the config and avoid duplication (remember that this whole approach makes sense for multi-site setups, so in real-world situations there would be several nginx configs in the main web server; deeply creative minds could imagine calling them mywebsite1, mywebsite2, etc.).
I moved all directives that didn’t mention the website domain to separate files, and used include to re-use them in each website config.
I couldn’t store these files in the conf.d dir, or nginx would have mistaken them for actual website configs, so I moved them to a separate includes dir.
The file structure ended up like so:
├── webserver
│   ├── compose.yaml
│   └── nginx
│       ├── conf.d
│       │   ├── mywebsite1.conf
│       │   ├── mywebsite2.conf
│       │   └── ...
│       └── includes
│           ├── certbot
│           ├── docker-resolver
│           ├── listen-443
│           ├── listen-80
│           ├── proxy-settings
│           └── ssl-settings
└── mywebsite1
    ├── compose.yaml
    ├── nginx
    │   └── mywebsite1.conf
    └── mywebsite1
        └── index.html
Here is, for example, the includes/certbot file (I won’t describe all the other includes files; as explained above, they contain all directives from the initial config that don’t specify the domain name).
location /.well-known/acme-challenge/ {
    root /var/www/certbot;
}
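To give a sense of how small these files are: judging from the directives shown earlier, the includes/docker-resolver file would presumably hold just the resolver line (this exact content is an assumption, not copied from the actual repo):

```nginx
# tell nginx to use the embedded Docker DNS resolver (assumed file content)
resolver 127.0.0.11 ipv6=off;
```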
Each website’s config in the main web server now references the includes files like so:
server {
    include /etc/nginx/includes/listen-80;
    include /etc/nginx/includes/certbot;
    server_name mywebsite1.com www.mywebsite1.com;
    # redirect everything except ACME challenges to HTTPS
    location / {
        return 301 https://mywebsite1.com$request_uri;
    }
}
server {
    include /etc/nginx/includes/listen-443;
    include /etc/nginx/includes/ssl-settings;
    server_name www.mywebsite1.com;
    ssl_certificate /etc/nginx/ssl/live/mywebsite1.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/mywebsite1.com/privkey.pem;
    return 301 https://mywebsite1.com$request_uri;
}
server {
    include /etc/nginx/includes/listen-443;
    include /etc/nginx/includes/ssl-settings;
    include /etc/nginx/includes/certbot;
    include /etc/nginx/includes/docker-resolver;
    include /etc/nginx/includes/proxy-settings;
    server_name mywebsite1.com;
    ssl_certificate /etc/nginx/ssl/live/mywebsite1.com/fullchain.pem;
    ssl_certificate_key /etc/nginx/ssl/live/mywebsite1.com/privkey.pem;
    location / {
        proxy_pass http://mywebsite1_nginx:80$request_uri;
    }
}

And the new includes dir needs to be mounted on the main nginx container in the compose file:
services:
  nginx:
    ...
    volumes:
      - ./nginx/includes:/etc/nginx/includes:ro
      - ...
This new structure makes it clearer that the main nginx container is dedicated to common configuration. Each website configuration within this container is less verbose, and contains (almost) only directives that specify the website domain name.
Conclusion
With this new setup, I moved most of the website-specific configuration to the websites repos.
The main web server repo now contains common nginx configuration elements, centralized in includes files.
If I need to work on one of these elements (for example the listen directives, or SSL handling) the changes apply to all websites, so it makes sense to update the main repo.
This main repo still requires configuration files for each website, to redirect traffic to the corresponding secondary nginx container. These files contain directives that specify the website domain name. If I need to add another website to this web server setup, in most cases, the only change required in the web server repo is to copy the boilerplate config and replace the domain name. I could even go as far as excluding these configs from version control (but I haven’t decided to do that).
To be fair, there are cases where updates to the website config still require additional modifications to the main server repo. Some parameters need to be applied all along the reverse-proxy chain. For example, I needed to enable WebSockets for my Nextcloud instance. This wouldn’t have worked if I’d only enabled them in the secondary nginx container, and not in the main nginx container, so I included directives to enable them in both containers. Any tweak to these WebSocket-related directives thus requires updating both the website-specific repo and the main web server repo.
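For context, enabling WebSocket proxying in nginx typically boils down to the standard upgrade headers in the proxying location block; the sketch below shows the generic pattern (which would need to appear in both nginx layers), not my exact directives:

```nginx
# Generic nginx WebSocket proxying pattern (assumed, not copied from my config):
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
```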
In the end, the new setup does add a level of complexity to managing my web server and websites. But in my opinion it provides a cleaner separation and makes maintenance easier. On the plus side, I also learned stuff while toying with this, and I might use the same concept again in another context (containerized architectures, microservices).