Nginx node balancers in Linode

Jose I Santa Cruz G
6 min read · Oct 18, 2023

A good friend of mine asked for help configuring the server architecture he needed for an application. The scenario looked quite simple: a web application that connects to a database and requires high availability (HA).

He already had most of the configuration ready, but load balancing was not working.

High availa-what?!


High availability, or HA for short, most of the time refers to an application's uptime. If it's a web application, the moment you put it online you'll start receiving hacker attacks, and that's a FACT. Some of those attacks are meant to gain administrative access to the app or the server; others are meant to bring your application down. But your app can also go down because of bad coding, unhandled errors, server failure, and many other reasons.

If the app goes down, here's where HA comes in handy. Properly configuring an HA strategy will ensure your app is up (or available) most of the time, that is, highly available. For the example in this article, if the web server goes down, or if the application goes down or starts failing, there should be a fallback so the application keeps responding correctly; and by fallback I mean a clone/replica/copy of your server AND the app.

I'll intentionally skip the security guidelines for bulletproofing your server, but if you want a couple of hours of fun, here's the official Linode guide for securing your server. By the way, Linode has tons of very well written documentation and tutorials about server configuration.

What we want vs what we have

Basic idea of the application's architecture

Simplifying everything: if the main server (Server 1) goes down, the fallback server (Server 2) will answer all requests.

My friend followed, step by step ("Oooh baby!"… NKOTB, anyone?), this tutorial from Akamai (Linode was acquired by Akamai in 2022), replacing Apache with Nginx. He also decided to use only 2 servers instead of the 3 used in the tutorial's example.

The problem: after following every step and making the required adjustments for Nginx and for 2 servers instead of 3, the classic test of turning off Server 1 to see if Server 2 would respond did nothing. There was also a "Too many redirects" error.

So what could be happening? Is there something missing?

Luke, check the config files!

"Luke, check the config files" meme

"Classic" some may say, but Forrest Gump was wise when saying "Sh*t happens!", we can skip important steps, just don't skip them on production environments.

Configuration for Server 1

server {
    if ($host = myserver.tld) {
        return 301 https://$host$request_uri;
    } # managed by Certbot

    listen 80;
    listen [::]:80;

    server_name myserver.tld www.myserver.tld;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl http2;
    ssl_certificate /etc/letsencrypt/live/myserver.tld/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/myserver.tld/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf;

    ssl_dhparam /etc/ssl/certs/dhparam.pem;
    ssl_trusted_certificate /etc/ssl/certs/myserver.tld/trusted.pem;

    server_name myserver.tld www.myserver.tld;
    index index.html;

    access_log /var/log/nginx/myserver.access.log;
    error_log /var/log/nginx/myserver.error.log;

    root /opt/www/myserver.tld;

    location / {
        try_files $uri $uri/ /index.html =404;
        error_page 404 =200 /404.html;
    }
}

Note the 2 server blocks: one serves HTTP on port 80 and redirects (HTTP status 301, permanent redirect) to the second server block, which serves HTTPS on port 443.

The HTTPS block includes all the Let's Encrypt certificate configuration.
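You can check the redirect from the command line. Assuming the config above, the response should look roughly like this:

$ curl -I http://myserver.tld
HTTP/1.1 301 Moved Permanently
Server: nginx
Location: https://myserver.tld/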

Configuration for Server 2

server {
    listen 80;
    listen [::]:80;

    server_name myserver.tld www.myserver.tld;
    index index.html;

    access_log /var/log/nginx/myserver.access.log;
    error_log /var/log/nginx/myserver.error.log;

    root /opt/www/myserver.tld;

    location / {
        try_files $uri $uri/ /index.html =404;
        error_page 404 =200 /404.html;
    }
}

Far simpler, as it doesn't include any SSL configuration.

Going back to the tutorial:

THE SAME…

Houston, we have a problem… Houston? Anybody?

Understanding the problem

These are the kinds of things that are missing from most tutorials, and that we only find out after banging our heads against the keyboard for a while.

The tutorial my friend followed is for Apache, and Apache has certain differences from Nginx; that's the first thing to keep in mind.

When copying and pasting the server configuration from one place to another, we may end up replicating the SSL configuration everywhere: the node balancer configuration, Server 1 AND Server 2. That is almost certainly where the "Too many redirects" error came from: SSL terminates at the node balancer, the balancer forwards plain HTTP to the backend's port 80, and that block answers with a 301 back to HTTPS, over and over.

Do we require SSL everywhere? The answer is NO. Think of the typical Nginx reverse proxy configuration for a Node.js application (reading this article can help a bit); there's no need to proxy the API over HTTPS. In step 12 of the tutorial you can see in the image that none of the nodes point to port 443 (which stands for HTTPS).
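For reference, here's a minimal sketch of that kind of reverse proxy block. The 127.0.0.1:3000 upstream is an assumption for illustration, not part of my friend's setup:

server {
    listen 80;
    server_name myserver.tld www.myserver.tld;

    location / {
        # Plain HTTP to the app; TLS, if any, terminates upstream
        proxy_pass http://127.0.0.1:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}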

So we'll be using this configuration:

server {
    listen 80;
    listen [::]:80;

    server_name myserver.tld www.myserver.tld;
    index index.html;

    access_log /var/log/nginx/myserver.access.log;
    error_log /var/log/nginx/myserver.error.log;

    root /opt/www/myserver.tld;

    location / {
        try_files $uri $uri/ /index.html =404;
        error_page 404 =200 /404.html;
    }
}

No SSL is required; the Let's Encrypt certificate is placed on the node balancer itself.

And as an extra, we changed the /opt/www/myserver.tld/index.html files on both servers to a simple text file containing "Server 1" and "Server 2" respectively. Nothing fancy; we just need to see if the node balancing is actually working.
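Something like this on each backend does the trick (paths taken from the config above):

# on Server 1
$ echo "Server 1" | sudo tee /opt/www/myserver.tld/index.html

# on Server 2
$ echo "Server 2" | sudo tee /opt/www/myserver.tld/index.html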

Let's try again:

WTF! (Wow That's Freaky…)

That's the default page for Nginx. Where's the index file for each one of our servers?

Node balancing

I checked the configuration files again, no errors… 😓 And then, Eureka! 😅 Let's look for any file with the text "Welcome to nginx!":
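The screenshot of the search is gone, but the command was something along these lines; on a stock Debian/Ubuntu install it typically turns up the default site:

$ grep -ri "Welcome to nginx" /etc/nginx /var/www /usr/share/nginx
/var/www/html/index.nginx-debian.html:<h1>Welcome to nginx!</h1>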

man grep for more information 😉

The text turned up in Nginx's default site configuration. So why is the node balancer serving the index page from the default configuration?

And there's the key: DEFAULT

Going back to the node balancer's configuration: the node balancer knows nothing about domain names. It points directly at each server's IP on port 80, so Nginx serves the default configuration. So we copied and pasted our configuration to /etc/nginx/sites-available/default on each server.

Don't forget to enable this configuration by making the symbolic link in the sites-enabled folder.
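Assuming the Debian-style layout, and assuming the original site config lives in a file named after the domain (on a stock install the default symlink usually already exists, so ln -sf is safe either way):

$ sudo cp /etc/nginx/sites-available/myserver.tld /etc/nginx/sites-available/default
$ sudo ln -sf /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default
$ sudo nginx -t && sudo systemctl reload nginx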

Let's test again.
Test 1, Server 1 ON + Server 2 ON:

$ curl https://myserver.tld
Server 1

Test 2, Server 1 ON + Server 2 OFF:

$ curl https://myserver.tld
Server 1

Test 3, Server 1 OFF + Server 2 ON:

$ curl https://myserver.tld
Server 2

The OFF + OFF test is clearly going to fail 🤷‍♂, so we didn't do that one.

Extras

Modes, modes, modes… (internal IPs, so never mind…)

The tutorial totally skipped the explanation of modes.
Besides the weight of each node, which makes balancing more, ehm… balanced… there's the mode.

Available modes are:

  • Accept: Accept incoming connections
  • Reject: Reject new incoming connections and discontinue health checks on this backend. Existing connections remain active, but session stickiness is not respected. Useful for taking the backend out of rotation to perform maintenance or decommission it.
  • Drain: Only accept connections from clients whose session stickiness points to this backend. Use in advance of taking a backend out of rotation for maintenance or decommissioning to gradually drain connections.
  • Backup: Only accept connections if all other backends are down. Useful if you use frontend caching servers, such as Varnish, and want to direct traffic to the origin servers if the caching servers are down.

And that's a shameless copy & paste from Linode's documentation on Backend Nodes.

So, since the application was meant to provide HA rather than actual load balancing, my friend gave both servers the same weight of 100 and changed the mode for Server 2 to Backup.
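If you'd rather script it than click through the Cloud Manager, the Linode CLI can do the same. This is only a sketch: the IDs are placeholders, and you should check linode-cli nodebalancers --help for the exact operation names on your version:

# Give Server 2's node a weight of 100 and switch it to backup mode
$ linode-cli nodebalancers node-update $NODEBALANCER_ID $CONFIG_ID $NODE_2_ID \
    --weight 100 --mode backup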

Thanks for reading this far. I hope you enjoyed this article and gained some extra knowledge about where not to fail the way we did when configuring a node balancer in Linode.

This article is NOT sponsored by Linode (really).

If you ask me, you can't find better servers for self-hosting web apps: very accessible prices and excellent support.
