3 reasons for using Nginx as a proxy

Published on 2016-02-07

Have you ever used or experimented with a software proxy? I'll give you a quick rundown of the why and how, and after reading this you'll be able to compose applications on servers in a whole new way!

There are several reasons for using a proxy and I'll go over them here. The examples are shown using Nginx, but they really apply to any proxy - especially software-based ones.

To quickly summarize the core proxying feature of Nginx: it receives a request from a client, decides where to proxy it, makes a request to the resolved receiver, and returns the result to the client. One of the strengths of Nginx is that it does this very well - it is engineered to handle a lot of connections and data without exhausting CPU and memory.
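In its simplest form this is just a proxy_pass directive. A minimal sketch (the hostname and port are placeholders for your own application):

```nginx
http {
    server {
        listen 80;

        location / {
            # Forward every request to the upstream application
            # and relay its response back to the client.
            proxy_pass http://localhost:3000;

            # Pass the original host and client address along,
            # so the application can see who actually asked.
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```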

1. Load balancing

If the app or site you are hosting sees increased use you might need multiple servers actually handling the requests. Depending on your system architecture (hosting environment etc.) you might have access to hardware-based proxies or provider-specific solutions (Rackspace offers load balancers, e.g.) - but these can bring increased costs that end up being a significant portion of the total.

A load balancer made with Nginx proxying requests to two (or more) different hosts can be a solid alternative. A proxy like this can run on very few resources - even the smallest VPS instances from cloud providers like DigitalOcean, AWS, or Rackspace.

There are some considerations to be made when splitting a service into multiple instances and putting a load balancer in front of them. If you do any login or session management using cookies you need to configure your system to take this into account. One way is to let the instances use a single shared session storage - just watch out that this storage doesn't become the new application bottleneck. Another way is to have the load balancer direct traffic from the same client to the same host every time. This solution is often preferred as it doesn't require any application-level changes to work. Nginx does this by hashing the IP address the request comes from and directing subsequent requests from the same origin to the same host.

http {
    upstream exampleapp {
        server instance1.example.com;
        server instance2.example.com;
        server instance3.example.com;
    }

    server {
        listen 80;

        location / {
            proxy_pass http://exampleapp;
        }
    }
}

The above configuration example shows a very basic load balancing setup with Nginx. It is easy to add or remove instances in the upstream block. You don't have to use host names, and you can even specify different port numbers for the upstream servers. Please refer to the documentation for all the details - note in particular ip_hash (for pinning requests from the same client to the same upstream server) and the least_conn and weight properties for load balancing priority.
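As a sketch of those options, the upstream block could look like this (the host names are the same placeholders as above):

```nginx
upstream exampleapp {
    # ip_hash pins each client IP to the same upstream server,
    # which keeps cookie-based sessions working without a shared
    # session storage.
    ip_hash;

    server instance1.example.com;
    server instance2.example.com;

    # weight lets a stronger machine take a larger share of traffic.
    server instance3.example.com weight=2;
}
```

Alternatively, least_conn sends each new request to the server with the fewest active connections - it is a separate balancing method, so you would use it instead of ip_hash, not together with it.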

2. Switching between instances

Have you heard about blue-green deployment? (See Martin Fowler's original article on the topic, plus the various guides based on it.) Does your deployment strategy include setting up a fresh server?

Nginx as a proxy will also help you here. Imagine this scenario: you have a web application running on a server, with a database, backups etc. - but you are developing a new version. The new version uses a different login system and database schema, and it is a beast. Maybe it even requires some background services, installed packages or cronjobs running on the server. Bottom line: you are making breaking changes! Your first thought is probably frustration, because you can't just swap in a binary and restart, or check out some code and wait for the user to reload the application. You'll have to take the server offline to avoid data corruption and/or loss, and you'll have to build up the server to handle the new application - yes, it is just an update - but it's a huge change, remember?

Imagine your application is running on yourwebapp.com. Normally your users type in the URL with or without the www - that doesn't really matter. This means you have demo.yourwebapp.com or test.yourwebapp.com free for running an additional server with the new version of your app! So far this has nothing to do with Nginx - you could do this with two servers and DNS alone. The only problem is that you lose some control over switching between instances, as it is tied to DNS records which propagate bound by their TTL.

The solution involving Nginx has you set up a server with Nginx on it. All DNS records point to this instance and every request goes through this one server. When Nginx receives a request it looks at the hostname and directs the request to the proper host - either the production or the test instance. When you want to promote the test version to production you simply switch the servers/names around in the Nginx configuration and reload it - boom - you're now running the new version in production. No waiting for DNS, and if something should ever go wrong it is very easy to go back - you simply reverse the process and you're running the old version again.

http {
    server {
        listen 80;
        server_name *.exampleapp.com exampleapp.com;

        location / {
            proxy_pass http://appserver1;
        }
    }

    server {
        listen 80;
        server_name demo.exampleapp.com;

        location / {
            proxy_pass http://appserver2;
        }
    }
}

The above example achieves the goal. Flipping the instances around is very simple - you just swap the proxy_pass statements between the two location blocks and reload the Nginx configuration.
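The appserver1 and appserver2 names in the example need to resolve to something; one way (a sketch, with placeholder addresses) is to define them as upstream blocks, so the swap stays a small, local config change:

```nginx
# Placeholder addresses - substitute your real hosts.
upstream appserver1 {
    server 10.0.0.10:8080;  # current production instance
}

upstream appserver2 {
    server 10.0.0.11:8080;  # new version under test
}
```

With this in place you can either swap the two proxy_pass lines or swap the server entries inside the upstream blocks, then reload with nginx -s reload.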

PLEASE NOTE: there may be many considerations regarding e.g. data migration - they are application/implementation specific. Your ability to use this technique depends on your setup.

3. Composing applications

This is absolutely one of my favourite features of putting an Nginx instance in front of my application - making the most awesome Franken-App!

Consider a web application consisting of these three parts:

  1. Backend - all the business logic, workers etc.
  2. Frontend - static HTML+CSS+JS
  3. Websockets - for communicating live between frontend and backend

Frontend aside (build it whatever way you like - you'll end up with the parts mentioned above, HTML, CSS and JavaScript - you have to, it's what the browser understands), what is your preferred stack for writing the backend? Java? Go? .NET? It doesn't really matter - if your answer isn't Node.js it is very likely you're going to have some trouble connecting with the frontend using websockets. How much trouble depends on your stack of choice and your willingness/ability to bring in 3rd party code/libraries to achieve your goal.

But why bother? If you can throw together the websockets part in Node.js in a very short time and have the code basically mirror the frontend code, why wouldn't you? Relax - you don't have to write your entire backend in Node.js! In fact, if you put Nginx in front of your application you can construct the server to serve the three parts as one - even though it is actually three very different parts.

The frontend - the static files - can be served by Nginx directly. This is a good solution as Nginx is very good at serving static files very fast. You can serve the frontend on the / base URL.

The backend - the part with your awesome application logic - is started on a TCP port listening on localhost. You can then make Nginx proxy all requests under /api/ to the backend server.

The websocket server - the one written in Node.js - is also started and listening locally on the server. That way you can make Nginx proxy all requests under /websocket/ to it.

http {
    server {
        listen 80;
        server_name exampleapp.com;

        root /path/to/static/frontend;

        location / {
            index index.html;
        }

        location /api {
            # The backend specific port
            proxy_pass http://localhost:4000;
        }

        location /websocket {
            # The websocket specific port
            proxy_pass http://localhost:5000;
        }
    }
}
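One caveat: proxying websockets needs a couple of extra directives, because the Upgrade handshake uses hop-by-hop headers that are not forwarded by default. The /websocket location would typically look more like this:

```nginx
location /websocket {
    proxy_pass http://localhost:5000;

    # Websockets require HTTP/1.1 and the Upgrade/Connection
    # headers to be passed through to the upstream explicitly.
    proxy_http_version 1.1;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
```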

Composing applications this way is amazing! It lets you choose the right tool for the job and utilize the strengths of multiple stacks - and possibly of your team's resources too.

Summary

As you've seen in the three examples above, there is plenty of reason to at least consider running Nginx as a proxy in front of your web applications. It gives you plenty of possibilities and advantages going forward with development and deployment. Please refer to the full Nginx documentation for all the details on how to configure and set up Nginx.