This article also appears on NGINX’s blog.
Traditionally, web development and hosting were done primarily on the LAMP stack – LAMP being short for Linux (operating system), Apache (web server), MySQL (database), and PHP (programming language) – the core components used to run enterprise sites.
As web stacks and load balancers become more agile, and as business needs dictate better performance and stability, it is becoming increasingly common to replace Apache HTTP Server with a lightweight and highly scalable alternative, NGINX. With NGINX, the stack becomes known as LEMP – Linux, NGINX (pronounced “engine‑x”, hence the E), MySQL, PHP.
Atlantic.Net offers a range of solutions that includes storage, hosting, and managed services. Our VPS Hosting platform has a one‑click LEMP option that has your stack up and running in under 30 seconds. For those who prefer to run NGINX from Docker, we have both Linux and Windows Cloud Servers with Docker. We also provide managed services for those who want or need some assistance with setting up NGINX.
We make frequent and robust use of NGINX not just as a web server, but also as a reverse proxy and load balancer. We decided to use NGINX after researching load‑balancing solutions for our centralized logging cluster. By using NGINX in various capacities, we achieve high availability for both our frontend and backend servers, while maintaining an extremely small footprint – all thanks to NGINX’s efficiencies in RAM and CPU usage.
In terms of which specific functions of NGINX are most useful, some that we find particularly powerful are TCP/UDP proxying and load balancing, access control, and three‑way SSL handshakes. In this blog post, I’ll describe how we use each of these capabilities.
Load Balancing Our Centralized Logging with NGINX Streams
As the need to provide logging for auditing and security has become a greater focus, we at Atlantic.Net were looking for the right proxy and load‑balancing solution – not only for HTTP traffic, but also for syslog. Syslog is commonly sent out in User Datagram Protocol (UDP) format, so we needed something that handled UDP as well as HTTP.
This is where the NGINX stream modules came into play, as they enable load balancing of UDP traffic. Load balancing distributes network traffic efficiently across a number of backend servers; as the term suggests, the purpose is to spread the workload evenly.
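The balancing method and per‑server weighting are both configurable in an upstream block. As a minimal sketch (the upstream name and addresses here are hypothetical):

```nginx
stream {
    upstream logging_backend {               # hypothetical name
        least_conn;                          # pick the server with the fewest
                                             # active connections (omit for the
                                             # default round-robin)
        server 203.0.113.10:5910 weight=2;   # receives roughly twice the traffic
        server 203.0.113.11:5910;
        server 203.0.113.12:5910 backup;     # used only if the others are down
    }
}
```

We use least_conn in the configurations below.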
In the following snippet, we are sending syslog messages from our networking infrastructure to our logging backend:
...
stream {
    upstream networking_udp {
        least_conn;
        server 198.51.100.100:5910;
        server 198.51.100.101:5910;
        server 198.51.100.102:5910;
    }

    server {
        listen 5910 udp;
        proxy_timeout 0s;
        proxy_bind $remote_addr transparent;
        proxy_pass networking_udp;
    }
}
...
Streams also work well for SSL over TCP, as shown in the following example that sends Filebeat logs securely:
...
upstream filebeat_tcp {
    least_conn;
    server 198.51.100.100:5910;
    server 198.51.100.101:5910;
    server 198.51.100.102:5910;
}

server {
    listen 5910 ssl;

    ssl_certificate           /etc/nginx/ssl/certs/cert.pem;
    ssl_certificate_key       /etc/nginx/ssl/private/priv-key.pem;
    ssl_protocols             TLSv1.2;
    ssl_ciphers               HIGH:!aNULL:!MD5;
    ssl_session_cache         shared:SSL:20m;
    ssl_session_timeout       4h;
    ssl_handshake_timeout     30s;

    proxy_pass                    filebeat_tcp;
    proxy_ssl                     on;
    proxy_ssl_certificate         /etc/nginx/ssl/certs/cert.pem;
    proxy_ssl_certificate_key     /etc/nginx/ssl/private/priv-key.pem;
    proxy_ssl_protocols           TLSv1.2;
    proxy_ssl_session_reuse       on;
    proxy_ssl_ciphers             HIGH:!aNULL:!MD5;
    proxy_ssl_trusted_certificate /etc/nginx/ssl/certs/ca.pem;
}
...
As you can see, NGINX is capable of proxying and load balancing UDP and Transmission Control Protocol (TCP) traffic.
The use of reverse proxies and load balancing is helpful when you have a service, database, or program that communicates via UDP or TCP. On a basic level, these methods are relevant when the same service, database, or program instance runs on multiple upstream servers, as managed by NGINX. At Atlantic.Net, we also take advantage of NGINX’s reverse proxy because it provides an additional layer of obfuscation for our critical backend services, with very little overhead.
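For plain HTTP traffic, the reverse‑proxy pattern looks much the same; here is a minimal sketch (the upstream name and internal addresses are hypothetical):

```nginx
http {
    upstream app_backend {          # hypothetical internal service
        server 10.0.0.10:8080;
        server 10.0.0.11:8080;
    }

    server {
        listen 80;

        location / {
            # Clients only ever see NGINX; backend addresses stay hidden
            proxy_pass http://app_backend;
            proxy_set_header Host      $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
```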
Access Control
Another important step in securing our centralized logging was to prevent the deletion of any data. In addition, access control lists (ACLs) are very useful for limiting traffic based on IP address. For our purposes, we wanted to allow access to log data only from our internal management network.
NGINX gives us a way to delineate very precisely which HTTP actions are allowed, and from where, as can be seen here:
...
server {
    listen 9200;
    client_max_body_size 20M;

    location / {
        limit_except GET POST PUT OPTIONS {
            deny all;
        }
        allow 198.51.100.0/24;
        deny all;
        proxy_pass http://elasticsearch_backend;
    }

    location ~* ^(/_cluster|/_nodes|/_shutdown) {
        allow 198.51.100.0/24;
        deny all;
        proxy_pass http://elasticsearch_backend;
    }
}
...
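IP‑based rules can also be combined with other checks. As a hedged sketch, `satisfy all` requires a request to pass both the ACL and HTTP basic authentication (the credentials file path here is hypothetical):

```nginx
location / {
    satisfy all;                      # request must pass BOTH checks below
    allow 198.51.100.0/24;
    deny all;
    auth_basic           "Logging cluster";
    auth_basic_user_file /etc/nginx/.htpasswd;   # hypothetical path
    proxy_pass http://elasticsearch_backend;
}
```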
NGINX also supports a transparent IP feature that enables us to see the originating IP address after the request passes through the proxy. This capability helps with tracking where logs originate. NGINX makes this task very easy:
...
server {
    listen 5915 udp;
    proxy_timeout 0s;
    proxy_bind $remote_addr transparent;
    proxy_pass vmware_udp;
}
...
Three-Way SSL Handshakes
NGINX cleanly handles both sides of the SSL handoffs for our centralized logging. This implementation is very important, as it means both internal and customer servers can communicate securely with NGINX. Each server being logged has its own certificate for two‑way SSL communication, further reducing vulnerabilities. NGINX then transmits the data securely across our internal network, via SSL, to the logging servers. In total, there are three SSL certificates involved for every end‑to‑end communication that supports secure transmission. (See the second configuration snippet in Load Balancing Our Centralized Logging with NGINX Streams for our preferred three‑way SSL setup).
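To additionally require each logged server to present its own certificate on the client‑facing leg (the two‑way part of the exchange), the listening side can verify clients against a CA. A sketch for the stream context, with a hypothetical CA bundle path:

```nginx
server {
    listen 5910 ssl;

    ssl_certificate        /etc/nginx/ssl/certs/cert.pem;
    ssl_certificate_key    /etc/nginx/ssl/private/priv-key.pem;

    # Require and verify the connecting server's certificate
    ssl_client_certificate /etc/nginx/ssl/certs/client-ca.pem;   # hypothetical
    ssl_verify_client      on;

    proxy_pass filebeat_tcp;
}
```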
Perspectives and Tests of NGINX
Various individuals and organizations have praised NGINX over the years, and we’ve experienced the same additional benefits from NGINX that they mention.
Software engineer Chris Lea of NodeSource compares Apache to Microsoft Word, noting that both applications have an absurdly large number of features but that typically only a few are necessary. Lea prefers NGINX because it has the features that you need, with none of the bulk and far better performance.
According to Thomas Gieselman of venture capital firm BV Capital, a few of the organizations that they funded fixed issues related to scaling by changing their server to NGINX. Gieselman sees NGINX as making fast growth simpler and accessible to more organizations.
Linux Journal conducted a straightforward test, using the Apache benchmark software ab, to compare Apache to NGINX (versions 2.2.8 and 0.5.22, respectively). The programs top and vmstat were used to check the performance of the system while the two servers ran simultaneously.
The test showed that NGINX was faster than Apache as a static content server. The two servers each ran optimally, with concurrency set at 100. In order to serve 6500 requests per second, Apache used 17 MB of memory, 30% CPU, and 4 worker processes (in threaded mode). To serve at a much faster clip of 11,500 requests per second, NGINX needed just 1 MB of memory, 15% CPU, and 1 worker.
Bob Ippolito founded the gaming platform Mochi Media, which had 140 million unique users per month at its peak – so he understands the demand for high performance well. Ippolito said in 2006 that he ran a test in which he used NGINX as a reverse proxy for tens of millions of HTTP requests per day (that is, a few hundred per second) on one server.
When the NGINX server was at peak capacity, it consumed approximately 10% CPU and 15 MB of memory on his setup, which was FreeBSD (an open‑source OS based on UNIX). Using the same parameters, Apache generated 1,000 processes and gobbled a massive amount of RAM. Pound created excessive threads and used more than 400 MB for the various thread stacks. Lighttpd needed more CPU and leaked over 20 MB hourly.
Try Atlantic.Net with NGINX
At Atlantic.Net, we have found similar performance gains with NGINX as described by these various parties. We have also benefited from the specific features described above. If you are currently using Apache or another web server, you may want to give NGINX a try, to see if you get similar improvements that can help you better handle scalability and the ever‑growing need for speed. Test NGINX with a Cloud Server today.