Nginx: Fast, Lightweight Web Server
Dhruba Mandal
Email : dhruvmandal@gmail.com
What is Nginx
NGINX (pronounced "engine-ex") is an open-source, fast, lightweight, and high-performance
web server that can be used to serve static files.
In its initial release, NGINX handled only HTTP web serving. Today it also serves as a
reverse proxy server for HTTP, HTTPS, SMTP, IMAP, and POP3, and it is widely used as an
HTTP load balancer, HTTP cache, and mail proxy for IMAP, POP3, and SMTP.
NGINX improves content and application delivery, enhances security, and facilitates
scalability and availability for the busiest websites on the internet.
It was designed to address the C10k problem: the challenge of handling 10,000 client
connections at the same time.
Some high-profile companies using Nginx include IBM, Google, Atlassian, Autodesk, GitLab,
DuckDuckGo, T-Mobile, Microsoft, Adobe, Salesforce, VMware, LinkedIn, Cisco, Twitter,
Apple, Intel, Facebook, and many more.
Why Nginx
 Nginx uses a single-threaded, event-driven model, is very fast, and provides a high level of
concurrency
 Nginx is lightweight, with a small memory footprint
 NGINX provides services such as reverse proxying, load balancing, and rate limiting of
network traffic
(Note : rate limiting controls the rate of traffic sent or received by a network interface
and helps prevent DoS attacks; see the sketch below.)
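For illustration, here is a minimal rate-limiting sketch using NGINX's limit_req module; the zone name "one", its 10 MB size, and the 10 requests/second rate are example values, not taken from these slides:
http {
    # Track clients by IP address; keep state in a 10 MB shared zone named "one",
    # allowing an average of 10 requests per second per client.
    limit_req_zone $binary_remote_addr zone=one:10m rate=10r/s;

    server {
        listen 80;
        location / {
            # Permit short bursts of up to 20 extra requests; reject the excess (503 by default).
            limit_req zone=one burst=20 nodelay;
        }
    }
}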
Reverse proxying is useful if we have multiple web services listening on various ports and
we need a single public endpoint to reroute requests internally
(i.e. it would allow us to host multiple domain names on port 80 while using a
combination of Node.js, Go, and Java services behind the scenes.)
Nginx can handle the logging, blacklisting, load balancing, and serving of static files while
the web services focus on what they need to do.
Nginx Architecture
NGINX follows a master-slave architecture, using an event-driven, asynchronous,
non-blocking processing model.
The NGINX architecture is split into three parts:
1. Master
2. Workers
3. Cache (cache loader & cache manager)
Note : Event-driven means it responds to actions generated by the user or the
system.
Asynchronous systems do not depend on strict arrival times of signals or messages for
reliable operation.
Non-blocking means a function will never wait for something to happen.
1. Master : The master is responsible for reading and validating the configuration
(i.e. it is responsible for creating, binding, and processing sockets, and for starting,
terminating, and maintaining the configured number of worker processes).
It allocates jobs to the workers as requests arrive from clients. Once a job is
allocated to a worker, the master looks for the next client request; that is,
it does not wait for the worker's response. Once the response comes back from a worker,
the master sends it to the client.
Besides the above, it also controls non-stop binary upgrades, reopening of log files, and
compilation of embedded Perl scripts.
2. Workers : Workers are the slaves in the NGINX architecture and take their instructions from the master.
Each worker can handle more than 1000 requests at a time in a single-threaded manner.
Once processing is done, the response is sent back to the master.
The single-threaded model saves memory by having all connections work in the same memory
space instead of separate memory spaces,
as a multi-threaded model would require.
3. Cache : The Nginx cache is used to serve pages very fast by fetching them from cache
memory instead of from the backend server. Pages are stored in the cache
on the first request for the page.
There are two main components of the Nginx cache:
1. The cache loader
2. The cache manager
The cache loader : It loads metadata about previously cached data into the shared
memory zone. Loading the whole cache at once could consume enough resources to
slow NGINX performance during the first few minutes after startup.
To avoid this, configure iterative loading of the cache by including the following parameters
in the proxy_cache_path directive:
• loader_threshold – Duration of an iteration, in milliseconds (by default, 200)
• loader_files – Maximum number of items loaded during one iteration (by default, 100)
• loader_sleep – Delay between iterations, in milliseconds (by default, 50)
The cache manager : The cache manager is activated periodically to check the state of the
cache. If the cache size exceeds the limit set by the max_size parameter of the
proxy_cache_path directive, the cache manager removes the data that was accessed least
recently.
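As a rough illustration of these directives, here is a minimal cache configuration sketch; the cache path /var/cache/nginx, the zone name my_cache, and the backend address are example values, not part of the original slides:
http {
    # 10 MB key zone, 1 GB cache limit enforced by the cache manager,
    # iterative loading tuned via the loader_* parameters.
    proxy_cache_path /var/cache/nginx keys_zone=my_cache:10m max_size=1g
                     loader_threshold=300ms loader_files=200 loader_sleep=50ms;

    server {
        location / {
            proxy_cache my_cache;                 # use the cache zone defined above
            proxy_pass  http://127.0.0.1:8080;    # placeholder backend application
        }
    }
}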
Features of Nginx
Easy installation and maintenance
Improves performance
Reduces waiting time for users
Load balancing
Provides HTTP server capabilities
Designed for maximum performance and stability
Functions as a proxy server
Reverse proxy with caching
FastCGI support with caching
URL rewriting and redirection
Handling of static files, index files, and auto-indexing
On-the-fly upgrades (can be upgraded without any downtime)
Understanding Nginx Configuration
The core settings of Nginx are configured mainly in nginx.conf. These
configuration files are structured into contexts.
The main contexts are
1. The events context &
2. The http context
Each context can have nested contexts that inherit everything from their
parents but can also override settings as needed.
Besides the above contexts, some other important parameters in these
files are
1. worker_processes
2. worker_connections
3. access.log & error.log &
4. gzip
1. The events context : The “events” context is contained within the “main” context. It is
used to set global options that affect how Nginx handles connections at a general level.
There can only be a single events context defined within the Nginx configuration.
This context will look like this in the configuration file, outside of any other bracketed
contexts:
# main context
events {
# events context
. . .
}
Nginx uses an event-based connection processing model, so the directives defined within
this context determine how worker processes should handle connections.
Mainly, directives found here are used to either select the connection processing
technique to use, or to modify the way these methods are implemented.
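As a concrete, illustrative example, an events block often looks like the following; the values shown are common choices, not mandated defaults:
events {
    worker_connections 1024;   # maximum simultaneous connections per worker
    multi_accept       on;     # accept as many new connections as possible at once
    # use epoll;               # connection-processing method on Linux; normally auto-selected
}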
2. The HTTP Context : When configuring Nginx as a web server or reverse proxy, the
“http” context will hold the majority of the configuration. This context will contain all
of the directives and other contexts necessary to define how the program will handle
HTTP or HTTPS connections.
The http context is a sibling of the events context, so they should be listed side-by-side,
rather than nested. They both are children of the main context:
# main context
events {
# events context
. . .
}
http {
# http context
. . .
}
2a. The server context : The “server” context is declared within the “http” context.
It is a nested, bracketed context that can be declared multiple times.
The general format of the server context looks something like this:
# main context
http {
# http context
server {
# first server context
}
server {
# second server context
}
}
The reason for allowing multiple declarations of the server context is that each instance
defines a specific virtual server to handle client requests.
You can have as many server blocks as you need, each of which can handle a specific
subset of connections.
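For example, two virtual servers for two hypothetical host names might look like this; the names, paths, and backend address below are placeholders:
http {
    server {
        listen      80;
        server_name www.example.com;            # first virtual server
        root        /var/www/site;
    }
    server {
        listen      80;
        server_name api.example.com;            # second virtual server on the same port
        location / {
            proxy_pass http://127.0.0.1:3000;   # placeholder backend
        }
    }
}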
2b. Location contexts : Location contexts share many relational qualities with server
contexts. As with the server context, multiple location contexts can be defined; each location
is used to handle a certain type of client request, and each location is selected by
matching the location definition against the client request through a selection algorithm.
The general syntax looks like this:
location match_modifier location_match {
. . .
}
Location blocks live within server blocks (or other location blocks) and are used to
decide how to process the request URI (the part of the request that comes after the
domain name or IP address/port).
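A hypothetical server block showing the common match modifiers (the URIs and backend address are placeholders):
server {
    listen 80;

    location = /healthz {                       # exact match
        return 200 "ok";
    }
    location /images/ {                         # prefix match
        root /usr/share/nginx/html;
    }
    location ~* \.(css|js)$ {                   # case-insensitive regex match
        expires 7d;
    }
    location / {                                # catch-all prefix
        proxy_pass http://127.0.0.1:8080;       # placeholder backend
    }
}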
Besides the contexts above, other optional contexts include: upstream, split_clients,
map, geo, types, charset_map, etc.
3. worker_processes : worker_processes is the setting that determines the number of Nginx
worker processes (the default is 1).
In most cases, running one worker process per CPU core works well; however, the
recommended setting is auto.
4. worker_connections : worker_connections is the maximum number of connections
that each worker process can handle simultaneously.
By default it is 512, but many systems have enough resources to support a larger
number (e.g. 1024 or 2048 per worker process, or even more).
5. access.log & error.log : access.log records client requests and error.log records errors;
both are used for debugging.
6. gzip : These are the settings for gzip compression of Nginx responses.
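Putting these four settings together, a minimal nginx.conf sketch might look like this; the paths and values are typical examples, not requirements:
worker_processes auto;
error_log /var/log/nginx/error.log warn;

events {
    worker_connections 1024;
}

http {
    access_log /var/log/nginx/access.log;
    gzip       on;                                        # compress responses
    gzip_types text/plain text/css application/json;      # MIME types to compress
}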
Difference B/W Apache and Nginx
Apache | Nginx
Apache is designed to be a web server. | Nginx is both a web server and a proxy server.
A single thread can only process one connection. | A single thread can handle multiple connections.
Apache follows a multi-threaded approach to process client requests. | Nginx uses an event-driven approach to serve client requests.
It handles dynamic content within the web server itself. | It cannot process dynamic content natively.
It cannot process multiple requests concurrently under heavy web traffic. | It can process multiple client requests concurrently and efficiently with limited hardware resources.
Modules can be dynamically loaded or unloaded, making it more flexible. | Modules cannot be loaded dynamically; they must be compiled into the core software itself.
Apache runs on all Unix-like systems such as Linux, BSD, etc. and fully supports Windows. | Nginx runs on modern Unix-like systems but has limited support for Windows.
Installing NGINX on Redhat/CentOS
Step 1: Use the following command to install the EPEL repository:
sudo yum install epel-release
Step 2: Use the following command to update the repository:
sudo yum update
Step 3: Install Nginx using the following command:
sudo yum install nginx
Step 4 : Nginx does not start on its own. To get Nginx running, type:
sudo systemctl start nginx
If you are running a firewall, run the following commands to allow HTTP and HTTPS
traffic:
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
Enable Nginx to start when your system boots with the command below:
sudo systemctl enable nginx
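To confirm the installation worked (assuming curl is available on the machine), check the service status and request the default page:
sudo systemctl status nginx     # should report "active (running)"
curl -I http://localhost        # response headers should include "Server: nginx"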
Directories & files in Nginx
/etc/nginx/ : This directory is the default configuration root for the Nginx server; the
configuration files within it instruct nginx how to behave.
/etc/nginx/nginx.conf : This is the main configuration file of Nginx. Global settings
such as worker processes, tuning, logging, loading of dynamic modules, and references to
other Nginx configuration files are defined here.
/etc/nginx/conf.d/ : This directory contains the default HTTP server configuration files.
Files in this directory ending in .conf are included in the top-level http block within
/etc/nginx/nginx.conf.
In some package repositories, this folder is named sites-enabled, and configuration
files are linked from a folder named sites-available.
/var/log/nginx/ : The /var/log/nginx/ directory is the default log location for Nginx.
access.log and error.log are found here; they provide debug information if the debug
module is enabled.
Some Nginx Commands
Command | Description
nginx -h | Shows the NGINX help menu.
nginx -v | Shows the NGINX version.
nginx -V | Shows the NGINX version, build information, and configuration arguments.
nginx -t | Tests the NGINX configuration.
nginx -T | Tests the NGINX configuration and prints the validated configuration to the screen.
nginx -s signal | The -s flag sends a signal to the NGINX master process. You can send signals such as stop, quit, reload, and reopen.
systemctl start nginx | Starts the nginx process; you can also use systemctl restart nginx.
systemctl stop nginx | Stops the nginx process.
systemctl status nginx | Shows whether nginx is running or failed.
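A typical workflow when changing configuration is to validate it first and then reload without downtime, for example:
sudo nginx -t          # test the configuration for syntax errors
sudo nginx -s reload   # tell the master process to reload the configuration gracefully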
Serving Static Content in Nginx
To serve static content, overwrite the default HTTP server configuration located in
/etc/nginx/conf.d/default.conf with the following NGINX configuration
example:
server {
    listen 80 default_server;
    server_name www.example.com;
    location / {
        root /usr/share/nginx/html;
        # alias /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Note : in some cases the conf.d directory is not present. In that case create the directory
and change the include path in /etc/nginx/nginx.conf. For example, if the existing include
points at /usr/share/nginx/modules/*.conf, change the path to /etc/nginx/conf.d/*.conf.
HTTP Load Balancing in Nginx
We can use Nginx as a load balancer in front of our web application.
Scenario : In our case we will use one Nginx instance to load-balance two Tomcat servers.
Here we have two servers, both running CentOS 7 Linux:
1. 10.0.205.53 (Nginx + Tomcat 1)
2. 10.0.205.50 (Tomcat 2)
Types of load balancing in Nginx
Nginx supports the following three types of load balancing:
1. Round robin
2. Least connected
3. IP hash
1. Round-robin – This is the default method for Nginx, which uses the typical round-robin
algorithm to decide where to send the incoming request (i.e. requests go one by one to
every server).
2. Least-connected – As the name suggests, the incoming request is sent to the
server that has the fewest active connections.
3. IP-hash – This is helpful when you want persistent or sticky connections for
incoming requests. In this method, the client's IP address is used to decide which server the
request should be sent to.
First, define the upstream:
Specify a unique name (it may be the name of your application) and list all the servers that
will be load-balanced by this Nginx. In my case I have named it proxy1:
upstream proxy1
Here, inside /etc/nginx/conf.d, I have made a file named loadbalancer.conf in which I have
defined both servers (the file name can be anything, but it must end in .conf).
1. Round-robin method : This is the default method for Nginx, which uses the
typical round-robin algorithm to decide where to send the incoming request (i.e.
requests go one by one to every server), as in the sketch below.
Second, proxy_pass: Specify the unique upstream name that was defined in the
previous step in a proxy_pass directive inside your “location” section, which sits under
the “server” section as shown below.
You can define this in the same file or create another config file; I have made default.conf.
In proxy_pass I have given the complete URL with the application path
(http://10.0.205.53/ideavodapanel), so if I enter only 10.0.205.53 in my browser it will
resolve to the whole path.
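A minimal sketch of the corresponding default.conf (the original slide showed this as a screenshot; the application path /ideavodapanel is taken from the text above):
server {
    listen      80 default_server;
    server_name _;

    location / {
        # forward requests to the upstream defined in loadbalancer.conf
        proxy_pass http://proxy1/ideavodapanel/;
    }
}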
Instead of 2 files you can also define it in one file
vim /etc/nginx/conf.d/default.conf
upstream proxy1 {
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
}
server {
    listen 80 default_server;
    server_name _;
    access_log /etc/nginx/logs/nginx_access.log;
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    ######################################################
    location / {
        proxy_pass http://proxy1/IdeaVodaPanel/;
        #include /etc/nginx/conf.d/proxy_var;
        #include /etc/nginx/conf.d/en_compress;
    }
}
2. Least connected : As the name suggests, the incoming request is sent to the
server that has the fewest active connections.
For this, add the keyword “least_conn” at the top of the upstream block as shown
below; the rest is the same as the round-robin configuration.
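A sketch of the same upstream with least_conn enabled (IPs from the scenario above):
upstream proxy1 {
    least_conn;                 # send each request to the server with the fewest active connections
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
}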
If several servers in the upstream have a similarly low number of existing active
connections, then among those servers one is picked using weighted round-robin.
Note : The disadvantage of the round-robin and least-connected methods is that
subsequent connections from a client will not necessarily go to the same server in the pool.
This may be fine for a non-session-dependent application. But if your application is
dependent on sessions, then once an initial connection is established with a particular
server, you want all future connections from that particular client to go to the same
server.
3. The ip_hash method : This is helpful when you want persistent or sticky connections
for incoming requests; the client's IP address is used to decide which server the request
should be sent to.
It specifies that the group should use a load-balancing method where requests are
distributed between servers based on client IP addresses.
The first three octets of the client IPv4 address, or the entire IPv6 address, are used as
a hashing key.
The method ensures that requests from the same client will always be passed to the
same server, except when this server is unavailable.
upstream proxy1 {
    ip_hash;
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
}
If one of the servers needs to be temporarily removed, it should be marked with the
down parameter in order to preserve the current hashing of client IP addresses, as shown
below:
upstream proxy1 {
    ip_hash;
    server 10.0.205.53:8080;
    server 10.0.205.50:8080 down;
}
Weight Options for the Individual Servers
You can also specify a weight for a particular server in your pool. By default, all
servers have equal priority (weight), i.e. the default value of weight is 1.
But you can change this behavior by assigning a weight to a server as shown below.
upstream proxy1 {
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080;
}
In this example, we have a total of 5 servers, but the weight of the 3rd server is 2. This
means that for every 6 new requests, 2 requests go to the 3rd server, and the rest of the
servers each get 1 request.
You can use weight with least_conn and ip_hash as well.
Timeout Options for the Individual Servers
(max_fails and fail_timeout)
We can also specify max_fails and fail_timeout for a particular server as shown below:
upstream proxy1 {
    server 10.0.205.53:8080 max_fails=3 fail_timeout=30s;
    server 10.0.205.50:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080;
}
The default fail_timeout is 10 seconds; in this example it is set to 30 seconds. This
means that if, within 30 seconds, there are x failed attempts (as defined by
max_fails), the server is marked unavailable. The server then remains unavailable for
30 seconds.
The default max_fails is 1 attempt; here it is set to 3 attempts. This means that after 3
unsuccessful attempts to connect to this particular server, Nginx will consider this server
unavailable for the duration of fail_timeout, which is 30 seconds.
Backup Server in Nginx Load Balancer Pool
upstream proxy1 {
    server 10.0.205.53:8080 max_fails=3 fail_timeout=30s;
    server 10.0.205.50:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080 backup;
}
In the above example, the 5th server is marked as a backup using the “backup” keyword
at the end of the server directive.
This makes the 5th server (192.168.101.5) a backup server: incoming requests will
not be passed to it unless all the other 4 servers are down.
Nginx Reverse Proxy
A reverse proxy is a server that sits between internal applications and external clients,
forwarding client requests to the appropriate server.
Benefit of Nginx Reverse Proxy
Load Balancing - A reverse proxy can perform load balancing which helps distribute
client requests evenly across backend servers.
Increased Security - A reverse proxy also acts as a line of defense for your backend
servers
Better Performance - Nginx has been known to perform better than Apache at delivering
static content. Therefore, with an Nginx reverse proxy, client requests for static content
can be handled by Nginx itself while requests for dynamic content are passed on to the
backend Apache server.
Serve cached content - Reverse proxies can also be used to serve cached content,
making data available faster.
Easy Logging and Auditing - Since there is only one single point of access when a reverse
proxy is implemented, this makes logging and auditing much simpler
Configuring NGINX as a Reverse Proxy
1. Install Nginx (for the steps, see "Installing NGINX on Redhat/CentOS" above).
2. If you are running a firewall, run the following commands to allow HTTP and HTTPS
traffic:
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
3. Enable Nginx to start when your system boots with below command
sudo systemctl enable nginx
4. Go to the path /etc/nginx/conf.d and
a. create a file inside it with any name ending in .conf; for example, here I am creating a file
named reverse.conf.
5. Open the file with vim filename and add the configuration lines below:
vim reverse.conf
server {
    listen 80;
    listen [::]:80;
    server_name mongod1;   # YOUR DOMAIN NAME
    location / {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass http://10.0.205.53:9100;   # YOUR TOMCAT IP ADDRESS and port
    }
}
In the above, mongod1 is my machine's domain name, 10.0.205.53 is my IP address, and
9100 is my port.
6. In your main nginx config file (i.e. nginx.conf), add the path of the directory where your
configuration file is located (in our case /etc/nginx/conf.d/), so we have included
/etc/nginx/conf.d/*.conf; except for the lines shown below, comment out any other lines using #.
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/conf.d/*.conf;
    default_type application/octet-stream;
}
Now check the configuration using nginx -t and restart nginx with the command
systemctl restart nginx.
References
https://www.linode.com/docs/web-servers/nginx/nginx-installation-and-basic-setup/
https://www.digitalocean.com/community/tutorials/understanding-the-nginx-
configuration-file-structure-and-configuration-contexts
https://www.youtube.com/watch?v=1ndlRiaYiWQ
https://www.youtube.com/watch?v=bfd05_vaWJE
https://nginx.org/en/docs/http/ngx_http_upstream_module.html#ip_hash
https://www.thegeekstuff.com/2017/01/nginx-loadbalancer/
Dhruba Mandal
Email :dhruvmandal@gmail.com

Mais conteúdo relacionado

Mais procurados

NGINX: Basics and Best Practices EMEA
NGINX: Basics and Best Practices EMEANGINX: Basics and Best Practices EMEA
NGINX: Basics and Best Practices EMEANGINX, Inc.
 
High Availability Content Caching with NGINX
High Availability Content Caching with NGINXHigh Availability Content Caching with NGINX
High Availability Content Caching with NGINXNGINX, Inc.
 
NGINX Installation and Tuning
NGINX Installation and TuningNGINX Installation and Tuning
NGINX Installation and TuningNGINX, Inc.
 
Load Balancing and Scaling with NGINX
Load Balancing and Scaling with NGINXLoad Balancing and Scaling with NGINX
Load Balancing and Scaling with NGINXNGINX, Inc.
 
Nginx internals
Nginx internalsNginx internals
Nginx internalsliqiang xu
 
5 things you didn't know nginx could do
5 things you didn't know nginx could do5 things you didn't know nginx could do
5 things you didn't know nginx could dosarahnovotny
 
Apache Server Tutorial
Apache Server TutorialApache Server Tutorial
Apache Server TutorialJagat Kothari
 
What is gRPC introduction gRPC Explained
What is gRPC introduction gRPC ExplainedWhat is gRPC introduction gRPC Explained
What is gRPC introduction gRPC Explainedjeetendra mandal
 
NGINX ADC: Basics and Best Practices – EMEA
NGINX ADC: Basics and Best Practices – EMEANGINX ADC: Basics and Best Practices – EMEA
NGINX ADC: Basics and Best Practices – EMEANGINX, Inc.
 
Introduction to Ansible
Introduction to AnsibleIntroduction to Ansible
Introduction to AnsibleKnoldus Inc.
 
Docker Networking Overview
Docker Networking OverviewDocker Networking Overview
Docker Networking OverviewSreenivas Makam
 
Load Balancing with HAproxy
Load Balancing with HAproxyLoad Balancing with HAproxy
Load Balancing with HAproxyBrendan Jennings
 
Ansible presentation
Ansible presentationAnsible presentation
Ansible presentationJohn Lynch
 
Delivering High-Availability Web Services with NGINX Plus on AWS
Delivering High-Availability Web Services with NGINX Plus on AWSDelivering High-Availability Web Services with NGINX Plus on AWS
Delivering High-Availability Web Services with NGINX Plus on AWSNGINX, Inc.
 
Kubernetes in Docker
Kubernetes in DockerKubernetes in Docker
Kubernetes in DockerDocker, Inc.
 

Mais procurados (20)

NGINX: Basics and Best Practices EMEA
NGINX: Basics and Best Practices EMEANGINX: Basics and Best Practices EMEA
NGINX: Basics and Best Practices EMEA
 
Nginx Essential
Nginx EssentialNginx Essential
Nginx Essential
 
High Availability Content Caching with NGINX
High Availability Content Caching with NGINXHigh Availability Content Caching with NGINX
High Availability Content Caching with NGINX
 
NGINX Installation and Tuning
NGINX Installation and TuningNGINX Installation and Tuning
NGINX Installation and Tuning
 
Load Balancing and Scaling with NGINX
Load Balancing and Scaling with NGINXLoad Balancing and Scaling with NGINX
Load Balancing and Scaling with NGINX
 
Nginx
NginxNginx
Nginx
 
Nginx internals
Nginx internalsNginx internals
Nginx internals
 
5 things you didn't know nginx could do
5 things you didn't know nginx could do5 things you didn't know nginx could do
5 things you didn't know nginx could do
 
NGINX Plus on AWS
NGINX Plus on AWSNGINX Plus on AWS
NGINX Plus on AWS
 
Apache Server Tutorial
Apache Server TutorialApache Server Tutorial
Apache Server Tutorial
 
HAProxy
HAProxy HAProxy
HAProxy
 
gRPC Overview
gRPC OverviewgRPC Overview
gRPC Overview
 
What is gRPC introduction gRPC Explained
What is gRPC introduction gRPC ExplainedWhat is gRPC introduction gRPC Explained
What is gRPC introduction gRPC Explained
 
NGINX ADC: Basics and Best Practices – EMEA
NGINX ADC: Basics and Best Practices – EMEANGINX ADC: Basics and Best Practices – EMEA
NGINX ADC: Basics and Best Practices – EMEA
 
Introduction to Ansible
Introduction to AnsibleIntroduction to Ansible
Introduction to Ansible
 
Docker Networking Overview
Docker Networking OverviewDocker Networking Overview
Docker Networking Overview
 
Load Balancing with HAproxy
Load Balancing with HAproxyLoad Balancing with HAproxy
Load Balancing with HAproxy
 
Ansible presentation
Ansible presentationAnsible presentation
Ansible presentation
 
Delivering High-Availability Web Services with NGINX Plus on AWS
Delivering High-Availability Web Services with NGINX Plus on AWSDelivering High-Availability Web Services with NGINX Plus on AWS
Delivering High-Availability Web Services with NGINX Plus on AWS
 
Kubernetes in Docker
Kubernetes in DockerKubernetes in Docker
Kubernetes in Docker
 

Semelhante a Nginx: Fast, Lightweight Web Server

Web servers presentacion
Web servers presentacionWeb servers presentacion
Web servers presentacionKiwi Science
 
Clug 2011 March web server optimisation
Clug 2011 March  web server optimisationClug 2011 March  web server optimisation
Clug 2011 March web server optimisationgrooverdan
 
Openstack HA
Openstack HAOpenstack HA
Openstack HAYong Luo
 
High Availability Content Caching with NGINX
High Availability Content Caching with NGINXHigh Availability Content Caching with NGINX
High Availability Content Caching with NGINXKevin Jones
 
Nginx Deep Dive Kubernetes Ingress
Nginx Deep Dive Kubernetes IngressNginx Deep Dive Kubernetes Ingress
Nginx Deep Dive Kubernetes IngressKnoldus Inc.
 
Server Architecture For 1000k Users
Server Architecture For 1000k UsersServer Architecture For 1000k Users
Server Architecture For 1000k UsersAnoop Thakur
 
Scale Apache with Nginx
Scale Apache with NginxScale Apache with Nginx
Scale Apache with NginxBud Siddhisena
 
A Project Report on Linux Server Administration
A Project Report on Linux Server AdministrationA Project Report on Linux Server Administration
A Project Report on Linux Server AdministrationAvinash Kumar
 
Scalable Architecture 101
Scalable Architecture 101Scalable Architecture 101
Scalable Architecture 101Mike Willbanks
 
Caching in Drupal 8
Caching in Drupal 8Caching in Drupal 8
Caching in Drupal 8valuebound
 
Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...
Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...
Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...Ontico
 
Clug 2012 March web server optimisation
Clug 2012 March   web server optimisationClug 2012 March   web server optimisation
Clug 2012 March web server optimisationgrooverdan
 
Drupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance SitesDrupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance SitesExove
 
Drupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance SitesDrupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance Sitesdrupalcampest
 
Directory Write Leases in MagFS
Directory Write Leases in MagFSDirectory Write Leases in MagFS
Directory Write Leases in MagFSMaginatics
 
cache concepts and varnish-cache
cache concepts and varnish-cachecache concepts and varnish-cache
cache concepts and varnish-cacheMarc Cortinas Val
 

Semelhante a Nginx: Fast, Lightweight Web Server (20)

Web servers presentacion
Web servers presentacionWeb servers presentacion
Web servers presentacion
 
Clug 2011 March web server optimisation
Clug 2011 March  web server optimisationClug 2011 March  web server optimisation
Clug 2011 March web server optimisation
 
webservers
webserverswebservers
webservers
 
Openstack HA
Openstack HAOpenstack HA
Openstack HA
 
slides (PPT)
slides (PPT)slides (PPT)
slides (PPT)
 
High Availability Content Caching with NGINX
High Availability Content Caching with NGINXHigh Availability Content Caching with NGINX
High Availability Content Caching with NGINX
 
Nginx Deep Dive Kubernetes Ingress
Nginx Deep Dive Kubernetes IngressNginx Deep Dive Kubernetes Ingress
Nginx Deep Dive Kubernetes Ingress
 
Server Architecture For 1000k Users
Server Architecture For 1000k UsersServer Architecture For 1000k Users
Server Architecture For 1000k Users
 
Scale Apache with Nginx
Scale Apache with NginxScale Apache with Nginx
Scale Apache with Nginx
 
A Project Report on Linux Server Administration
A Project Report on Linux Server AdministrationA Project Report on Linux Server Administration
A Project Report on Linux Server Administration
 
Scalable Architecture 101
Scalable Architecture 101Scalable Architecture 101
Scalable Architecture 101
 
Exploring Node.jS
Exploring Node.jSExploring Node.jS
Exploring Node.jS
 
Caching in Drupal 8
Caching in Drupal 8Caching in Drupal 8
Caching in Drupal 8
 
Scaling PHP apps
Scaling PHP appsScaling PHP apps
Scaling PHP apps
 
Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...
Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...
Как Web-акселератор акселерирует ваш сайт / Александр Крижановский (Tempesta ...
 
Clug 2012 March web server optimisation
Clug 2012 March   web server optimisationClug 2012 March   web server optimisation
Clug 2012 March web server optimisation
 
Drupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance SitesDrupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance Sites
 
Drupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance SitesDrupalcamp Estonia - High Performance Sites
Drupalcamp Estonia - High Performance Sites
 
Directory Write Leases in MagFS
Directory Write Leases in MagFSDirectory Write Leases in MagFS
Directory Write Leases in MagFS
 
cache concepts and varnish-cache
cache concepts and varnish-cachecache concepts and varnish-cache
cache concepts and varnish-cache
 

Mais de Dhrubaji Mandal ♛

Dessertation project on BPM in IT Industry
Dessertation project  on BPM in IT Industry Dessertation project  on BPM in IT Industry
Dessertation project on BPM in IT Industry Dhrubaji Mandal ♛
 
Business Process Management in IT company
Business Process Management  in IT company Business Process Management  in IT company
Business Process Management in IT company Dhrubaji Mandal ♛
 
SERVICES MANAGEMENT IN HOTEL INDUSTRY
SERVICES  MANAGEMENT IN HOTEL INDUSTRY SERVICES  MANAGEMENT IN HOTEL INDUSTRY
SERVICES MANAGEMENT IN HOTEL INDUSTRY Dhrubaji Mandal ♛
 
Supply chain presentation (Mumbai Dabba wala)
Supply chain presentation (Mumbai Dabba wala)Supply chain presentation (Mumbai Dabba wala)
Supply chain presentation (Mumbai Dabba wala)Dhrubaji Mandal ♛
 
Information technology implementation in power distribution
Information technology implementation in  power distributionInformation technology implementation in  power distribution
Information technology implementation in power distributionDhrubaji Mandal ♛
 
Project report on exploring express cargo
Project report on exploring express cargoProject report on exploring express cargo
Project report on exploring express cargoDhrubaji Mandal ♛
 
Presentation on job specification
Presentation  on  job specificationPresentation  on  job specification
Presentation on job specificationDhrubaji Mandal ♛
 
Project Report On Emotion At Work Place -- Dhrubaji Mandal
Project Report On Emotion At Work Place  -- Dhrubaji Mandal Project Report On Emotion At Work Place  -- Dhrubaji Mandal
Project Report On Emotion At Work Place -- Dhrubaji Mandal Dhrubaji Mandal ♛
 

Mais de Dhrubaji Mandal ♛ (14)

Mongo db dhruba
Mongo db dhrubaMongo db dhruba
Mongo db dhruba
 
Cloud Monitoring tool Grafana
Cloud Monitoring  tool Grafana Cloud Monitoring  tool Grafana
Cloud Monitoring tool Grafana
 
Telecommunication
TelecommunicationTelecommunication
Telecommunication
 
Signaling system 7
Signaling system 7 Signaling system 7
Signaling system 7
 
Dessertation project on BPM in IT Industry
Dessertation project  on BPM in IT Industry Dessertation project  on BPM in IT Industry
Dessertation project on BPM in IT Industry
 
Business Process Management in IT company
Business Process Management  in IT company Business Process Management  in IT company
Business Process Management in IT company
 
SERVICES MANAGEMENT IN HOTEL INDUSTRY
SERVICES  MANAGEMENT IN HOTEL INDUSTRY SERVICES  MANAGEMENT IN HOTEL INDUSTRY
SERVICES MANAGEMENT IN HOTEL INDUSTRY
 
Supply chain presentation (Mumbai Dabba wala)
Supply chain presentation (Mumbai Dabba wala)Supply chain presentation (Mumbai Dabba wala)
Supply chain presentation (Mumbai Dabba wala)
 
Information technology implementation in power distribution
Information technology implementation in  power distributionInformation technology implementation in  power distribution
Information technology implementation in power distribution
 
Project report on exploring express cargo
Project report on exploring express cargoProject report on exploring express cargo
Project report on exploring express cargo
 
Presentation on job specification
Presentation  on  job specificationPresentation  on  job specification
Presentation on job specification
 
Project Report On Emotion At Work Place -- Dhrubaji Mandal
Project Report On Emotion At Work Place  -- Dhrubaji Mandal Project Report On Emotion At Work Place  -- Dhrubaji Mandal
Project Report On Emotion At Work Place -- Dhrubaji Mandal
 
Dividend policy report
Dividend policy reportDividend policy report
Dividend policy report
 
Presentation on Russian culture
Presentation on Russian culturePresentation on Russian culture
Presentation on Russian culture
 

Último

The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxheathfieldcps1
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeThiyagu K
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAssociation for Project Management
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxiammrhaywood
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactdawncurless
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introductionMaksud Ahmed
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajanpragatimahajan3
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionSafetyChain Software
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...fonyou31
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfchloefrazer622
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docxPoojaSen20
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxSayali Powar
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphThiyagu K
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13Steve Thomason
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...EduSkills OECD
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Sapana Sha
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3JemimahLaneBuaron
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room servicediscovermytutordmt
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxShobhayan Kirtania
 

Último (20)

The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
Measures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and ModeMeasures of Central Tendency: Mean, Median and Mode
Measures of Central Tendency: Mean, Median and Mode
 
APM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across SectorsAPM Welcome, APM North West Network Conference, Synergies Across Sectors
APM Welcome, APM North West Network Conference, Synergies Across Sectors
 
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptxINDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
INDIA QUIZ 2024 RLAC DELHI UNIVERSITY.pptx
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
Accessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impactAccessible design: Minimum effort, maximum impact
Accessible design: Minimum effort, maximum impact
 
microwave assisted reaction. General introduction
microwave assisted reaction. General introductionmicrowave assisted reaction. General introduction
microwave assisted reaction. General introduction
 
social pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajansocial pharmacy d-pharm 1st year by Pragati K. Mahajan
social pharmacy d-pharm 1st year by Pragati K. Mahajan
 
Mastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory InspectionMastering the Unannounced Regulatory Inspection
Mastering the Unannounced Regulatory Inspection
 
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
Ecosystem Interactions Class Discussion Presentation in Blue Green Lined Styl...
 
Arihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdfArihant handbook biology for class 11 .pdf
Arihant handbook biology for class 11 .pdf
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
Z Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot GraphZ Score,T Score, Percential Rank and Box Plot Graph
Z Score,T Score, Percential Rank and Box Plot Graph
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111Call Girls in Dwarka Mor Delhi Contact Us 9654467111
Call Girls in Dwarka Mor Delhi Contact Us 9654467111
 
Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3Q4-W6-Restating Informational Text Grade 3
Q4-W6-Restating Informational Text Grade 3
 
9548086042 for call girls in Indira Nagar with room service
9548086042  for call girls in Indira Nagar  with room service9548086042  for call girls in Indira Nagar  with room service
9548086042 for call girls in Indira Nagar with room service
 
The byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptxThe byproduct of sericulture in different industries.pptx
The byproduct of sericulture in different industries.pptx
 

Nginx: Fast, Lightweight Web Server

  • 1. Dhruba Mandal 1 Dhruba Mandal Email : dhruvmandal@gmail.com
  • 2. What is Nginx NGINX is pronounced as "engine-ex" is an open-source, fast, lightweight and high- performance web server that can be used to serve static files. In its initial release, NGINX functioned for HTTP web serving. however Today, it also serves as a reverse proxy server for HTTP, HTTPS, SMTP, IMAP, POP3 protocols, on the other hand, it is also used for HTTP load balancer, HTTP cache, and email proxy for IMAP, POP3, and SMTP. NGINX improves content and application delivery, improves security, and facilitates scalability and availability for the busiest websites on the internet. It has attempt to answer the C10k problem. Where C10k is the challenge of managing (i.e. 10,000 connections at the same time.) Some high profile companies using Nginx include IBM, Google, Atlassian, Autodesk, GitLab, DuckDuckGo, T-Mobile, Microsoft, Adobe, Salesforce, VMware, LinkedIn, Cisco, Twitter, Apple, Intel, Face book, and many more. Dhruba Mandal 2
  • 3. Why Nginx  Nginx Uses single thread and is very fast Moreover it also Provides high level of concurrency  Nginx is Lightweight with small memory footprint (i.e. very small memory)  NGINX provides various services such as reverse proxy, load balancer, and rate limit network services (Note : rate limiting is used to control the rate of traffic sent or received by a network interface controller and is used to prevent DoS attacks.) Reverse proxying is useful if we have multiple web services listening on various ports and we need a single public endpoint to reroute requests internally (i.e. It would allow us to host multiple domain names on port 80 while using a combination of different NodeJs, Go and java to power separate web services behind the scenes.) Nginx can handle the logging, blacklisting, load balancing and serving static files while the web services focus on what they need to do Dhruba Mandal 3
  • 4. Nginx Architecture NGINX follows the master-slave architecture by supporting event-driven, asynchronous and non-blocking mode NGINX architecture is sliced into three different parts. They are 1. master 2. Workers 3. Cache (cache loader & cache manager )Dhruba Mandal Note : An event-driven means it will respond to actions generated by the user or the system. Asynchronous systems do not depend on strict arrival times of signals or messages for reliable operation “Non blocking" means the function will never wait for something to happen. 4
  • 5. 1. Master : Master is responsible for reading and validating configuration ( i.e. it is responsible for creating , binding and processing socket and also responsible for starting terminating and maintaining the configured number of worker process ) It allocate the jobs for the workers as per the request from the client. Once the job allocated to the workers, the master will look for the next request from the client that’s it won’t wait for the response from the workers. Once the response comes from workers master will send the response to the client. Beside above it also controls nonstop binary updates , reopening of locked files and also compiles embedded Perl script 2. Workers : Workers are the slaves in the NGINX architecture, will heed to the Master. Each worker can handle more than 1000 request at a time in a single-threaded manner. Once the process gets done, the response will be sent to the master. The single-threaded will saves the RAM and ROM size by working on the same memory space instead of different memory spaces. The multi-threaded will work on different memory spaces. Dhruba Mandal 5
  • 6. Dhruba Mandal 3. Cache : Nginx cache is used to render the page very fast by getting from cache memory instead of getting from the server. The pages are getting stored in cache memory on the first request for the page. There are two main component of Nginx cache 1. The cache loader 2. The cache manager The cache Loader :. It loads metadata about previously cached data into the shared memory zone . Loading the whole cache at once could consume sufficient resources to slow NGINX performance during the first few minutes after startup. To avoid this, configure iterative loading of the cache by including the following parameters to the proxy_cache_path directive: • loader_threshold – Duration of an iteration, in milliseconds (by default, 200) • loader_files – Maximum number of items loaded during one iteration (by default, 100) • loader_sleeps – Delay between iterations, in milliseconds (by default, 50) The cache manager The cache manager is activated periodically to check the state of the cache. If the cache size exceeds the limit set by the max_size parameter to the proxy_cache_path directive, the cache manager removes the data that was accessed least recently.. 6
  • 7. Features of Nginx Dhruba Mandal Easy Installation and maintenance Improves Performance Reduce the waiting time for users Load Balancing Provides HTTP server capabilities Designed for maximum performance and stability Functions for the proxy server Reverse proxy with caching FastCGI support with caching URL rewriting and redirection Handling of static files, index files, and auto-indexing On the fly upgrades (Means can be upgraded without taking any down time ) 7
  • 8. Understanding Nginx Configuration Dhruba Mandal The core setting of Nginx are mainly configured in nginx.cong . These configuration files are mainly structured into context The Main context are 1. The Event Context & 2. The Http Context Each and every context can have nested context that inherit every thing from their parents but can also override the setting as needed . Beside above context there are some important some important parameter in files are 1. worker _processes 2. worker _connections 3. access.log & error.log & 4. gzip 8
  • 9. Dhruba Mandal 1. The event Context : The “events” context is contained within the “main” context. It is used to set global options that affect how Nginx handles connections at a general level. There can only be a single events context defined within the Nginx configuration. This context will look like this in the configuration file, outside of any other bracketed contexts: # main context events { # events context . . . } Nginx uses an event-based connection processing model, so the directives defined within this context determine how worker processes should handle connections. Mainly, directives found here are used to either select the connection processing technique to use, or to modify the way these methods are implemented. 9
  • 10. Dhruba Mandal 2. The HTTP Context : When configuring Nginx as a web server or reverse proxy, the “http” context will hold the majority of the configuration. This context will contain all of the directives and other contexts necessary to define how the program will handle HTTP or HTTPS connections. The http context is a sibling of the events context, so they should be listed side-by-side, rather than nested. They both are children of the main context: # main context events { # events context . . . } http { # http context . . . } 10
  • 11. Dhruba Mandal 2a. The Server Context : The “server” context is declared within the “http” context. nested, It is nested bracketed context that allows for multiple declarations The general format for server context may look something like # main context http { # http context server { # first server context } server { # second server context } } The reason for allowing multiple declarations of the server context is that each instance defines a specific virtual server to handle client requests. You can have as many server blocks as you need, each of which can handle a specific subset of connections. 11
  • 12. Dhruba Mandal 2b. Location contexts : Location contexts share many relational qualities with server contexts. As server context multiple location contexts can be defined, each location is used to handle a certain type of client request, and each location is selected by virtue of matching the location definition against the client request through a selection algorithm. The general syntax looks like this: location match_modifier location_match { . . . } Location blocks live within server blocks (or other location blocks) and are used to decide how to process the request URI (the part of the request that comes after the domain name or IP address/port). Beside above context other optional context are like : upstream context , split_clients context , map context , geo context , types context , charset_map etc. 12
  • 13. Dhruba Mandal 3. worker _processes : worker process is the setting that find the number of Nginx worker process (the default is 1) In most of cases , running one work processes per CPU core work well . How ever recommended setting is auto 4. Worker _connections : Worker connection is maximum number of connections that each worker _processes can handle simultaneously . By default it is 512 , but many system have enough resources to support large number (i.e. 1024 , 2048 per work processes or even more ) 5. access.log & error.log : This is used to log any error in Nginx and is also used for debugging purpose 6 . gzip : These are the setting for gzip compression on Nginx response 13
  • 14. Difference B/W Apache and Nginx Dhruba Mandal Apache Nginx Apache is designed to be a web server Nginx is both a web server and a proxy server. A single thread can only process one connection A single thread can handle multiple connections. Apache follows multi-threaded approach to process client requests Nginx uses an event-driven approach to serve client requests. It handles dynamic content within the web server itself. It cannot process dynamic content natively It cannot process multiple requests concurrently with heavy web traffic. It can process multiple client requests concurrently and efficiently with limited hardware resources Modules are dynamically loaded or unloaded making it more flexible. The modules cannot be loaded dynamically. They must be compiled within the core software itself Apache runs on all Unix like systems such as Linux, BSD, etc. as well as completely supports Windows. Nginx runs on modern Unix like systems; however it has limited support for Windows. 14
  • 15. Installing NGINX on Redhat/CentOS Dhruba Mandal Step 1: Use the following command to install the Epel repository. sudo yum install epel-release Step 2: Use the following command to update the repository sudo yum update Step 3: Install Nginx using the following command: yum install nginx Step 4 : Nginx does not start on its own. To get Nginx running, type: sudo systemctl start nginx If you are running a firewall, run the following commands to allow HTTP and HTTPS traffic: sudo firewall-cmd --permanent --zone=public --add-service=http sudo firewall-cmd --permanent --zone=public --add-service=https sudo firewall-cmd --reload Enable Nginx to start when your system boots with below command sudo systemctl enable nginx 15
  • 16. Directories & files in Nginx Dhruba Mandal /etc/nginx/ : This directory contains default configuration root for Nginx Server configuration files with in this instruct nginx how to behave /etc/nginx/nginx.conf : This is the main configuration file of Nginx here global setting like worker process , tuning , logging ,loading dynamic modules and references to other Nginx configuration files are defined. /etc/nginx/conf.d/ : This directory contains default HTTP server configuration files . files in this directory ending in .conf are included in the top-level http block with in /etc/nginx/nginx.conf In some package repositories, this folder is named sites-enabled, and configuration files are linked from a folder named site-available /var/log/nginx/ : The /var/log/nginx/ directory is default log location for Nginx , access.log and error.log are found here which is used for debug information if the debug module is enabled 16
• 17. Some Nginx Commands Dhruba Mandal
nginx -h : Shows the NGINX help menu.
nginx -v : Shows the NGINX version.
nginx -V : Shows the NGINX version, build information, and configuration arguments.
nginx -t : Tests the NGINX configuration.
nginx -T : Tests the NGINX configuration and prints the validated configuration to the screen.
nginx -s signal : The -s flag sends a signal to the NGINX master process. You can send signals such as stop, quit, reload, and reopen.
systemctl start nginx : Starts the nginx process (you can also use systemctl restart nginx).
systemctl stop nginx : Stops the nginx process.
systemctl status nginx : Shows the nginx status, either running or failed. 17
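A common workflow combining two of these commands (shown here as an illustrative example, not taken from the slide) is to validate a configuration change before applying it without downtime:
sudo nginx -t && sudo nginx -s reload   # reload only if the configuration test passes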
• 18. Serving Static Content in Nginx Dhruba Mandal
For serving static content, overwrite the default HTTP server configuration located in /etc/nginx/conf.d/default.conf with the following NGINX configuration example:
server {
    listen 80 default_server;
    server_name www.example.com;
    location / {
        root /usr/share/nginx/html;
        # alias /usr/share/nginx/html;
        index index.html index.htm;
    }
}
Note : in some cases the conf.d directory is not present; in that case create the directory and adjust the include path in /etc/nginx/nginx.conf. On my system the include pointed to /usr/share/nginx/modules/*.conf, so I changed the path to /etc/nginx/conf.d/*.conf. 18
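After saving the file, a minimal way to verify the setup (an illustrative check, not part of the slide) is to drop a test page into the document root and request it:
echo "hello from nginx" | sudo tee /usr/share/nginx/html/index.html
sudo nginx -t && sudo systemctl reload nginx
curl http://localhost/          # should print: hello from nginx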
• 19. HTTP Load Balancing in Nginx Dhruba Mandal
We can use Nginx as a load balancer in front of our web application.
Scenario : In our case we will be load balancing across 2 Tomcat instances with 1 Nginx. Here we have 2 servers, both on CentOS 7 Linux:
1. 10.0.205.53 (Nginx + Tomcat 1)
2. 10.0.205.50 (Tomcat 2) 19
• 20. Types of load balancing in Nginx Dhruba Mandal
Nginx supports the following three types of load balancing:
1. Round Robin
2. Least-Connected
3. IP-Hash
1. Round-robin : This is the default type for Nginx, which uses the typical round-robin algorithm to decide where to send the incoming request (i.e. requests go one by one to every server).
2. Least-connected : As the name suggests, the incoming request is sent to the server that has the fewest active connections.
3. IP-hash : This is helpful when you want persistent or sticky connections for incoming requests. In this type, the client's IP address is used to decide which server the request should be sent to. 20
• 21. Dhruba Mandal
First, define the upstream: specify a unique name (it may be the name of your application) and list all the servers that will be load-balanced by this Nginx. In my case I have named it proxy1 (upstream proxy1).
Here, inside /etc/nginx/conf.d, I have created a file named loadbalancer.conf in which I have defined both of my servers (the file name can be anything, but it should end in .conf); a sketch of it is shown below.
1. Round-robin method : This is the default type for Nginx, which uses the typical round-robin algorithm to decide where to send the incoming request (i.e. requests go one by one to every server). 21
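The slide itself shows this file as a screenshot; based on the servers and upstream name described above (and the combined configuration on slide 23), loadbalancer.conf would look roughly like this:
upstream proxy1 {
    server 10.0.205.53:8080;   # Tomcat 1
    server 10.0.205.50:8080;   # Tomcat 2
}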
• 22. Dhruba Mandal
Second, proxy_pass: specify the unique upstream name that was defined in the previous step in a proxy_pass directive inside your "location" section, which sits under the "server" section as shown below. You can define this in the same file or create another config file; I have created default.conf.
In proxy_pass I have given the complete location of my URL with its application path (http://10.0.205.53/ideavodapanel), so that if I enter only 10.0.205.53 in my browser it will take me to the whole path. 22
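The slide shows this configuration as a screenshot; assuming it matches the combined example on the next slide, the server block in default.conf would look roughly like this:
server {
    listen 80 default_server;
    server_name _;
    location / {
        # forward requests to the upstream group defined in loadbalancer.conf
        proxy_pass http://proxy1/IdeaVodaPanel/;
    }
}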
• 23. Dhruba Mandal
Instead of 2 files, you can also define everything in one file:
vim /etc/nginx/conf.d/default.conf
upstream proxy1 {
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
}
server {
    listen 80 default_server;
    server_name _;
    access_log /etc/nginx/logs/nginx_access.log;
    error_page 404 /404.html;
    location = /404.html {
        root /usr/share/nginx/html;
    }
    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root /usr/share/nginx/html;
    }
    ######################################################
    location / {
        proxy_pass http://proxy1/IdeaVodaPanel/;
        #include /etc/nginx/conf.d/proxy_var;
        #include /etc/nginx/conf.d/en_compress;
    }
} 23
• 24. Dhruba Mandal
2. Least-Connected : As the name suggests, the incoming request will be sent to the server that has the fewest active connections. For this, add the keyword "least_conn" at the top of the upstream block as shown below; the rest stays the same as the round-robin configuration.
If several servers are listed under least_conn, and multiple servers have a similarly low number of existing active connections, then among those servers one will be picked based on weighted round-robin.
Note : The disadvantage of the round-robin and least-connected methods is that subsequent connections from a client will not necessarily go to the same server in the pool. This may be OK for a non-session-dependent application, but if your application depends on sessions, then once an initial connection is established with a particular server, you want all future connections from that particular client to go to the same server. 24
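The slide shows the least_conn configuration as a screenshot; assuming the same two backend servers as before, it would look roughly like this:
upstream proxy1 {
    least_conn;                # pick the backend with the fewest active connections
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
}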
• 25. Dhruba Mandal
3. The ip_hash method : IP-hash is helpful when you want persistent or sticky connections for incoming requests. In this type, the client's IP address is used to decide which server the request should be sent to.
It specifies that the group should use a load balancing method where requests are distributed between servers based on client IP addresses. The first three octets of the client IPv4 address, or the entire IPv6 address, are used as the hashing key. The method ensures that requests from the same client will always be passed to the same server, except when that server is unavailable.
Fig A:
upstream proxy1 {
    ip_hash;
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
}
If one of the servers needs to be temporarily removed, it should be marked with the down parameter in order to preserve the current hashing of client IP addresses, as in Fig B:
Fig B:
upstream proxy1 {
    ip_hash;
    server 10.0.205.53:8080;
    server 10.0.205.50:8080 down;
}
25
• 26. Weight Options for the Individual Servers Dhruba Mandal
You can also specify a weight for a particular server in your pool. By default, all servers have equal priority (weight), i.e. the default value of weight is 1. But you can change this behavior by assigning a weight to a server as shown below.
upstream proxy1 {
    server 10.0.205.53:8080;
    server 10.0.205.50:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080;
}
In this example, we have a total of 5 servers, but the weight on the 3rd server is 2. This means that out of every 6 new requests, 2 requests will go to the 3rd server and each of the other servers will get 1 request. You can use weight with least_conn and ip_hash as well. 26
• 27. Timeout Options for the Individual Servers (max_fails and fail_timeout) Dhruba Mandal
We can also specify max_fails and fail_timeout for a particular server as shown below:
upstream proxy1 {
    server 10.0.205.53:8080 max_fails=3 fail_timeout=30s;
    server 10.0.205.50:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080;
}
The default fail_timeout is 10 seconds; however, in this example it is set to 30 seconds. This means that if there are x failed attempts within 30 seconds (as defined by max_fails), the server will be considered unavailable, and it will remain unavailable for 30 seconds.
The default max_fails is 1 attempt; here it is set to 3 attempts. This means that after 3 unsuccessful attempts to connect to this particular server, Nginx will consider the server unavailable for the duration of fail_timeout, which is 30 seconds. 27
• 28. Backup Server in Nginx Load Balancer Pool Dhruba Mandal
upstream proxy1 {
    server 10.0.205.53:8080 max_fails=3 fail_timeout=30s;
    server 10.0.205.50:8080;
    server 192.168.101.3:8080 weight=2;
    server 192.168.101.4:8080;
    server 192.168.101.5:8080 backup;
}
In the above example, the 5th server is marked as a backup using the "backup" keyword at the end of the server parameter. This makes the 5th server (192.168.101.5) a backup server: incoming requests will not be passed to it unless all of the other 4 servers are down. 28
• 29. Nginx Reverse Proxy Dhruba Mandal 29
A reverse proxy is a server that sits between internal applications and external clients, forwarding client requests to the appropriate server.
• 30. Benefit of Nginx Reverse Proxy Dhruba Mandal 30
Load Balancing - A reverse proxy can perform load balancing, which helps distribute client requests evenly across backend servers.
Increased Security - A reverse proxy also acts as a line of defense for your backend servers.
Better Performance - Nginx is known to perform better than Apache at delivering static content. Therefore, with an Nginx reverse proxy, requests for static content can be handled by Nginx while requests for dynamic content are passed on to the backend Apache server.
Serve Cached Content - Reverse proxies can also be used to serve cached content, making data available faster.
Easy Logging and Auditing - Since there is only one single point of access when a reverse proxy is implemented, logging and auditing become much simpler.
• 31. Configuring NGINX As a Reverse Proxy Dhruba Mandal 31
1. Install Nginx (for installation steps, refer to slide no. 15).
2. If you are running a firewall, run the following commands to allow HTTP and HTTPS traffic:
sudo firewall-cmd --permanent --zone=public --add-service=http
sudo firewall-cmd --permanent --zone=public --add-service=https
sudo firewall-cmd --reload
3. Enable Nginx to start when your system boots with the command below:
sudo systemctl enable nginx
4. Go to the path /etc/nginx/conf.d.
a. Create a file inside it with any name ending in .conf; for example, here I am creating a file with the name reverse.conf.
• 32. Dhruba Mandal 32
5. Open the file with vim filename and add the configuration lines below:
vim reverse.conf
server {
    listen 80;
    listen [::]:80;
    server_name mongod1;    # YOUR DOMAIN NAME
    location / {
        proxy_redirect off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header Host $host;
        proxy_pass http://10.0.205.53:9100;    # YOUR TOMCAT IP ADDRESS and port
    }
}
In the above, mongod1 is my computer's domain name, 10.0.205.53 is my IP, and 9100 is my backend port.
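Once reverse.conf is in place (and the include on the next slide is confirmed), a simple way to check that requests are being proxied is shown below; the hostname is an assumption based on the server_name above and must resolve to the Nginx host:
sudo nginx -t && sudo systemctl reload nginx
curl -I http://mongod1/        # response headers should come back from the backend via nginx on port 80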
• 33. Dhruba Mandal 33
6. In your main nginx config file (i.e. nginx.conf), add the path of the directory where your configuration file is located (in our case it is /etc/nginx/conf.d/), so we have included /etc/nginx/conf.d/*.conf; except for the lines shown below, comment out all other lines using #.
user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
events {
    worker_connections 1024;
}
http {
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    access_log /var/log/nginx/access.log main;
    sendfile on;
    tcp_nopush on;
    tcp_nodelay on;
    keepalive_timeout 65;
    types_hash_max_size 2048;
    include /etc/nginx/conf.d/*.conf;
    default_type application/octet-stream;
}
Now check the configuration using nginx -t and restart nginx with the command systemctl restart nginx.