Nginx snippets you can master in 30 seconds

Original link: http://jartto.wang/2023/03/12/30-seconds-of-nginx/

As a programmer, you cannot avoid deployment in day-to-day development. Once you step onto the "infrastructure" road, you will find that Nginx is a hurdle you cannot get around. It is no exaggeration to say that Nginx holds up half the sky!

You may object: we have a professional operations (OP) team, so why bother? In reality, however, OP is swamped with tickets every day and you are always waiting in the queue. This is true at large companies and even more so at small ones. Stocking up on some Nginx knowledge will therefore let you get twice the result with half the effort.

This article summarizes 15 Nginx configuration snippets that appear frequently in daily development. Because they are short, each takes only 30 seconds to master.

1. Cross-domain configuration

Because of the browser's same-origin policy, front-end developers very frequently have to deal with cross-origin requests. The following is a common way to enable CORS:

```nginx
if ($request_method = OPTIONS) {
    add_header "Access-Control-Allow-Origin" *;
    add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
    add_header "Access-Control-Allow-Headers" "Authorization, Origin, X-Requested-With, Content-Type, Accept";
    add_header "Access-Control-Max-Age" 600;
    return 200;
}
```
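Note that the snippet above only answers the OPTIONS preflight; the browser also requires Access-Control-Allow-Origin on the actual response. A sketch assuming a proxied /api/ location (the location path and the backend upstream name are placeholders); the always parameter makes the header appear on 4xx/5xx responses too:

```nginx
location /api/ {
    # CORS header for actual (non-preflight) responses;
    # "always" also adds it to error responses.
    add_header "Access-Control-Allow-Origin" * always;

    if ($request_method = OPTIONS) {
        add_header "Access-Control-Allow-Origin" *;
        add_header "Access-Control-Allow-Methods" "GET, POST, OPTIONS, HEAD";
        add_header "Access-Control-Max-Age" 600;
        return 200;
    }

    proxy_pass http://backend;
}
```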

2. Enable Gzip compression

To compress common file types, you can refer to the following configuration:

```nginx
http {
    include conf/mime.types;
    gzip on;
    gzip_min_length 1000;
    gzip_buffers 4 8k;
    gzip_http_version 1.1;
    gzip_types text/plain application/x-javascript text/css application/xml application/javascript application/json;
}
```

Gzip has many more parameters, including gzip_comp_level, gzip_proxied, etc. For details, see: Nginx Configuration – Gzip Compression.
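The parameters just mentioned can be tuned like this; the values shown are common middle-ground choices, not universal recommendations:

```nginx
http {
    gzip on;
    # Compression level 1-9: higher saves more bandwidth but costs
    # more CPU; 4-6 is a common middle ground.
    gzip_comp_level 5;
    # Also compress responses to proxied requests (detected via the
    # "Via" request header).
    gzip_proxied any;
    # Emit "Vary: Accept-Encoding" so caches store both variants.
    gzip_vary on;
}
```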

3. Passing cookies across domains

Since Chrome 80, cookies are not sent cross-site by default unless the server sets the SameSite attribute (Strict, Lax, or None) on the Set-Cookie response header.

  • Strict is the most stringent: third-party cookies are prohibited entirely, and the cookie is never sent on cross-site requests. In other words, the cookie is attached only when the current page and the request target are same-site. This rule can be too strict and hurt the user experience: if the current page contains a GitHub link, clicking it will not carry the user's GitHub cookie, so the link always opens in a logged-out state.
  • None removes the SameSite restriction, but the cookie can then only be sent over HTTPS: the Secure attribute must be set at the same time, otherwise the cookie is rejected.

    ```
    Set-Cookie: widget_session=abc123; SameSite=None; Secure
    ```
  • Lax is slightly more relaxed: third-party cookies are not sent in most cases, with the exception of GET requests that navigate to the target URL.
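If you control the proxy but not the backend, Nginx 1.19.3 and later can rewrite the flags on upstream Set-Cookie headers with proxy_cookie_flags. A minimal sketch; the cookie name widget_session is a placeholder:

```nginx
location /api/ {
    proxy_pass http://localhost:4000;
    # Force SameSite=None plus Secure/HttpOnly on the upstream's
    # Set-Cookie for "widget_session" (requires nginx 1.19.3+).
    proxy_cookie_flags widget_session samesite=none secure httponly;
}
```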

Another approach is a proxy_pass reverse proxy. If only the host and port change, the cookie is not lost, and the browser will send it again on subsequent visits. If the path changes, however, you also need to rewrite the cookie path:

```nginx
location /foo {
    proxy_pass http://localhost:4000;
    proxy_cookie_path /foo "/; SameSite=None; HTTPOnly; Secure";
}
```

4. Health check

Nginx actively sends a check request to each backend upstream server at the configured interval to verify its status.
If a server fails more than a set number of times in a row, for example 3, it is marked as down and requests are no longer forwarded to it. The backend generally needs to expose a low-cost endpoint for this kind of health check. Note that the check directives below come from the third-party nginx_upstream_check_module (bundled with Tengine); stock open-source Nginx only supports passive checks via max_fails and fail_timeout.

```nginx
http {
    # An upstream load-balancing group named "evalue"
    upstream evalue {
        # Nodes in the group. Without the weight parameter the default
        # is round robin; with weight it becomes weighted round robin.
        server 192.168.90.100:9999 weight=100;
        server 192.168.90.101:9999 weight=100;

        # interval=3000: check every 3 seconds; rise=3: healthy after 3
        # consecutive successes; fall=5: unhealthy after 5 consecutive
        # failures; timeout=3000: health-check timeout of 3 seconds;
        # type=http: HTTP-type check.
        check interval=3000 rise=3 fall=5 timeout=3000 type=http;

        # check_http_send sets the check request: method, URL, protocol.
        check_http_send "HEAD /api/v1/health HTTP/1.0\r\n\r\n";

        # Response statuses considered healthy.
        check_http_expect_alive http_2xx http_3xx;
    }
}
```

5. Wildcard domain resolution

To configure wildcard domain resolution in Nginx, use the wildcard character * so that all subdomains point to the same site. Here is a simple example:

```nginx
server {
    listen 80;
    server_name *.jartto.com;
    root /var/www/jartto.com;
}
```

The above configuration points all domains ending in .jartto.com to the site under the /var/www/jartto.com directory. To do the same for a second-level domain, use:

```nginx
server {
    listen 80;
    server_name *.sub.jartto.com;
    root /var/www/sub.jartto.com;
}
```

The above configuration points all subdomains ending in .sub.jartto.com to the site under the /var/www/sub.jartto.com directory. Note that wildcard resolution can introduce security problems, so use it only when necessary and protect the site accordingly.
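If each subdomain should serve its own directory instead of one shared root, a regular-expression server_name with a named capture can be used. A sketch; the /var/www/&lt;subdomain&gt; layout is an assumption:

```nginx
server {
    listen 80;
    # Capture the subdomain into $sub, e.g. "blog" for blog.jartto.com.
    server_name ~^(?<sub>.+)\.jartto\.com$;
    # Each subdomain gets its own document root, e.g. /var/www/blog.
    root /var/www/$sub;
}
```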

6. Use $request_id to implement link tracking

Nginx introduced the built-in variable $request_id in version 1.11.0. It is a 32-character random hexadecimal string. Although its guarantees are weaker than a true UUID's, the collision probability of a 32-character random string is negligible, so it can generally be treated as one.

```nginx
location ^~ /habo/gid {
    add_header Cache-Control no-store;
    default_type application/javascript;
    set $unionId $cookie_GID;
    if ($unionId = "") {
        set $unionId $request_id;
        add_header Set-Cookie "GID=${unionId};path=/habo/;max-age=${GID_MAX_AGE}";
    }
    return 200 "document.cookie='GID=${unionId};path=/;max-age=${GID_MAX_AGE}'";
}
```
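A common companion to the snippet above is to pass $request_id to the upstream service and write it into the access log, so front-end and back-end logs can be joined on the same ID. A sketch; the header name, log path, and backend upstream are assumptions:

```nginx
http {
    # Include $request_id in each access-log line for correlation.
    log_format trace '$remote_addr - $request_id "$request" $status';

    server {
        access_log /var/log/nginx/access_trace.log trace;

        location / {
            # Forward the ID so the backend can log the same value.
            proxy_set_header X-Request-Id $request_id;
            proxy_pass http://backend;
        }
    }
}
```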

7. Rate limiting

Nginx provides two throttling mechanisms: limiting the request rate and limiting the number of concurrent connections.

Rate limiting caps how many requests are accepted per unit of time; connection limiting caps how many requests are being processed at the same time.
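Connection limiting uses the limit_conn_zone and limit_conn directives. A minimal sketch; the /download/ location and the limit of 2 are illustrative values:

```nginx
http {
    # Counters keyed by client IP, in 10 MB of shared memory.
    limit_conn_zone $binary_remote_addr zone=addr:10m;

    server {
        location /download/ {
            # At most 2 simultaneous connections per client IP.
            limit_conn addr 2;
        }
    }
}
```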

The following is a simple rate-limiting example using the leaky-bucket algorithm:

```nginx
http {
    limit_req_zone $binary_remote_addr zone=one:10m rate=1r/s;

    server {
        location / {
            limit_req zone=one burst=5;
            proxy_pass http://backend;
        }
    }
}
```

In the configuration above, limit_req_zone defines a shared memory zone named one, keyed by the client IP address. The zone occupies at most 10 MB and allows 1 request per second per key. The limit_req directive then enables rate limiting in the location block. Setting burst to 5 means that when a client briefly exceeds the rate, up to 5 excess requests are queued rather than rejected outright.
In practice, the parameter values need to be tuned to your situation for the best results, for example:

```nginx
map $http_baggage_flow $plimit {
    "ptest" $server_name;
    default "";
}

limit_req_zone $plimit zone=prelimit:10m rate=600r/s;

server {
    listen 443 ssl;
    server_name m.gaotu100.com;
    limit_req zone=prelimit nodelay;
    limit_req_status 530;

    location = /530.html {
        default_type application/json;
        return 200 '{"status": 530}';
    }
    ...
}
```

8. History-mode routing refresh problem

When a vue-router + webpack single-page project is deployed, refreshing the page on a nested route returns a 404. The usual configuration fix is:

```nginx
location / {
    root html;
    try_files $uri $uri/ @router;
    index index.html index.htm;
}

location @router {
    rewrite ^.*$ /index.html last;
}
```

9. Cookie-based routing

The most common scenario is a canary (grayscale) release, where Nginx inspects a cookie from the front end and routes the traffic accordingly.

```nginx
map $http_cookie $m_upstream {
    ~*baggage-version=isolute-feat-.*$ al-bj-sre-k8s-test-istio-gateway;
    default test.jartto.com;
}

upstream al-bj-sre-k8s-test-istio-gateway {
    server 47.95.128.11:80;
}
```
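The map only takes effect once its value is used as a proxy target. A sketch completing the snippet above (the listen port is an assumption); note that proxy_pass with a variable needs a resolver directive when the value is a domain name:

```nginx
server {
    listen 80;

    location / {
        # Routes to the canary upstream when the cookie matches,
        # otherwise to the default host.
        proxy_pass http://$m_upstream;
        proxy_set_header Host $host;
    }
}
```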

10. Serve WebP images on the server side

The goal here is to serve images in WebP format. If you are not familiar with WebP yet, see: Analysis and practice of WebP scheme.

```nginx
map $http_accept $webp_suffix {
    default "";
    "~*webp" ".webp";
}
```

```nginx
location ~* ^/_nuxt/img/(.+\.png|jpe?g)$ {
    rewrite ^/_nuxt/img/(.+\.png|jpe?g)$ /$1 break;
    root /apps/srv/instance/test-webp.gaotu100.com/.nuxt/dist/client/img/;
    add_header Vary Accept;
    try_files $uri$webp_suffix $uri =404;
    expires 30d;
}
```

11. Load balancing

There are usually four algorithms for load balancing:

  • Round robin, the default: requests are distributed to the backend servers one by one in order; if a backend goes down, it is automatically removed;
  • weight: weighted distribution; the higher the weight, the higher the probability of being chosen; useful when backend servers have uneven capacity;
  • ip_hash: requests are distributed by a hash of the client IP, so each visitor consistently reaches the same backend server, which solves the session-sharing problem for dynamic sites. Without it, each request may land on any server in the cluster, and a user who logged in on one server would lose the session when routed to another, which is clearly unacceptable;
  • fair (third party): requests are distributed by backend response time, shortest first; this depends on the third-party module nginx-upstream-fair, which must be installed before use.
```nginx
http {
    upstream jartto-server {
        # ip_hash;  # ip_hash method
        # fair;     # fair method
        server 127.0.0.1:8081;  # load-balancing target service addresses
        server 127.0.0.1:8080;
        server 127.0.0.1:8082 weight=10;  # weighted; default weight is 1
    }

    server {
        location / {
            proxy_pass http://jartto-server;
            proxy_connect_timeout 10;
        }
    }
}
```

12. Configuring HTTPS

To configure HTTPS on Nginx , you can follow these steps:

  1. Install the SSL module for Nginx. This can be done by compiling Nginx with the --with-http_ssl_module option or by installing a prebuilt package that includes it.
  2. Obtain an SSL certificate for your domain from a trusted Certificate Authority (CA), either by purchasing one or by getting a free certificate from Let's Encrypt.
  3. Configure Nginx to use the certificate and key files by adding the following lines to your configuration file:

```nginx
ssl_certificate     /path/to/jartto.crt;
ssl_certificate_key /path/to/jartto.key;
```
  4. Configure Nginx to redirect HTTP requests to HTTPS if needed. This can be done using a server block that listens on port 80 and redirects all requests to port 443 (the default HTTPS port).

  5. Restart Nginx to apply the changes.

Below is an example Nginx configuration file that enables HTTPS and redirects HTTP requests to HTTPS :

```nginx
server {
    listen 80;
    server_name jartto.com www.jartto.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name jartto.com www.jartto.com;

    ssl_certificate     /path/to/jartto.crt;
    ssl_certificate_key /path/to/jartto.key;
    # Other SSL-related settings go here
    # Other server block settings go here
}
```

The configuration file above listens on both ports 80 and 443 , but serves content over HTTPS only on port 443 . All HTTP requests are redirected to their HTTPS URL equivalents using the return statement in the first server block.
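The "other SSL-related settings" usually cover protocol and session tuning at a minimum. A sketch of common hardening directives; the values shown are widespread defaults, not requirements:

```nginx
server {
    listen 443 ssl;
    server_name jartto.com;

    ssl_certificate     /path/to/jartto.crt;
    ssl_certificate_key /path/to/jartto.key;

    # Drop legacy protocols; allow TLS 1.2 and 1.3 only.
    ssl_protocols TLSv1.2 TLSv1.3;
    # Cache TLS sessions to cut handshake cost on repeat visits.
    ssl_session_cache shared:SSL:10m;
    ssl_session_timeout 10m;
}
```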

13. Image hotlink protection

If you don't want your images hotlinked by external sites, you can configure hotlink protection as follows:

```nginx
server {
    listen 80;
    server_name *.jartto.com;

    # Image hotlink protection
    location ~* \.(gif|jpg|jpeg|png|bmp|swf)$ {
        # Allow direct visits (none/blocked), our own server names,
        # and the whitelisted referrers below.
        valid_referers none blocked server_names ~\.google\. ~\.baidu\. *.qq.com;
        if ($invalid_referer) {
            return 403;
        }
    }
}
```

14. Configure multiple servers

To configure multiple servers in Nginx , multiple server blocks can be defined in the Nginx configuration file. Each server block represents a separate virtual server that can listen on a different port or IP address and serve different content. Here is an example of how to configure two virtual servers in Nginx :

```nginx
http {
    server {
        listen 80;
        server_name www.jartto.com;
        root /var/www/jartto.com;

        # other server configuration directives
    }

    server {
        listen 80;
        server_name www.another-jartto.com;
        root /var/www/another-jartto.com;

        # other server configuration directives
    }
}
```

In the above example, we defined two virtual servers listening on port 80 . The first virtual server is configured to serve content for the domain www.jartto.com from the directory /var/www/jartto.com . The second virtual server is configured to serve content for the domain www.another-jartto.com from the directory /var/www/another-jartto.com . Each server block can have its own set of configuration directives, such as SSL certificates, access logs, error pages, and more.

By defining multiple server blocks in the Nginx configuration file, we can host multiple websites or applications on a single Nginx instance.
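When several name-based servers share a port, it is also worth declaring which one handles requests whose Host header matches none of them. A sketch using default_server; returning 444 (close the connection without a response) is one common choice:

```nginx
server {
    # Catch-all for Host headers that match no other server_name.
    listen 80 default_server;
    server_name _;
    return 444;
}
```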

15. Dynamically modify upstreams

ngx_dynamic_upstream is a third-party module for manipulating upstreams at runtime through an HTTP API, similar to ngx_http_upstream_conf. If you want to modify Nginx upstream configuration without a reload, try the following:

```nginx
upstream backends {
    zone zone_for_backends 1m;
    server 127.0.0.1:6001;
    server 127.0.0.1:6002;
    server 127.0.0.1:6003;
}

server {
    listen 6000;

    location /dynamic {
        allow 127.0.0.1;
        deny all;
        dynamic_upstream;
    }

    location / {
        proxy_pass http://backends;
    }
}
```

The usage is as follows:

```shell
$ curl "http://127.0.0.1:6000/dynamic?upstream=zone_for_backends&verbose="
server 127.0.0.1:6001 weight=1 max_fails=1 fail_timeout=10;
server 127.0.0.1:6002 weight=1 max_fails=1 fail_timeout=10;
server 127.0.0.1:6003 weight=1 max_fails=1 fail_timeout=10;
```
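Beyond listing servers, the module's API also accepts parameters to modify them. A sketch based on the parameters described in the module's documentation (verify the exact parameter names against your installed version):

```shell
# Temporarily take one backend out of rotation.
curl "http://127.0.0.1:6000/dynamic?upstream=zone_for_backends&server=127.0.0.1:6003&down="
# Bring it back into rotation.
curl "http://127.0.0.1:6000/dynamic?upstream=zone_for_backends&server=127.0.0.1:6003&up="
```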
