Performance Optimization & Tuning in NGINX

NGINX is known for its speed and scalability, and a critical part of achieving that performance is fine-tuning its configuration. Whether you're serving static files, dynamic content, or acting as a reverse proxy, optimizing NGINX is key to handling high traffic loads and ensuring low latency.


1. Optimizing Worker Processes & Worker Connections

NGINX uses worker processes to handle incoming requests. The number of workers and the maximum number of connections per worker directly impact how many simultaneous requests your server can manage.

Key Concepts

  • Worker Processes: Ideally set to the number of CPU cores or use auto to let NGINX decide.

  • Worker Connections: Defines how many connections each worker can handle simultaneously.

Example Configuration

# In your nginx.conf

worker_processes auto;  # Automatically sets the number of processes based on available CPU cores

events {
    worker_connections 1024;  # Each worker can handle 1024 connections
    multi_accept on;          # Accept as many connections as possible at once
}

This configuration ensures that NGINX efficiently utilizes available system resources.
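
As a rough rule, the maximum number of simultaneous clients is worker_processes × worker_connections, and each of those connections consumes a file descriptor (two when proxying, one to the client and one to the upstream). If you raise worker_connections, the per-worker file descriptor limit usually needs to go up with it. A minimal sketch, assuming the OS-level limits allow these values:

# In your nginx.conf (main context)
worker_processes auto;
worker_rlimit_nofile 8192;    # Allow each worker process to open up to 8192 file descriptors

events {
    worker_connections 4096;  # Keep this below worker_rlimit_nofile; proxied requests use two descriptors each
    multi_accept on;
}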


2. Understanding keepalive, sendfile, tcp_nodelay, and tcp_nopush

These directives help optimize the way NGINX handles client connections and data transmission.

Key Directives

  • keepalive_timeout: Determines how long a connection is kept open for reuse.

  • sendfile: Lets the kernel copy file data straight from disk to the socket (zero-copy), avoiding round trips through user space and reducing context switching.

  • tcp_nodelay: Disables Nagle’s algorithm, sending small packets immediately.

  • tcp_nopush: Used together with sendfile; waits until packets are full (sending the response headers and the start of the file together) to reduce the number of packets on the wire.

Example Configuration

http {
    keepalive_timeout 65;  # Keeps connections open for reuse up to 65 seconds
    sendfile on;           # Enable zero-copy file transmission
    tcp_nodelay on;        # Disable Nagle's algorithm to reduce latency
    tcp_nopush on;         # Optimize packet transmission (commonly used with sendfile)
}

By combining these directives, you ensure that your static and dynamic content is delivered with minimal latency.
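
Note that keepalive_timeout applies to client connections; connections to upstream servers can be reused as well, which avoids repeated TCP (and TLS) handshakes when proxying. A minimal sketch, assuming a hypothetical upstream group named backend_app:

http {
    upstream backend_app {
        server 127.0.0.1:8080;
        keepalive 32;                       # Keep up to 32 idle connections to the upstream per worker
    }

    server {
        location / {
            proxy_pass http://backend_app;
            proxy_http_version 1.1;         # Upstream keepalive requires HTTP/1.1
            proxy_set_header Connection ""; # Clear the Connection header so "close" is not forwarded
        }
    }
}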


3. GZIP Compression & Brotli Compression

Compression is essential for reducing the size of data sent over the network. NGINX supports both GZIP and Brotli compression to improve load times.

GZIP Compression

GZIP is widely used and easy to set up.

Example Configuration

http {
    gzip on;
    gzip_comp_level 5;            # Compression level (1-9)
    gzip_min_length 256;          # Only compress responses larger than 256 bytes
    gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss;
    gzip_vary on;
}

Brotli Compression

Brotli typically provides better compression ratios than GZIP at comparable speeds. NGINX does not ship with Brotli support, so ensure the Brotli module is installed before using these directives.

Example Configuration

http {
    brotli on;
    brotli_comp_level 5;          # Adjust the compression level as needed
    brotli_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss;
}
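
The brotli directives above come from a third-party module (for example google/ngx_brotli) rather than the standard NGINX build. Assuming the module was built as a dynamic module, loading it typically looks like the sketch below; the shared-object names may differ depending on how the module was compiled:

# At the top of nginx.conf (main context), before the http block
load_module modules/ngx_http_brotli_filter_module.so;  # On-the-fly Brotli compression
load_module modules/ngx_http_brotli_static_module.so;  # Serving pre-compressed .br files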

Both methods help reduce bandwidth usage and speed up content delivery.


4. Connection Rate Limits & Request Buffers

Rate limiting and buffering help prevent abuse and manage resource utilization.

Rate Limiting

You can protect your server from abuse and simple DoS attacks by limiting the number of requests per second accepted from each client IP.

Example Configuration

http {
    # Define a shared memory zone for rate limiting
    limit_req_zone $binary_remote_addr zone=one:10m rate=5r/s;

    server {
        location /api/ {
            limit_req zone=one burst=10 nodelay;
            proxy_pass http://backend_api;
        }
    }
}
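
Requests per second is only one dimension; you can also cap the number of concurrent connections each client IP may hold open. A minimal sketch using limit_conn, with an illustrative zone name addr_zone:

http {
    # Track concurrent connections per client IP
    limit_conn_zone $binary_remote_addr zone=addr_zone:10m;

    server {
        location /downloads/ {
            limit_conn addr_zone 10;   # At most 10 simultaneous connections per IP
        }
    }
}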

Request Buffers

Buffers are important for handling large requests without overwhelming your server.

Example Configuration

http {
    client_body_buffer_size 16K;
    client_header_buffer_size 1k;
    large_client_header_buffers 4 8k;
}

These settings help prevent the server from being overwhelmed by large headers or bodies.
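
Two related settings worth tuning alongside the buffers are the maximum accepted body size and the read timeouts for headers and bodies. A minimal sketch with illustrative values:

http {
    client_max_body_size 10m;      # Reject request bodies larger than 10 MB (NGINX returns 413)
    client_body_timeout 12s;       # Max time between two successive reads of the request body
    client_header_timeout 12s;     # Max time to read the complete request header
}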


5. Using HTTP/2 & HTTP/3 (QUIC)

HTTP/2 and HTTP/3 significantly improve performance through multiplexing, header compression, and reduced latency.

Enabling HTTP/2

HTTP/2 is widely supported and straightforward to enable.

Example Configuration

server {
    listen 443 ssl http2;
    server_name example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    # Other SSL configurations

    location / {
        proxy_pass http://backend;
    }
}
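
Note that on NGINX 1.25.1 and later, the http2 parameter on the listen directive is deprecated in favor of a standalone directive; on those versions the equivalent configuration looks like this:

server {
    listen 443 ssl;
    http2 on;                     # Replaces the http2 flag on the listen directive
    server_name example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
}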

Enabling HTTP/3 (QUIC)

HTTP/3 runs over QUIC and requires NGINX built with QUIC/HTTP/3 support, which is available in mainline NGINX 1.25.0 and later; older setups relied on patched builds or third-party modules. It also requires TLS 1.3.

Example Configuration

server {
    listen 443 ssl http2;
    listen 443 quic reuseport;
    server_name example.com;

    ssl_certificate /path/to/cert.pem;
    ssl_certificate_key /path/to/key.pem;
    ssl_protocols TLSv1.3;  # HTTP/3 requires TLS 1.3

    # Enable QUIC-specific options if needed
    add_header Alt-Svc 'h3=":443"';  # Advertise HTTP/3 support

    location / {
        proxy_pass http://backend;
    }
}

This configuration enables HTTP/3 and advertises it via the Alt-Svc header, so clients that support it can switch over for improved performance.


6. Optimizing NGINX for High-Performance Static & Dynamic Content

Different content types have unique performance challenges. Tuning for static content focuses on caching and file delivery, while dynamic content often benefits from effective reverse proxy settings and caching strategies.

Static Content Optimization

Static files benefit from caching and direct file serving.

Example Configuration

server {
    listen 80;
    server_name static.example.com;

    location / {
        root /var/www/static;
        # Cache static files for 30 days
        expires 30d;
        add_header Cache-Control "public";
        access_log off;
    }
}
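
For directories with many small files, caching open file descriptors and file metadata can also help. A minimal sketch with illustrative values:

http {
    open_file_cache max=10000 inactive=20s;  # Cache descriptors/metadata for up to 10,000 files
    open_file_cache_valid 30s;               # Revalidate cached entries every 30 seconds
    open_file_cache_min_uses 2;              # Only cache files requested at least twice
    open_file_cache_errors on;               # Cache lookup errors (e.g., file not found) as well
}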

Dynamic Content Optimization

For dynamic content, efficient proxying and caching are key.

Example Configuration

server {
    listen 80;
    server_name dynamic.example.com;

    location / {
        proxy_pass http://backend_dynamic;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;

        # Enable proxy caching
        proxy_cache my_cache;
        proxy_cache_valid 200 1m;
    }
}

# Define the proxy cache (proxy_cache_path must be declared in the http context)
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m max_size=100m inactive=60m use_temp_path=off;

This helps reduce load on dynamic backend servers by caching responses where appropriate.
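
Two common refinements on top of this are serving stale cache entries while the backend is failing or being refreshed, and collapsing concurrent cache misses into a single upstream request. A minimal sketch, assuming the same my_cache zone defined above:

location / {
    proxy_pass http://backend_dynamic;
    proxy_cache my_cache;
    proxy_cache_valid 200 1m;
    proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
    proxy_cache_lock on;                               # Only one request populates a missing cache entry
    add_header X-Cache-Status $upstream_cache_status;  # HIT / MISS / STALE, useful for debugging
}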


7. Reverse Proxy Optimizations

When NGINX is used as a reverse proxy, tuning buffers and timeouts is critical to handling backend latency and high traffic.

Example Configuration

server {
    listen 80;
    server_name proxy.example.com;

    location / {
        proxy_pass http://backend_service;
        proxy_connect_timeout 5s;
        proxy_send_timeout 10s;
        proxy_read_timeout 10s;
        send_timeout 10s;

        # Buffer settings for efficient proxying
        proxy_buffer_size 4k;
        proxy_buffers 4 32k;
        proxy_busy_buffers_size 64k;
    }
}

This configuration helps keep slow backends or traffic spikes from tying up worker resources or causing premature timeouts and truncated responses.
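
Beyond buffers and timeouts, it often pays to tell NGINX when to give up on an unhealthy backend and retry another one. A minimal sketch, assuming a hypothetical upstream group backend_service with two servers:

upstream backend_service {
    server 10.0.0.1:8080 max_fails=3 fail_timeout=30s;  # Mark the server unavailable for 30s after 3 failures
    server 10.0.0.2:8080 max_fails=3 fail_timeout=30s;
}

server {
    listen 80;
    server_name proxy.example.com;

    location / {
        proxy_pass http://backend_service;
        proxy_next_upstream error timeout http_502 http_503;  # Retry the next server on these errors
        proxy_next_upstream_tries 2;                           # At most 2 attempts per request
    }
}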


8. Other Important Points

Logging & Monitoring

  • Access and Error Logs: Optimize logging by reducing verbosity in production or routing logs to a centralized system.

Example: Minimal Access Logging

http {
    log_format minimal '$remote_addr - $remote_user [$time_local] "$request" $status';
    access_log /var/log/nginx/access.log minimal;
}
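
On busy servers, writing the access log synchronously for every request can itself become a bottleneck; entries can instead be buffered and flushed periodically. A minimal sketch reusing the minimal format above:

http {
    access_log /var/log/nginx/access.log minimal buffer=32k flush=5s;  # Write in 32 KB batches, at most every 5 seconds
}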

Caching Strategies

  • Static and Proxy Caching: Use caching to reduce load and improve response times.

Timeouts and Buffer Settings

  • Adjust timeouts and buffer sizes: Fine-tune these values based on your traffic patterns and typical payload sizes; one example is sketched below.
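
As one concrete example of this kind of tuning, timed-out client connections can be closed immediately so their buffers are freed, and slow readers can be cut off. A minimal sketch with illustrative values:

http {
    reset_timedout_connection on;  # Send a TCP RST and free buffers when a client times out
    send_timeout 10s;              # Close the connection if the client stops reading the response
}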

Security Considerations

  • Rate Limiting and DDoS Mitigation: As seen earlier, use rate limiting and other security measures to protect your server.

Module Usage

  • 3rd Party Modules: Depending on your needs, consider modules for advanced metrics (e.g., NGINX Amplify) or additional compression (e.g., Brotli).

