Advanced Configuration & Use Cases in NGINX

1. NGINX as an API Gateway

Overview

Using NGINX as an API gateway allows you to consolidate request routing, authentication, rate limiting, caching, and logging in a single layer. This centralizes API management and improves overall application performance.

Key Features

  • Path-based Routing: Direct traffic to different backend services based on the URL.

  • Rate Limiting & Authentication: Prevent abuse and secure endpoints.

  • Caching: Reduce latency by serving repeated requests quickly.

Example Configuration

Below is an example of how you might configure NGINX as an API gateway:

http {
    # Define upstream servers for your microservices
    upstream user_service {
        server 10.0.0.1:8080;
        server 10.0.0.2:8080;
    }

    upstream order_service {
        server 10.0.1.1:8080;
        server 10.0.1.2:8080;
    }

    # Rate limiting: the zone must be defined at the http level,
    # then applied per location with limit_req
    limit_req_zone $binary_remote_addr zone=api_rate:10m rate=5r/s;

    server {
        listen 80;
        server_name api.example.com;

        location /users/ {
            limit_req zone=api_rate burst=10;
            proxy_pass http://user_service;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            # Additional headers or auth checks can be added here
        }

        location /orders/ {
            limit_req zone=api_rate burst=10;
            proxy_pass http://order_service;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Why This Configuration?

  • Centralized Control: All API requests are funnelled through one entry point.

  • Scalability: Upstream pools can be scaled horizontally without affecting client connectivity.

  • Security & Throttling: Rate limiting prevents abuse, and you can integrate authentication modules as needed.
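
One way to act on the "Security & Throttling" point without touching the backends is the ngx_http_auth_request module (built in when NGINX is compiled with --with-http_auth_request_module). The sketch below is illustrative: it assumes a hypothetical internal auth service reachable as the upstream auth_service, answering 2xx for valid requests and 401 otherwise.

server {
    listen 80;
    server_name api.example.com;

    location /users/ {
        auth_request /_auth;                 # The sub-request must succeed before proxying
        proxy_pass http://user_service;
    }

    # Internal-only location that fronts the auth service
    location = /_auth {
        internal;
        proxy_pass http://auth_service/validate;
        proxy_pass_request_body off;         # The auth decision only needs the headers
        proxy_set_header Content-Length "";
        proxy_set_header X-Original-URI $request_uri;
    }
}

With this in place, a 401 or 403 from the auth service is returned to the client and the request never reaches the user service.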


2. Handling Large File Uploads & Streaming

Overview

Handling large file uploads or streaming media poses unique challenges. NGINX provides mechanisms to buffer requests, limit upload sizes, and manage streaming efficiently.

Key Considerations

  • client_max_body_size: Sets the maximum allowed size of a client request body.

  • Buffering: Adjusting buffering settings to prevent excessive memory usage.

  • Time-outs: Preventing hung connections during lengthy uploads or streams.

Example Configuration for File Uploads

server {
    listen 80;
    server_name upload.example.com;

    # Increase max file upload size to 500 MB
    client_max_body_size 500M;

    location /upload {
        proxy_pass http://backend_upload_service;
        proxy_request_buffering off;  # Stream the request body to the backend as it arrives
        proxy_buffering off;          # Do not buffer the backend's response either
    }
}
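
The "Time-outs" consideration above is worth spelling out. A minimal sketch of the relevant directives; the 300s values are illustrative and should reflect how long an upload can realistically stall:

server {
    listen 80;
    server_name upload.example.com;

    client_max_body_size 500M;
    client_body_timeout  300s;    # Allowed gap between successive reads of the request body

    location /upload {
        proxy_pass http://backend_upload_service;
        proxy_request_buffering off;
        proxy_send_timeout 300s;  # Allowed gap between successive writes to the backend
        proxy_read_timeout 300s;  # Allowed gap between successive reads from the backend
    }
}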

Example Configuration for Streaming

server {
    listen 80;
    server_name stream.example.com;

    location /live/ {
        # Enable caching and buffering for smoother streaming
        proxy_buffering on;
        proxy_buffers 16 4k;
        proxy_busy_buffers_size 8k;
        proxy_pass http://streaming_backend;
    }
}
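
The comment above mentions caching; here is a minimal sketch of how it could be wired in, assuming the backend serves cacheable segment files (for example HLS chunks). The cache path, zone name, and TTL are illustrative:

http {
    proxy_cache_path /var/cache/nginx/stream levels=1:2 keys_zone=stream_cache:10m
                     max_size=1g inactive=60s use_temp_path=off;

    server {
        listen 80;
        server_name stream.example.com;

        location /live/ {
            proxy_cache stream_cache;
            proxy_cache_valid 200 5s;   # Short TTL keeps live segments fresh
            proxy_cache_lock on;        # Collapse concurrent misses into one upstream fetch
            proxy_buffering on;
            proxy_pass http://streaming_backend;
        }
    }
}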

Why These Configurations?

  • Optimized Resource Usage: Adjusting buffer settings ensures that large files or streams do not overwhelm the server’s memory.

  • User Experience: Proper streaming configurations reduce latency and provide a smoother playback experience.

  • Flexibility: Disabling buffering for file uploads can minimize delays when immediate processing by the backend is required.


3. Using NGINX for Microservices with Service Discovery

Overview

In a microservices architecture, services can dynamically scale, come online, or be decommissioned. Integrating NGINX with service discovery mechanisms ensures that traffic is always directed to healthy instances.

Techniques

  • Dynamic Upstream Resolution: Using DNS or a service discovery tool like Consul/etcd.

  • Reload Automation: Automatically reload configurations when service endpoints change (a reload sketch appears at the end of this section).

Example Configuration Using DNS-Based Service Discovery

http {
    # The resolver must be able to answer *.service.consul queries,
    # e.g. a local Consul agent (its DNS interface defaults to port 8600)
    resolver 127.0.0.1:8600 valid=30s;

    upstream microservice_backend {
        # Names in an upstream block are resolved once, when the configuration is loaded;
        # runtime re-resolution needs the commercial "resolve" parameter or a variable
        # in proxy_pass (see the sketch after this example)
        server microservice1.service.consul:8080;
        server microservice2.service.consul:8080;
    }

    server {
        listen 80;
        server_name micro.example.com;

        location /api/ {
            proxy_pass http://microservice_backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}
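
As noted in the comments above, open-source NGINX resolves upstream server names only when the configuration is loaded. A sketch of one way to get true runtime re-resolution (so valid=30s actually matters) is to put the name in a variable, which makes proxy_pass consult the resolver per request; the service name is illustrative:

server {
    listen 80;
    server_name micro.example.com;

    resolver 127.0.0.1:8600 valid=30s;

    location /api/ {
        # A variable in proxy_pass forces name resolution at request time via "resolver"
        set $microservice_backend http://microservice.service.consul:8080;
        proxy_pass $microservice_backend;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}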

Why This Approach?

  • High Availability: Automatic resolution of service instances minimizes downtime.

  • Dynamic Scalability: As services scale up or down, NGINX continues routing requests to available instances.

  • Simplicity: Relying on DNS for discovery reduces the complexity of maintaining static configurations.
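
As mentioned under "Reload Automation", another common pattern is to render the upstream configuration from the service catalog and reload NGINX whenever it changes, for example with consul-template. This is only a sketch; the template and output paths are illustrative:

# Re-render upstreams.conf from the Consul catalog and reload NGINX on every change
consul-template \
  -template "/etc/nginx/templates/upstreams.ctmpl:/etc/nginx/conf.d/upstreams.conf:nginx -s reload"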


4. Canary Deployments & Blue-Green Deployment Strategies

Overview

Deployment strategies like canary and blue-green enable safe, incremental rollouts. NGINX can facilitate these strategies by directing a portion of traffic to new versions while the bulk of traffic continues to hit the stable release.

Techniques

  • Weighted Load Balancing: Distributing traffic based on specified weights.

  • Split Clients: Routing a percentage of users to different upstream servers.

Example: Weighted Canary Deployment

http {
    upstream backend {
        server stable_backend:8080 weight=90;  # ~90% of the traffic
        server canary_backend:8080 weight=10;  # ~10% of the traffic
    }

    server {
        listen 80;
        server_name deploy.example.com;

        location / {
            proxy_pass http://backend;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Example: Blue-Green Deployment Using Split Clients

http {
    split_clients "${remote_addr}AAA" $upstream_group {
        95%       stable;
        *         green;
    }

    upstream stable_backend {
        server stable1:8080;
        server stable2:8080;
    }

    upstream green_backend {
        server green1:8080;
        server green2:8080;
    }

    server {
        listen 80;
        server_name bg.example.com;

        location / {
            # proxy_pass with a variable selects the matching upstream block at request time,
            # avoiding "if" blocks, which are error-prone inside location context
            proxy_pass http://$upstream_group;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}

Why These Strategies?

  • Risk Mitigation: Canary deployments help catch issues with new releases without affecting all users.

  • Seamless Rollback: Blue-green deployments enable immediate rollback by simply switching traffic between environments.

  • User Segmentation: Advanced routing ensures a controlled exposure of new features to subsets of users.
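
The "User Segmentation" point can be taken a step further by pinning specific users (for example, opted-in beta testers) to the canary rather than splitting purely by percentage. A minimal sketch using a map on a cookie; the cookie name and hostnames are illustrative:

http {
    # Requests carrying beta_user=1 go to the canary pool; everyone else stays on stable
    map $cookie_beta_user $deploy_group {
        default   stable_backend;
        "1"       canary_backend;
    }

    upstream stable_backend {
        server stable1:8080;
    }

    upstream canary_backend {
        server canary1:8080;
    }

    server {
        listen 80;
        server_name deploy.example.com;

        location / {
            proxy_pass http://$deploy_group;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
        }
    }
}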


5. Running NGINX in a Containerized Environment (Docker)

Overview

Containerization has become the norm for deploying applications. Running NGINX in Docker or Kubernetes environments requires careful configuration to ensure portability, scalability, and manageability.

Best Practices

  • Immutable Infrastructure: Use versioned Docker images for consistency.

  • Health Checks: Ensure your NGINX container is monitored and restarted if unhealthy (a HEALTHCHECK sketch follows the build and run commands below).

Example: Dockerfile for NGINX

FROM nginx:stable-alpine
COPY nginx.conf /etc/nginx/nginx.conf
EXPOSE 80
COPY /myapp/ /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
  • FROM nginx:stable-alpine: Uses the official Alpine-based NGINX image.

  • COPY nginx.conf /etc/nginx/nginx.conf: Replaces the default configuration with your custom file.

  • EXPOSE 80: Documents that the container listens on port 80.

  • COPY /myapp/ /usr/share/nginx/html: Copies your local application files into the image.

  • CMD ["nginx", "-g", "daemon off;"]: Runs NGINX in the foreground so the container keeps running.

Building the Docker image:

docker build -t myapp:latest .

Running the Docker container:

docker run -d -p 80:80 myapp:latest
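
The "Health Checks" best practice above can be baked into the image itself. A minimal sketch using Docker's HEALTHCHECK instruction, appended to the Dockerfile above; it relies on BusyBox wget, which is available in the Alpine-based image:

# Mark the container unhealthy if NGINX stops answering on port 80
HEALTHCHECK --interval=30s --timeout=3s --retries=3 \
  CMD wget -qO- http://localhost/ > /dev/null || exit 1

An orchestrator or monitoring tool can then act on the reported health status.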

Running an Angular App in NGINX with Docker

# Stage 1: Compile and Build angular codebase
# Use official node image as the base image
FROM node:latest as build

# Set the working directory 
WORKDIR /usr/local/app

# Add the source code to app
COPY ./ /usr/local/app/

# Install all the dependencies
RUN npm install

# Generate the build of the application
RUN npm run build

# Stage 2: Serve app with nginx server

# Use official nginx image as the base image
FROM nginx:stable-alpine

# Copy the build output to replace the default nginx contents.
COPY --from=build /usr/local/app/dist/sample-angular-app /usr/share/nginx/html

# Expose port 80
EXPOSE 80
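
# No CMD is needed here: the nginx base image already starts NGINX in the foreground by default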

Building the Docker image:

docker build -t my-angular-app:latest .

Running the Docker container:

docker run -d -p 4200:80 my-angular-app:latest

The app will be available at localhost:4200 on your local machine; in production you can map the container to port 80 instead.

docker-compose.yml (this file should be in the same directory as the Dockerfile)

version: '3.8'

services:
  angular-app:
    # Build the Angular app with Node and serve it with NGINX (full multi-stage Dockerfile)
    build:
      context: .
      dockerfile: Dockerfile
    image: my-angular-app:latest
    # Map container port 80 to port 4200 on your local machine
    ports:
      - "4200:80"
    # Restart policy to restart container unless manually stopped
    restart: unless-stopped

Why Containerization?

  • Portability: Containers allow your NGINX configuration to run consistently across development, testing, and production.

  • Scalability: Orchestration platforms like Kubernetes manage scaling and load balancing for you.

  • Automated Rollouts: Integration with CI/CD pipelines ensures seamless updates.

