Basic Rate Limiting

Get started with RouteMQ's rate limiting middleware to control message processing rates and protect your application from abuse.

Quick Setup

The simplest way to add rate limiting to your routes:

from app.middleware.rate_limit import RateLimitMiddleware

# Basic rate limiting - 100 requests per minute
rate_limiter = RateLimitMiddleware(
    max_requests=100,
    window_seconds=60
)

# Apply to a route
router.on("api/{endpoint}", 
          ApiController.handle,
          middleware=[rate_limiter])

Configuration Options

Basic Parameters
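
The two parameters shown in Quick Setup are the core configuration. A minimal sketch (later sections cover key generation and error handling separately):

```python
from app.middleware.rate_limit import RateLimitMiddleware

rate_limiter = RateLimitMiddleware(
    max_requests=100,   # maximum messages allowed per window
    window_seconds=60,  # length of the window, in seconds
)
```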

Common Configurations

API Endpoints
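
A typical configuration for authenticated API traffic; the limit values here are illustrative, not recommendations from the library:

```python
from app.middleware.rate_limit import RateLimitMiddleware

# Moderate limit for authenticated API endpoints
api_limiter = RateLimitMiddleware(max_requests=100, window_seconds=60)
```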

IoT Device Data
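
Telemetry-heavy devices usually need a higher ceiling than interactive clients. An illustrative configuration (values are assumptions, tune to your fleet):

```python
from app.middleware.rate_limit import RateLimitMiddleware

# Higher ceiling for chatty telemetry devices
telemetry_limiter = RateLimitMiddleware(max_requests=600, window_seconds=60)
```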

Public Endpoints
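
Unauthenticated topics are the most exposed, so a conservative limit makes sense. An illustrative sketch:

```python
from app.middleware.rate_limit import RateLimitMiddleware

# Conservative limit for public, unauthenticated topics
public_limiter = RateLimitMiddleware(max_requests=20, window_seconds=60)
```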

Understanding Rate Limiting

How It Works

  1. Request Arrives: A message is received on a topic

  2. Key Generation: A unique key is generated (default: based on topic)

  3. Count Check: Current request count is checked against the limit

  4. Decision: Request is either allowed or blocked

  5. Counter Update: Request count is incremented if allowed
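
The five steps above can be sketched as a minimal fixed-window counter. This is a toy illustration of the algorithm, not the middleware's actual implementation:

```python
import time

class FixedWindowLimiter:
    """Toy fixed-window counter mirroring the five steps above."""

    def __init__(self, max_requests: int, window_seconds: int):
        self.max_requests = max_requests
        self.window_seconds = window_seconds
        # key -> (window_start, count)
        self._counts: dict[str, tuple[float, int]] = {}

    def allow(self, key: str) -> bool:
        now = time.monotonic()
        window_start, count = self._counts.get(key, (now, 0))  # key generation done by caller
        if now - window_start >= self.window_seconds:
            window_start, count = now, 0                       # window expired: reset
        if count >= self.max_requests:                         # count check + decision
            return False
        self._counts[key] = (window_start, count + 1)          # counter update
        return True

limiter = FixedWindowLimiter(max_requests=3, window_seconds=60)
print([limiter.allow("sensors/temp") for _ in range(5)])
# → [True, True, True, False, False]
```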

Rate Limit Response

When the rate limit is exceeded, the middleware returns:
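
A sketch of a blocked-response payload. The exact fields depend on the middleware version, so treat these names as assumptions:

```python
# Hypothetical shape of the blocked result, for illustration only
blocked = {
    "allowed": False,
    "error": "Rate limit exceeded",
    "retry_after_seconds": 42,  # time until the current window resets
}
```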

Context Information

Rate limiting information is added to the context:
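
A sketch of what the middleware might attach to the context. The `rate_limit` key and field names are assumptions, not the library's documented API:

```python
context = {}
context["rate_limit"] = {
    "limit": 100,          # configured max_requests
    "remaining": 57,       # requests left in the current window
    "window_seconds": 60,  # configured window length
}
```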

Storage Backends

For distributed applications, use Redis for rate limiting:
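
A sketch of a Redis-backed configuration. The `use_redis` and `key_prefix` parameter names are assumptions for illustration; check the middleware source for the actual Redis options:

```python
from app.middleware.rate_limit import RateLimitMiddleware

rate_limiter = RateLimitMiddleware(
    max_requests=100,
    window_seconds=60,
    use_redis=True,          # hypothetical flag: prefer the shared Redis backend
    key_prefix="ratelimit:", # hypothetical: namespace for Redis keys
)
```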

Benefits of Redis backend:

  • Shared across multiple application instances

  • Persistent across application restarts

  • High performance with atomic operations

  • Supports advanced algorithms

Memory Backend (Fallback)

When Redis is unavailable, an in-memory fallback is used:

Memory backend characteristics:

  • Per-instance rate limiting only

  • Lost on application restart

  • No coordination between instances

  • Suitable for single-instance deployments

Error Handling

Custom Error Messages
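
A sketch of overriding the message returned to blocked clients. The `error_message` parameter name is an assumption; verify it against your version of the middleware:

```python
from app.middleware.rate_limit import RateLimitMiddleware

rate_limiter = RateLimitMiddleware(
    max_requests=100,
    window_seconds=60,
    error_message="Too many requests, please slow down",  # hypothetical parameter
)
```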

Graceful Degradation
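
The usual degradation strategy is to "fail open": if the backend check itself errors (for example, Redis is unreachable), allow the message rather than drop traffic. A minimal sketch of that idea, independent of the middleware's internals:

```python
def allow_with_fallback(check_backend, key: str) -> bool:
    """Fail open: if the backend check raises, allow the message through."""
    try:
        return check_backend(key)
    except ConnectionError:
        return True  # degrade gracefully rather than blocking all traffic

def broken_backend(key):
    raise ConnectionError("redis unreachable")

print(allow_with_fallback(broken_backend, "api/users"))  # → True
```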

Monitoring Rate Limits

Logging

Rate limiting events are automatically logged by the middleware.

Context Information

Access rate limit status in your handlers:
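
A sketch of a handler reading that status. The `context` argument and its `rate_limit` key are assumptions about what the middleware injects; adapt the key names to your version:

```python
async def handle(topic, payload, context):
    info = context.get("rate_limit", {})
    remaining = info.get("remaining")
    if remaining is not None and remaining < 10:
        return f"only {remaining} requests left in this window"
    return "ok"
```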

Testing Rate Limits

Unit Testing
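
A self-contained sketch of the kinds of assertions worth writing, using a minimal counting limiter in place of the real middleware (whose internals may differ):

```python
class CountingLimiter:
    """Minimal stand-in for unit-test purposes."""

    def __init__(self, max_requests: int):
        self.max_requests = max_requests
        self.counts: dict[str, int] = {}

    def allow(self, key: str) -> bool:
        self.counts[key] = self.counts.get(key, 0) + 1
        return self.counts[key] <= self.max_requests

def test_blocks_after_limit():
    limiter = CountingLimiter(max_requests=2)
    assert limiter.allow("t") and limiter.allow("t")
    assert not limiter.allow("t")

def test_keys_are_independent():
    limiter = CountingLimiter(max_requests=1)
    assert limiter.allow("a")
    assert limiter.allow("b")  # a different key gets its own budget

test_blocks_after_limit()
test_keys_are_independent()
```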

Integration Testing
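
For an end-to-end check, publish more messages than the limit against a running broker and verify (via logs or handler side effects) that the excess is blocked. A sketch using paho-mqtt, which requires a live broker and is not runnable in isolation (the 1.x client API is shown; paho-mqtt 2.x additionally requires a `callback_api_version` argument):

```python
import paho.mqtt.client as mqtt

client = mqtt.Client()
client.connect("localhost", 1883)

# Publish past the limit, then inspect logs/handler effects for blocks
for i in range(150):
    client.publish("api/users", f'{{"n": {i}}}')
client.disconnect()
```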

Performance Considerations

Key Generation Efficiency

The default key generator uses the topic, which means rate limiting is applied per topic:
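
To rate-limit by something other than the raw topic (for example, per device), a custom key function is the usual approach. Whether your version exposes a pluggable key-generator hook is an assumption to verify; the function itself is a plain sketch:

```python
def device_key(topic: str) -> str:
    """Derive a per-device key, assuming topics like "devices/<id>/telemetry"."""
    parts = topic.split("/")
    return f"device:{parts[1]}" if len(parts) > 1 else f"topic:{topic}"

print(device_key("devices/sensor-42/telemetry"))  # → device:sensor-42
```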

Redis Connection Pooling

Rate limiting middleware uses the shared Redis connection pool from redis_manager, ensuring efficient connection reuse.

Memory Usage

In-memory fallback automatically cleans up expired entries to prevent memory leaks:
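
The cleanup idea can be sketched as dropping any entry whose window has fully elapsed. This is an illustration of the mechanism, not the middleware's actual code:

```python
def cleanup_expired(counts: dict, window_seconds: float, now: float) -> None:
    """Drop entries whose window has fully elapsed."""
    expired = [k for k, (start, _) in counts.items() if now - start >= window_seconds]
    for k in expired:
        del counts[k]

counts = {"old": (0.0, 5), "fresh": (100.0, 2)}
cleanup_expired(counts, window_seconds=60, now=120.0)
print(counts)  # → {'fresh': (100.0, 2)}
```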

Common Use Cases

API Rate Limiting

Device Telemetry

Public Endpoints

Troubleshooting

Common Issues

Rate limiting not working:

  • Check if Redis is enabled and accessible

  • Verify fallback_enabled setting

  • Check logs for Redis connection errors

Different behavior between instances:

  • Ensure all instances use the same Redis backend

  • Check Redis key prefix consistency

  • Verify Redis configuration

Memory usage growing:

  • Memory fallback automatically cleans up expired entries

  • Check cleanup interval settings

  • Monitor memory usage in single-instance deployments

Debug Logging

Enable debug logging to troubleshoot rate limiting:
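
A sketch using the standard `logging` module; the logger name is an assumption based on the import path shown in Quick Setup, so substitute the middleware's actual module path:

```python
import logging

logging.basicConfig(level=logging.INFO)
# Raise only the rate limiter's logger to DEBUG
logging.getLogger("app.middleware.rate_limit").setLevel(logging.DEBUG)
```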

Next Steps
