API Rate Limiting with Leaky Bucket

Overview

To ensure fair usage and maintain optimal performance of the Smartbills API, we implement a rate limiting mechanism based on the Leaky Bucket algorithm. This method controls the rate of incoming requests, providing a smooth and predictable flow of API traffic.

Leaky Bucket Algorithm

The leaky bucket algorithm is a traffic shaping mechanism that allows a steady flow of requests while preventing bursts of traffic that can overload the server. It maintains a constant rate of outgoing requests, where excess requests are either queued or rejected based on predefined limits.

How It Works

  1. Bucket Capacity: The bucket has a fixed capacity, representing the maximum number of requests that can be stored temporarily.
  2. Leak Rate: The bucket leaks at a constant rate, allowing a specific number of requests to be processed per second.
  3. Request Handling:
    • When a request is received, it is added to the bucket if there is space available.
    • If the bucket is full, the request is rejected or queued, depending on the implementation.
    • Requests are processed at the leak rate, ensuring a steady flow without sudden spikes.
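The steps above can be sketched in a few lines of Python. This is an illustrative client-side model, not the server's actual implementation; the class and attribute names are our own:

```python
import time

class LeakyBucket:
    """Minimal leaky bucket sketch (illustrative names).

    Incoming requests fill a bucket of fixed capacity; the bucket
    drains ("leaks") at a constant rate. A request is accepted only
    if there is room for it after draining.
    """

    def __init__(self, capacity: int, leak_rate: float):
        self.capacity = capacity      # max requests held at once
        self.leak_rate = leak_rate    # requests drained per second
        self.level = 0.0              # current bucket fill
        self.last_leak = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Drain the bucket for the time elapsed since the last check.
        elapsed = now - self.last_leak
        self.level = max(0.0, self.level - elapsed * self.leak_rate)
        self.last_leak = now
        if self.level + 1 <= self.capacity:
            self.level += 1
            return True
        return False  # bucket full: reject (a 429 in the API's terms)

# The documented limits: 100-request capacity, draining 100 per minute.
bucket = LeakyBucket(capacity=100, leak_rate=100 / 60)
```

Because draining is computed lazily from the elapsed time, the bucket needs no background timer: each `allow()` call first leaks, then tries to add the new request.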

Rate Limiting Policy

Limits

  • Requests Per Minute (RPM): Each user can make up to 100 requests per minute.
  • Burst Capacity: The bucket can hold up to 100 requests at a time.
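Taken together, these numbers mean the bucket drains at 100 requests per minute, i.e. one request every 0.6 seconds, and a completely full bucket clears in one minute. A quick sanity check of that arithmetic:

```python
RPM = 100       # requests per minute (from the policy above)
CAPACITY = 100  # burst capacity (from the policy above)

leak_rate = RPM / 60.0              # requests drained per second
interval = 1.0 / leak_rate          # seconds between drained requests
drain_time = CAPACITY / leak_rate   # seconds to empty a full bucket

print(f"{leak_rate:.2f} req/s, one request every {interval:.1f} s")
print(f"a full bucket drains in {drain_time:.0f} s")
```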

Response Codes

When a request exceeds the rate limit, the API will respond with an appropriate HTTP status code:

  • 429 Too Many Requests: This status indicates that the user has exceeded the allowed rate limit. The response will include a Retry-After header specifying the number of seconds to wait before sending a new request.

Example Response

HTTP/1.1 429 Too Many Requests
Content-Type: application/json
Retry-After: 60
 
{
    "status": 429,
    "code": "RATE_LIMITED",
    "error": "Rate limit exceeded",
    "message": "You have exceeded your request limit. Please try again later."
}
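On the client side, the Retry-After header tells you exactly how long to pause before retrying. A minimal, library-agnostic helper (a sketch; the function name and the 60-second fallback are our own choices):

```python
def retry_delay(status: int, headers: dict, default: float = 60.0) -> float:
    """Return how many seconds to wait before retrying a request.

    For a 429 response, reads the Retry-After header (seconds) and
    falls back to `default` when it is missing or malformed. Any
    other status returns 0.0 (no wait required).
    """
    if status != 429:
        return 0.0
    try:
        return max(0.0, float(headers.get("Retry-After", default)))
    except (TypeError, ValueError):
        return default

# For the example response above:
retry_delay(429, {"Retry-After": "60"})  # → 60.0
```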

Best Practices

To optimize your usage of the API and avoid hitting the rate limit:

  • Batch Requests: Whenever possible, combine multiple operations into a single request.
  • Backoff Strategies: Implement exponential backoff in your application to gradually increase the wait time between retries after receiving a 429 response.
  • Monitor Usage: Keep track of your request rates to stay within limits and ensure efficient API usage.
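One common way to implement the backoff strategy above is exponential backoff with "full jitter": the wait before retry N is a random value between zero and an exponentially growing cap, which spreads retries out instead of having every client retry at the same instant. A sketch (the function name and default parameters are our own):

```python
import random

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 60.0) -> float:
    """Exponential backoff with full jitter.

    Returns a random wait between 0 and min(cap, base * 2**attempt)
    seconds, where `attempt` is the 0-based retry count.
    """
    return random.uniform(0.0, min(cap, base * (2 ** attempt)))
```

A retry loop would call this after each 429, sleeping for `backoff_delay(attempt)` seconds (or the Retry-After value, whichever is larger) before trying again.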