Rate Limiting

The REST API service relies on the existing N-central core engine to provide the data. Some of these core engine functions are very resource intensive and should not be run in parallel. To support this, N-central applies simple rate limiting with a high threshold.

Rate limiting for N-central's REST APIs

  • Per-endpoint:

    • Limit on the rate of calls to the endpoint within a set amount of time.

    • Limit on the number of concurrent (in-flight) requests.

Each endpoint has a specific rate limit applied to it: the limit is on the number of concurrent calls allowed for that endpoint. The limits vary by endpoint, as some are more resource intensive than others, and range anywhere from 3 to 50 concurrent calls.
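One way to stay under a per-endpoint concurrent-call limit is to cap the number of in-flight requests on the client side. The following is a minimal Python sketch using a semaphore; the base URL, access token, endpoint path, and the limit of 3 concurrent calls are illustrative assumptions, not values defined by the API.

    import concurrent.futures
    import threading

    import requests

    # Hypothetical values for illustration; use your own N-central server URL,
    # access token, endpoint path, and the documented limit for that endpoint.
    BASE_URL = "https://ncentral.example.com/api"
    ACCESS_TOKEN = "<access token>"
    MAX_CONCURRENT_CALLS = 3  # assumed per-endpoint concurrent-call limit

    # The semaphore caps how many requests to this endpoint are in flight at
    # once, keeping the client under the endpoint's concurrent-call limit.
    semaphore = threading.Semaphore(MAX_CONCURRENT_CALLS)

    def get_device(device_id: int) -> dict:
        with semaphore:
            response = requests.get(
                f"{BASE_URL}/devices/{device_id}",  # hypothetical endpoint
                headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                timeout=30,
            )
            response.raise_for_status()
            return response.json()

    # Fetch many devices, but never more than MAX_CONCURRENT_CALLS at a time.
    with concurrent.futures.ThreadPoolExecutor(max_workers=10) as pool:
        results = list(pool.map(get_device, range(1, 51)))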

Handling Rate-Limited Requests

If a limit is reached, an HTTP 429 status code is returned. The caller is responsible for handling this status code and implementing a retry mechanism.

For all endpoints, the caller should be able to handle this error and retry after waiting. The recommended mechanism for handling a 429 response is to back off exponentially. For example, wait 2 seconds, then retry. If you still receive the 429 response, wait 5 seconds and retry. If you still receive the 429 response, wait 10 seconds and retry, and so on.
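A minimal Python sketch of such a retry loop is shown below. It doubles the wait after each 429 response; the base URL, access token, endpoint path, and starting delay are illustrative assumptions, and the exact delays and retry count should be tuned to your own integration.

    import time

    import requests

    # Hypothetical URL and token for illustration only.
    BASE_URL = "https://ncentral.example.com/api"
    ACCESS_TOKEN = "<access token>"

    def get_with_backoff(url: str, max_retries: int = 5) -> requests.Response:
        """Retry a GET request, backing off exponentially on HTTP 429."""
        delay = 2  # seconds to wait before the first retry
        for _ in range(max_retries):
            response = requests.get(
                url,
                headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
                timeout=30,
            )
            if response.status_code != 429:
                return response
            # Rate limited: wait, then retry with a longer delay.
            time.sleep(delay)
            delay *= 2
        raise RuntimeError(f"Still rate limited after {max_retries} retries: {url}")

    response = get_with_backoff(f"{BASE_URL}/devices")  # hypothetical endpoint
    response.raise_for_status()
    print(response.json())

Backing off this way spaces out retries so that a temporarily saturated endpoint is given progressively more time to drain its in-flight requests instead of being hammered with immediate re-attempts.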