Rate limits
Per-plan rate limits on the GigaQR REST API, the headers returned on every response, and how to back off.
Every REST API response includes two headers that tell you exactly where you stand:
```
x-ratelimit-limit: 120
x-ratelimit-remaining: 117
```

Limits are enforced per key, per minute. If you're under the limit, requests succeed. If you're over, the next request returns 429 with:
```json
{
  "error": "Rate limit exceeded",
  "code": "rate_limited"
}
```

Per-plan caps
| Plan | REST API (requests/min) | Bulk batch size |
|---|---|---|
| Free | — | — |
| Starter | 60 | 100 per job |
| Pro | 120 | 1,000 per job |
| Business | 600 | 10,000 per job |
Hitting a 429 does not count as an error for billing or audit purposes: the request is rejected before it ever touches the database.
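Because the x-ratelimit-remaining header arrives on every response, you can also throttle pre-emptively instead of waiting for a 429. A minimal sketch in Python; the headers dict and the threshold of 5 are illustrative choices, not part of the API contract:

```python
def should_pause(headers: dict, threshold: int = 5) -> bool:
    """Return True when x-ratelimit-remaining has dropped below the
    threshold, so the caller can slow down before hitting a 429.
    A missing header is treated conservatively as zero remaining."""
    remaining = int(headers.get("x-ratelimit-remaining", "0"))
    return remaining < threshold

# With the header values shown above: 117 remaining out of 120.
headers = {"x-ratelimit-limit": "120", "x-ratelimit-remaining": "117"}
should_pause(headers)                          # False: plenty of headroom
should_pause({"x-ratelimit-remaining": "3"})   # True: nearly exhausted
```

Checking the header on each response costs nothing extra, since it is already present on every reply.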
Backing off
Start with a simple exponential backoff: on a 429, wait 1 second, then 2, then 4, up to a cap of 30 seconds. Reset the counter on the next successful response. For most GigaQR integrations this is more than enough; the only workload that regularly saturates the limit is bulk analytics export, and that's better served by /scans/export than by paginating /qr/{id}/scans.
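The backoff schedule above can be sketched in a few lines of Python. The request_with_backoff helper and its send callable are illustrative names, not GigaQR client code; plug in whatever HTTP call your integration makes:

```python
import time

def backoff_delay(attempt: int, base: float = 1.0, cap: float = 30.0) -> float:
    """Delay for the Nth consecutive 429: 1s, 2s, 4s, ... capped at 30s."""
    return min(base * (2 ** attempt), cap)

def request_with_backoff(send, max_attempts: int = 8):
    """send() is any zero-argument callable returning a response object
    with a .status_code. Retries on 429 with exponential backoff; any
    non-429 response resets the schedule by returning immediately."""
    for attempt in range(max_attempts):
        resp = send()
        if resp.status_code != 429:
            return resp
        time.sleep(backoff_delay(attempt))
    raise RuntimeError(f"still rate limited after {max_attempts} attempts")
```

Wrapping each call site in request_with_backoff keeps the retry logic in one place, so the 1/2/4-second schedule stays consistent across your integration.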
Long-running imports
If you're creating thousands of QRs in one push, use POST /qr/bulk instead of looping POST /qr. Bulk jobs are counted as a single request against the rate limit, and the heavy lifting happens asynchronously on our side.
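If your source list is larger than your plan's batch cap, split it into multiple bulk jobs. A sketch of the chunking, assuming the Pro cap of 1,000 per job; the payload shape ({"items": [{"url": ...}]}) is illustrative, since the bulk request schema isn't shown here:

```python
def chunk_bulk_jobs(urls: list[str], batch_size: int = 1_000) -> list[dict]:
    """Split a large list of target URLs into POST /qr/bulk payloads,
    each within the plan's batch-size cap (1,000 per job on Pro).
    Each job counts as a single request against the rate limit."""
    return [
        {"items": [{"url": u} for u in urls[i:i + batch_size]]}
        for i in range(0, len(urls), batch_size)
    ]

# 2,500 URLs on a Pro plan become three jobs: 1,000 + 1,000 + 500.
jobs = chunk_bulk_jobs([f"https://example.com/{n}" for n in range(2500)])
len(jobs)  # 3
```

Three bulk jobs consume three requests from your per-minute budget, versus 2,500 for the equivalent loop over POST /qr.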
When the limit is wrong for you
Business-plan customers with predictable high-volume traffic can request a raised per-minute cap. Email [email protected] with your expected volume and use case.