Every API key has a per-minute request budget. Default: 600 req/min.

Response headers

Every 2xx response includes:
X-RateLimit-Limit: 600
X-RateLimit-Remaining: 437
X-RateLimit-Remaining is how many more requests fit in the current minute window.
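A client can read these headers to back off before ever hitting a 429. A minimal sketch, using a plain dict standing in for your HTTP client's response headers:

```python
# Sketch: inspecting the rate-limit headers on a 2xx response.
# `headers` stands in for response.headers from your HTTP client.
headers = {
    "X-RateLimit-Limit": "600",
    "X-RateLimit-Remaining": "437",
}

limit = int(headers["X-RateLimit-Limit"])
remaining = int(headers["X-RateLimit-Remaining"])

# Slow down proactively when fewer than 10% of requests remain
# in the current minute window.
should_slow_down = remaining < limit * 0.1
```

The 10% threshold is an arbitrary example; pick whatever margin suits your workload.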

429 responses

HTTP/1.1 429 Too Many Requests
Retry-After: 43
Content-Type: application/json

{"detail":"Rate limit exceeded"}
Retry-After is seconds until the next minute boundary. A well-behaved client sleeps for that duration and retries. Our SDKs do this automatically:
  • Python: up to 3 automatic retries on 429, respecting Retry-After.
  • TypeScript: same behavior.
  • Override with retry_on_429=False if you’d rather handle it yourself.

Fixed-window burst

The algorithm is a per-minute fixed-window counter — not a rolling window. At the minute boundary, 2× the limit can pass in a single second: the last second of minute A + the first second of minute B. For most customers this doesn’t matter. If you need strict smoothing, implement a client-side token bucket on top.
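A client-side token bucket along these lines will smooth out the boundary burst. This is a sketch, not library code — the capacity of 10 is an arbitrary example burst allowance:

```python
import time

class TokenBucket:
    """Client-side smoothing over the server's fixed-window counter.

    Refills at rate_per_min / 60 tokens per second; acquire() blocks
    until a token is available.
    """
    def __init__(self, rate_per_min=600, capacity=10):
        self.rate = rate_per_min / 60.0   # tokens per second
        self.capacity = capacity          # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def acquire(self):
        # Refill based on elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        # Block until a whole token is available.
        if self.tokens < 1:
            time.sleep((1 - self.tokens) / self.rate)
            self.tokens = 1
        self.tokens -= 1
```

Call `bucket.acquire()` before each request; throughput converges on `rate_per_min` regardless of where minute boundaries fall.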

Tier options

Tier           rate_limit_per_min   Typical use
Free / trial   60                   Development, small-scale testing
Paid default   600                  Production CRM sync, warehouse pipelines
Enterprise     6,000                High-volume campaigns, real-time dashboards
Contact us for limits above 6,000/min. Most real workloads stay well under 600/min when SDK pagination is used correctly.

If Redis is down

We fail open during our own infrastructure outages — requests that would hit the rate limiter get allowed through. This is a deliberate tradeoff: a DoS control shouldn’t block paying customers when we’re the ones broken. We monitor for the condition and fall back to load-balancer rate limiting in emergencies.
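The fail-open pattern itself is simple. An illustrative sketch (not our server code) in which `limiter` is any hypothetical backend whose `check()` may raise when the store is unreachable:

```python
def allow_request(key, limiter):
    """Fail open: if the rate-limit backend errors, admit the request.

    `limiter.check(key)` returns True when the key is under its limit;
    it raises ConnectionError when the backing store is down.
    """
    try:
        return limiter.check(key)
    except ConnectionError:
        # Backend unreachable: let the request through rather than
        # turn an infrastructure outage into a customer-facing outage.
        return True
```

The tradeoff is exactly as described above: correctness of the limit is sacrificed during an outage so that availability for legitimate callers is not.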