Rate limits

Per-endpoint rate limits for the Partner API v1. Each endpoint has its own bucket.

Overview

Rate limits on the Partner API are applied per API key, per endpoint. Each endpoint has its own independent counter — hitting the limit on POST /api/v1/requests/create does not consume quota on GET /api/v1/requests or GET /api/v1/merchants.

When you exceed an endpoint's limit, the API returns 429 rate_limited. Quota recovers as the 1-minute rolling window advances, so after 60 seconds without calls the full limit is available again.

This page covers the partner-facing v1 API only. Rate limits on the pay page, merchant dashboard, and admin endpoints are in Reference → Rate limits.

Per-endpoint limits

All limits are per API key, 1-minute rolling window.

Endpoint                          Limit     Scope
POST /api/v1/requests/create      60 / min  Create a request
GET /api/v1/requests/:id          60 / min  Get a request
GET /api/v1/requests              60 / min  List requests
POST /api/v1/requests/:id/cancel  60 / min  Cancel a request
POST /api/v1/merchants            30 / min  Create a merchant
GET /api/v1/merchants             60 / min  List merchants
GET /api/v1/merchants/:id         60 / min  Get a merchant

Effective aggregate capacity per key, assuming evenly distributed calls: 390 requests per minute across the v1 surface (six endpoints at 60 / min plus one at 30 / min). In practice most integrations use 1–2 endpoints heavily and stay well under any individual limit.
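Because every endpoint has its own independent window, a client can guard each one separately. The sketch below is a hypothetical client-side throttle (not part of any SDK) that tracks call timestamps per endpoint against the limits in the table above and waits until a call would fit inside the server's 1-minute rolling window:

```python
import time
from collections import defaultdict, deque

# Limits from the table above; any endpoint not listed falls back to 60 / min.
LIMITS = {
    "POST /api/v1/requests/create": 60,
    "POST /api/v1/merchants": 30,
}

class EndpointThrottle:
    """Client-side guard: one rolling 1-minute window per endpoint."""

    def __init__(self, limits, window=60.0, clock=time.monotonic, sleep=time.sleep):
        self.limits = limits
        self.window = window
        self.clock = clock          # injectable for testing
        self.sleep = sleep          # injectable for testing
        self.calls = defaultdict(deque)  # endpoint -> recent call timestamps

    def acquire(self, endpoint):
        limit = self.limits.get(endpoint, 60)
        q = self.calls[endpoint]
        now = self.clock()
        # Drop timestamps that have aged out of the window.
        while q and now - q[0] >= self.window:
            q.popleft()
        if len(q) >= limit:
            # Wait until the oldest call leaves the window, then re-drop.
            self.sleep(self.window - (now - q[0]))
            now = self.clock()
            while q and now - q[0] >= self.window:
                q.popleft()
        q.append(now)
```

Call `throttle.acquire("POST /api/v1/merchants")` immediately before each API call. This is an optimistic local mirror of the server's counter; the server's view is authoritative, so keep 429 handling in place regardless.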

Why per-endpoint instead of a global bucket?

Partner workloads are asymmetric. A PSP running reconciliation makes heavy use of GET /api/v1/requests but rarely creates requests or merchants. A checkout integration is the opposite. A single global bucket would penalize the heavy-read workload for read traffic that doesn't affect write capacity at all.

The one explicit trade-off: POST /api/v1/merchants is capped at 30 / min, lower than the others, because merchant creation is expensive (wallet provisioning, onboarding email, assignment row, audit log). Bulk imports at higher rates require coordination with support.
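A bulk import can stay under the 30 / min cap by spacing calls at 60 / 30 = 2 seconds apart rather than bursting and waiting on 429s. A minimal sketch, where `create_fn` stands in for your API client's merchant-creation call:

```python
import time

MERCHANT_LIMIT_PER_MIN = 30  # from the table above


def paced_bulk_create(merchants, create_fn, sleep=time.sleep):
    """Create merchants sequentially, spaced so the steady-state rate
    stays at or under 30 calls per minute."""
    interval = 60.0 / MERCHANT_LIMIT_PER_MIN  # 2.0 seconds between calls
    results = []
    for i, merchant in enumerate(merchants):
        if i:                    # no pause before the first call
            sleep(interval)
        results.append(create_fn(merchant))
    return results
```

Pacing proactively is gentler than bursting because it never trips the limit in the first place, so no calls are wasted on 429 responses.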

429 response shape

HTTP/1.1 429 Too Many Requests
Content-Type: application/json

{
  "ok": false,
  "error": "rate_limited"
}

Handling 429

  1. Don't retry immediately. The limit is a 1-minute rolling window; a retry succeeds only once enough earlier calls have aged out of the window, and waiting a full 60 seconds guarantees that.
  2. Use exponential backoff with jitter. If you do retry (e.g., a bulk-import job resumed mid-flight), start at 60 seconds with ±10s jitter so retries don't synchronize across your workers.
  3. Reduce unnecessary calls. If you're hitting POST /api/v1/requests/create at the limit, audit whether each call is necessary: for example, can you batch the same metadata pattern into fewer requests?
  4. Prefer webhooks over polling. If you're polling GET /api/v1/requests/:id for status, configure a webhook instead. See Webhooks.
Rate-limit headers (X-RateLimit-Remaining, Retry-After) are not currently emitted. Treat 429 as a signal to back off for a full minute; don't rely on client-side counters that could drift from the server's view.
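Points 1 and 2 above can be combined into a small retry wrapper. This is a hedged sketch, not an SDK function: `request_fn` is a placeholder for whatever makes the HTTP call and returns a `(status_code, body_text)` pair, and the clock and jitter sources are injectable:

```python
import json
import random
import time


def call_with_backoff(request_fn, max_attempts=3, base_delay=60.0,
                      jitter=10.0, sleep=time.sleep, rng=random.uniform):
    """Retry a call that may hit the 429 rate_limited shape shown above.

    Waits a full minute plus or minus jitter before each retry, so
    resumed bulk jobs don't synchronize their retries across workers.
    """
    for attempt in range(max_attempts):
        status, body = request_fn()
        if status != 429:
            return status, body
        err = json.loads(body)
        if err.get("error") != "rate_limited":
            return status, body  # some other 429; surface it to the caller
        if attempt < max_attempts - 1:
            sleep(base_delay + rng(-jitter, jitter))  # e.g. 60s ± 10s
    return status, body  # still rate-limited after all attempts
```

Because no rate-limit headers are emitted, the fixed 60-second base delay is the conservative choice: it guarantees the full window has rolled over regardless of when within the minute the 429 was received.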

Other limits

A few operations outside the v1 surface have limits partners should know about:

Operation                     Limit                  Notes
Webhook delivery timeout      8 seconds per attempt  Your endpoint must return 2xx within 8 s or the delivery is marked failed. See Webhooks → Retries.
FX rate refresh (manual)      1 / min per partner    Admin-triggered refresh of the fiat-per-USDC rate. Automatic refreshes on a schedule are unlimited.
Webhook URL / secret updates  10 / min per partner   Partner-portal configuration changes, not webhook deliveries themselves.
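The usual way to stay inside the 8-second delivery timeout is to acknowledge first and process later: the HTTP handler only enqueues the payload and returns 2xx, and a worker does the slow work off the request path. A minimal illustration of that pattern (the names `handle_webhook` and `process` are ours, not part of the API):

```python
import queue
import threading

deliveries = queue.Queue()  # thread-safe FIFO shared with the worker


def handle_webhook(payload):
    """Request-path side: enqueue and acknowledge immediately.

    Enqueueing is O(1), so the response goes out well under the
    8-second delivery timeout even if processing is slow.
    """
    deliveries.put(payload)
    return 200


def worker(process):
    """Background side: drain the queue and do the slow work."""
    while True:
        payload = deliveries.get()
        if payload is None:      # shutdown sentinel
            break
        process(payload)         # slow work happens off the request path
        deliveries.task_done()
```

Any durable queue (database table, Redis list, SQS, etc.) works in place of the in-process `queue.Queue`; the in-process version loses queued deliveries on restart, which the webhook retry schedule partially mitigates.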

What next?