# API Rate Limiting
Arlo enforces rate limits on all API requests to ensure fair use and platform stability for all users.
The limit is 300 requests per minute per platform. When a client exceeds this threshold,
the API returns an HTTP `429 Too Many Requests` response. This page explains which endpoints are
most commonly affected, and how to write code that handles and avoids rate limiting.
## Applies to
Rate limiting applies to all API calls, including the Pub REST API and the Auth REST API.
## Rate limit HTTP response
When a rate limit is exceeded, the API responds with:

```http
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Content-Type: application/json

{
  "error": "rate_limit_exceeded",
  "message": "You have exceeded the allowed number of requests. Please wait before retrying.",
  "retryAfter": 60
}
```
The `Retry-After` response header indicates the number of seconds the client should wait before making another request.
Always read and honour this header in your integration code.
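For instance, honouring the header for a single request might look like the sketch below. The one-retry flow, the `requestOnce` name, and the 60-second fallback are illustrative choices, not part of the Arlo API contract:

```javascript
// Issue a request, waiting out a single 429 before retrying once.
async function requestOnce(url, options = {}) {
  let response = await fetch(url, options);
  if (response.status === 429) {
    // Retry-After is expressed in seconds; fall back to 60 s if it is absent.
    const header = response.headers.get('Retry-After');
    const seconds = header === null ? 60 : Number(header);
    await new Promise((resolve) => setTimeout(resolve, seconds * 1000));
    response = await fetch(url, options);
  }
  return response;
}
```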
## Commonly affected endpoints
The following endpoints are the common sources of rate limit violations. Pay particular attention to request frequency when calling these:
| API | Entity | URL |
|---|---|---|
| Pub | EventSearch | /api/2012-02-01/pub/resources/eventsearch/ |
| Auth | EventSessions | /api/2012-02-01/auth/resources/eventsessions/ |
| Auth | Contacts | /api/2012-02-01/auth/resources/contacts/ |
| Auth | Venues | /api/2012-02-01/auth/resources/venues/ |
| Auth | EventTemplates | /api/2012-02-01/auth/resources/eventtemplates/ |
| Auth | Resources | /api/2012-02-01/auth/resources/ |
## Common patterns that trigger rate limiting
The examples below show valid use cases implemented in ways that frequently cause `429` responses,
followed by the recommended mitigation. In your own integration, wrap each `fetch` call in a helper
that checks for a `429` status, reads the `Retry-After` header, and waits that many
seconds before retrying. Add a short delay (200–500 ms) between paginated requests to avoid
bursting through the limit.
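As a concrete starting point, here is a minimal sketch of such a pair of helpers, using the `fetchWithRetry` and `sleep` names that appear in the patterns below. The cap of three retries and the 60-second fallback are illustrative choices, not Arlo requirements:

```javascript
// Resolve after the given number of milliseconds.
const sleep = (ms) => new Promise((resolve) => setTimeout(resolve, ms));

// Wrap fetch so 429 responses are retried after the server-specified delay.
// The cap of three retries is an arbitrary safety limit, not an Arlo rule.
async function fetchWithRetry(url, options = {}, maxRetries = 3) {
  let response;
  for (let attempt = 0; attempt <= maxRetries; attempt += 1) {
    response = await fetch(url, options);
    if (response.status !== 429 || attempt === maxRetries) break;
    // Retry-After is expressed in seconds; fall back to 60 s if it is absent.
    const header = response.headers.get('Retry-After');
    const seconds = header === null ? 60 : Number(header);
    await sleep(seconds * 1000);
  }
  return response;
}
```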
### Pattern 1: Tight polling loop over a large collection
Paging through contacts or sessions with no delay between requests rapidly exhausts rate limits.
Problematic — no delay and no 429 handling:
```javascript
while (nextUrl) {
  const response = await fetch(nextUrl, options); // no 429 check
  const data = await response.json();
  processContacts(data.Items);
  nextUrl = data.Next ? data.Next.href : null; // immediately follows next link
}
```
Recommended — use a retry helper and add a delay between pages:
```javascript
while (nextUrl) {
  const response = await fetchWithRetry(nextUrl, options); // respects Retry-After on 429
  const data = await response.json();
  processContacts(data.Items);
  nextUrl = data.Next ? data.Next.href : null;
  if (nextUrl) await sleep(500); // polite delay between pages
}
```
### Pattern 2: Per-record session fetching (N+1 requests)
Fetching sessions individually inside an event loop fires one request per event, quickly hitting the limit at scale.
Problematic — one /sessions/ request per event:
```javascript
for (const eventId of eventIds) {
  // N events = N rapid API calls
  const res = await fetch(`/api/2012-02-01/auth/resources/events/${eventId}/sessions/`, options);
  const data = await res.json();
  schedule.push(...data.Items);
}
```
Recommended — query the top-level EventSessions collection once:
```javascript
let nextUrl = '/api/2012-02-01/auth/resources/eventsessions/?orderby=StartDateTime ASC';
while (nextUrl) {
  const res = await fetchWithRetry(nextUrl, options);
  const data = await res.json();
  schedule.push(...data.Items);
  nextUrl = data.Next ? data.Next.href : null;
  if (nextUrl) await sleep(500);
}
```
### Pattern 3: Uncached EventSearch on every page load
Calling EventSearch on every render without caching can exceed rate limits during traffic spikes, such as when a marketing email is sent.
Problematic — live fetch on every render:
```javascript
const res = await fetch('/api/2012-02-01/pub/resources/eventsearch/?format=json');
const data = await res.json();
displayEvents(data.Items);
```
Recommended — serve from a short-lived sessionStorage cache:
```javascript
// Inside an async render function:
const cached = sessionStorage.getItem('arlo_eventsearch');
if (cached) {
  const { ts, data } = JSON.parse(cached);
  if (Date.now() - ts < 60000) { displayEvents(data.Items); return; }
}
const res = await fetchWithRetry('/api/2012-02-01/pub/resources/eventsearch/?format=json', {});
if (res.ok) {
  const data = await res.json();
  sessionStorage.setItem('arlo_eventsearch', JSON.stringify({ ts: Date.now(), data }));
  displayEvents(data.Items);
}
```
## General best practices
- Always check the HTTP response status code before processing a response body. Handle `429` explicitly.
- Read and honour the `Retry-After` response header. Do not use a fixed back-off value if the header is present.
- Add a small delay (e.g. 200–500 ms) between sequential page requests when paginating through large collections.
- Prefer a single collection-level request with filters over multiple per-record requests (avoid the N+1 request pattern).
- Use incremental synchronisation with `LastModifiedDateTime` filters instead of re-fetching entire collections on every sync run.
- If you anticipate high-volume usage, contact Arlo support to discuss your integration requirements.
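The incremental-synchronisation point above can be sketched as follows. The filter syntax `LastModifiedDateTime gt datetime('...')` is an assumption for illustration (check the Arlo API reference for the exact form your API version supports), and in production you would route requests through a 429-aware retry wrapper rather than plain `fetch`:

```javascript
// Incremental sync sketch: fetch only contacts modified since the last run,
// instead of re-reading the whole collection on every sync.
// ASSUMPTION: the "LastModifiedDateTime gt datetime('...')" filter syntax is
// illustrative; verify it against the Arlo API reference for your version.
async function syncContactsSince(lastSyncIso, options = {}) {
  const filter = encodeURIComponent(`LastModifiedDateTime gt datetime('${lastSyncIso}')`);
  let nextUrl = `/api/2012-02-01/auth/resources/contacts/?filter=${filter}`;
  const changed = [];
  while (nextUrl) {
    const res = await fetch(nextUrl, options); // use a 429-aware wrapper in production
    const data = await res.json();
    changed.push(...data.Items);
    nextUrl = data.Next ? data.Next.href : null; // follow pagination links
    if (nextUrl) await new Promise((r) => setTimeout(r, 500)); // polite inter-page delay
  }
  return changed;
}
```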
