Rate Limits
LoomAPI implements rate limiting to ensure fair usage and system stability. Understanding and properly handling rate limits is crucial for building reliable integrations.
Rate Limit Overview
Rate limits are enforced per API key and help prevent abuse while ensuring all users have fair access to our services.
Current Limits
| Endpoint | Limit | Window |
|---|---|---|
| /v1/verify | 100 requests | per minute |
| /v1/status | 1,000 requests | per minute |
| /v1/webhooks | 100 requests | per minute |
| All other endpoints | 100 requests | per minute |
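These limits can also be respected proactively on the client side. Below is a minimal token-bucket sketch, not part of any LoomAPI SDK; the 100-requests-per-minute defaults come from the table above, everything else is illustrative:

```javascript
// Minimal token bucket: refills `limit` tokens once per `windowMs` window.
// Illustrative sketch only -- not part of the LoomAPI SDK.
class TokenBucket {
  constructor(limit = 100, windowMs = 60_000, now = Date.now()) {
    this.limit = limit
    this.windowMs = windowMs
    this.tokens = limit
    this.lastRefill = now
  }

  // Returns true if a request may be sent now, consuming one token.
  tryAcquire(now = Date.now()) {
    if (now - this.lastRefill >= this.windowMs) {
      // The window has passed: refill all tokens.
      this.tokens = this.limit
      this.lastRefill = now
    }
    if (this.tokens > 0) {
      this.tokens--
      return true
    }
    return false
  }
}
```

A real implementation would also queue or delay rejected requests rather than dropping them; this sketch only answers "may I send now?".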
Rate Limit Headers
Every API response includes rate limit information in the headers:
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 97
X-RateLimit-Reset: 1640995200
X-RateLimit-Retry-After: 30
Header Explanations
- X-RateLimit-Limit: Maximum requests allowed in the current window
- X-RateLimit-Remaining: Number of requests remaining in the current window
- X-RateLimit-Reset: Unix timestamp when the rate limit resets
- X-RateLimit-Retry-After: Seconds to wait before making another request (only present when the limit is exceeded)
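One way to work with these headers is to read them into a plain object up front. The helper below is an illustrative sketch (not part of any SDK); it accepts anything with a `get(name)` method, such as the `Headers` object on a fetch response:

```javascript
// Read the rate limit headers into a plain object.
// Works with any object exposing get(name), e.g. fetch's Headers or a Map.
function parseRateLimitHeaders(headers) {
  const num = (name) => {
    const value = headers.get(name)
    return value == null ? null : parseInt(value, 10)
  }
  return {
    limit: num('X-RateLimit-Limit'),
    remaining: num('X-RateLimit-Remaining'),
    resetAt: num('X-RateLimit-Reset'),          // Unix timestamp
    retryAfter: num('X-RateLimit-Retry-After')  // null unless the limit was exceeded
  }
}
```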
Handling Rate Limits
Exponential Backoff
Implement exponential backoff when you hit rate limits:
async function makeRequestWithBackoff(requestFn, maxRetries = 3) {
let retryCount = 0
while (retryCount < maxRetries) {
try {
const response = await requestFn()
// If this response consumed the last request in the window, wait for
// the reset before returning, so the next call doesn't hit a 429.
const remaining = response.headers.get('X-RateLimit-Remaining')
const resetTime = response.headers.get('X-RateLimit-Reset')
if (remaining !== null && parseInt(remaining, 10) === 0) {
const waitTime = calculateBackoffTime(retryCount, resetTime)
await new Promise(resolve => setTimeout(resolve, waitTime))
}
return response
} catch (error) {
// Assumes requestFn throws an error carrying the response's
// status and headers on non-2xx results.
if (error.status === 429) {
const retryAfter = error.headers.get('X-RateLimit-Retry-After')
const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : calculateBackoffTime(retryCount)
await new Promise(resolve => setTimeout(resolve, waitTime))
retryCount++
continue
}
throw error // Not a rate limit error; surface it to the caller.
}
}
throw new Error('Max retries exceeded')
}
function calculateBackoffTime(retryCount, resetTime = null) {
if (resetTime) {
// Wait until the window resets (X-RateLimit-Reset is a Unix timestamp).
const reset = new Date(parseInt(resetTime, 10) * 1000)
const now = new Date()
return Math.max(0, reset.getTime() - now.getTime())
}
// Exponential backoff: 1s, 2s, 4s, 8s...
return Math.pow(2, retryCount) * 1000
}
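One refinement worth noting: when many clients back off in lockstep, they all retry at the same moment. Adding random jitter spreads retries out. The helper below is a hypothetical variant of calculateBackoffTime using the "full jitter" approach (random delay between zero and the exponential cap):

```javascript
// "Full jitter" backoff: pick a random delay in [0, 2^retryCount * baseMs).
// Hypothetical helper; `rng` is injectable so the behavior can be tested.
function jitteredBackoffTime(retryCount, baseMs = 1000, rng = Math.random) {
  const cap = Math.pow(2, retryCount) * baseMs
  return Math.floor(rng() * cap)
}
```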
Rate Limit Response
When you exceed rate limits, you'll receive a 429 Too Many Requests response:
{
"request_id": "req_1234567890",
"error": {
"code": "RATE_LIMIT_EXCEEDED",
"message": "Rate limit exceeded. Try again in 30 seconds.",
"details": {
"retry_after_seconds": 30,
"limit": 100,
"window_seconds": 60
}
}
}
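Because the wait time also appears in the structured details object, clients can read it from the body as well as the header. A small illustrative parser, assuming the error body shape shown above:

```javascript
// Extract how long to wait (in ms) from a 429 error body.
// Falls back to `defaultMs` if the field is missing. Illustrative only.
function retryDelayFromErrorBody(body, defaultMs = 1000) {
  const seconds = body?.error?.details?.retry_after_seconds
  return typeof seconds === 'number' ? seconds * 1000 : defaultMs
}
```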
Best Practices
📊 Monitor Usage
- Track your API usage in the dashboard
- Set up alerts for approaching rate limits
- Log rate limit headers for debugging
⚡ Optimize Requests
- Batch requests when possible
- Cache results appropriately
- Use webhooks for asynchronous processing
🏗️ Architecture Considerations
- Implement request queuing
- Use multiple API keys for different services
- Consider upgrading your plan for higher limits
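The request-queuing idea above can be sketched as a serial queue that inserts a fixed gap between request starts, smoothing bursts below the limit. This is an illustrative pattern, not LoomAPI SDK code; the 600ms default is chosen to stay under 100 requests/minute:

```javascript
// Minimal serial request queue: runs tasks one at a time with a fixed
// gap between them, so bursts are smoothed below the rate limit.
// Illustrative sketch -- tune `gapMs` to your endpoint's limit.
class RequestQueue {
  constructor(gapMs = 600) { // ~100 requests/minute
    this.gapMs = gapMs
    this.chain = Promise.resolve()
  }

  // Enqueue an async task; resolves with the task's result.
  enqueue(task) {
    const result = this.chain.then(task)
    // Keep the chain alive even if a task rejects, and insert the gap.
    this.chain = result
      .catch(() => {})
      .then(() => new Promise(resolve => setTimeout(resolve, this.gapMs)))
    return result
  }
}
```

Usage would look like `queue.enqueue(() => fetch(url, options))`; callers still get each request's own promise back.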
Rate Limit Increases
Need higher rate limits? Contact us about enterprise plans:
- Standard: 100 requests/minute
- Professional: 1,000 requests/minute
- Enterprise: Custom limits
Email enterprise@loomapi.com for custom pricing.
Troubleshooting
Common Issues
- Bursty Traffic: Implement request smoothing or queuing
- Background Jobs: Schedule intensive operations during off-peak hours
- Client-side Limits: Don't rely solely on client-side rate limiting
Testing Rate Limits
Use test mode to simulate rate limiting without consuming production quota:
curl -X POST https://api.loomapi.com/v1/verify \
-H "Authorization: Bearer your_test_key" \
-H "Content-Type: application/json" \
-d '{
"document_type": "passport",
"document_data": "base64_data",
"test_mode": true
}'
Test mode responses include rate limit headers but don't count toward your actual limits.