Rate Limiting
This guide explains Terminal49’s API rate limits and provides strategies for working efficiently within these limits.
Understanding Rate Limits
Rate limiting is a technique used to control the amount of incoming and outgoing traffic to or from a network or service. Terminal49 implements rate limiting to ensure fair usage of the API across all users and to maintain overall system stability and performance.
Current Rate Limits
Terminal49’s API uses the following rate limiting structure:
| Plan | Requests per minute | Requests per hour | Requests per day |
|---|---|---|---|
| Standard | 60 | 1,000 | 10,000 |
| Enterprise | 120 | 5,000 | 50,000 |
These limits may be adjusted based on your specific needs. If you’re consistently hitting rate limits, please contact our support team to discuss options for increasing your quota.
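If you keep these numbers in code, a small lookup table (values copied from the table above) can drive a client-side throttle like the request queue shown later in this guide. A minimal sketch:

// Plan limits from the table above; adjust if your quota has been raised.
const PLAN_LIMITS = {
  standard:   { perMinute: 60,  perHour: 1000, perDay: 10000 },
  enterprise: { perMinute: 120, perHour: 5000, perDay: 50000 }
};

// Example: a safe delay between requests on the Standard plan.
const delayMs = (60 * 1000) / PLAN_LIMITS.standard.perMinute; // 1000 ms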
How Rate Limiting Works
Request Counting
Each API request made with your API key counts toward your rate limit. For example, each of the following counts as a separate request:
- GET /shipments
- GET /containers
- POST /tracking_requests
Terminal49 returns the following headers with each API response to help you monitor your rate limit usage:
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
X-RateLimit-Reset: 1617192000
- X-RateLimit-Limit: The maximum number of requests allowed in the current time window.
- X-RateLimit-Remaining: The number of requests remaining in the current time window.
- X-RateLimit-Reset: The time at which the current rate limit window resets, as a Unix timestamp.
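For example, you can inspect these headers after any request to see how much of your budget remains (a minimal sketch; assumes API_KEY is defined and the code runs inside an async function):

const response = await fetch('https://api.terminal49.com/v2/shipments', {
  headers: {
    'Authorization': `Token ${API_KEY}`,
    'Content-Type': 'application/vnd.api+json'
  }
});

const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
const resetAt = new Date(parseInt(response.headers.get('X-RateLimit-Reset'), 10) * 1000);
console.log(`${remaining} requests remaining until ${resetAt.toISOString()}`);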
Rate Limit Exceeded Response
When you exceed the rate limit, the API will return a 429 Too Many Requests status code with the following response body:
{
  "errors": [
    {
      "status": "429",
      "title": "Too Many Requests",
      "detail": "Rate limit exceeded. Please wait and try again later."
    }
  ]
}
The response also includes headers indicating when you can retry your request; when a Retry-After header is present, it specifies how many seconds to wait.
Best Practices for Handling Rate Limits
Implement Retries with Exponential Backoff
When you receive a 429 response, implement a retry mechanism with exponential backoff:
// Assumes API_KEY is defined elsewhere (e.g. loaded from an environment variable).
async function makeRequestWithRetry(endpoint, maxRetries = 3) {
  let retries = 0;

  while (retries < maxRetries) {
    try {
      const response = await fetch(`https://api.terminal49.com/v2/${endpoint}`, {
        headers: {
          'Authorization': `Token ${API_KEY}`,
          'Content-Type': 'application/vnd.api+json'
        }
      });

      if (response.status === 429) {
        // Prefer the Retry-After header if the server provides one;
        // otherwise fall back to exponential backoff.
        const retryAfter = response.headers.get('Retry-After');
        const waitTime = retryAfter ? parseInt(retryAfter, 10) * 1000 : Math.pow(2, retries) * 1000;
        console.log(`Rate limit exceeded. Retrying after ${waitTime}ms...`);
        await new Promise(resolve => setTimeout(resolve, waitTime));
        retries++;
        continue;
      }

      return response;
    } catch (error) {
      console.error('API request failed:', error);
      retries++;
      if (retries >= maxRetries) {
        throw error;
      }
      // Exponential backoff for network-level failures
      const waitTime = Math.pow(2, retries) * 1000;
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }
  }

  throw new Error('Maximum retry attempts reached');
}
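For example:

// Fetch shipments, retrying automatically on 429 responses.
const response = await makeRequestWithRetry('shipments');
const shipments = await response.json();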
Batch Requests When Possible
Instead of making individual requests for each resource, use the include parameter to fetch related resources in a single request:
// Instead of:
GET /shipments/123
GET /containers/456
GET /containers/789
// Use:
GET /shipments/123?include=containers
This approach not only reduces the number of requests counting toward your rate limit but also improves overall performance.
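Since the API speaks JSON:API (note the application/vnd.api+json content type), side-loaded resources arrive in the response's top-level included array. A minimal sketch of reading them, reusing makeRequestWithRetry from above (the 123 ID and the container type name are illustrative assumptions):

const response = await makeRequestWithRetry('shipments/123?include=containers');
const body = await response.json();

const shipment = body.data;
// JSON:API places side-loaded resources in the top-level `included` array.
// The 'container' type name is an assumption; check your actual payloads.
const containers = (body.included || []).filter(r => r.type === 'container');
console.log(`Shipment ${shipment.id} includes ${containers.length} containers`);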
Implement Request Queuing
For applications that need to make many API requests, implement a request queue that respects the rate limits:
class RequestQueue {
  constructor(requestsPerMinute = 60) {
    this.queue = [];
    this.processing = false;
    this.interval = 60 * 1000 / requestsPerMinute; // Distribute requests evenly
  }

  async add(endpoint, method = 'GET', data = null) {
    return new Promise((resolve, reject) => {
      this.queue.push({ endpoint, method, data, resolve, reject });
      if (!this.processing) {
        this.processQueue();
      }
    });
  }

  async processQueue() {
    this.processing = true;

    while (this.queue.length > 0) {
      const { endpoint, method, data, resolve, reject } = this.queue.shift();
      try {
        const result = await this.makeApiRequest(endpoint, method, data);
        resolve(result);
      } catch (error) {
        reject(error);
      }

      // Wait before processing the next request
      await new Promise(resolve => setTimeout(resolve, this.interval));
    }

    this.processing = false;
  }

  // Minimal request implementation sketch; assumes API_KEY is defined elsewhere.
  async makeApiRequest(endpoint, method, data) {
    const response = await fetch(`https://api.terminal49.com/v2/${endpoint}`, {
      method,
      headers: {
        'Authorization': `Token ${API_KEY}`,
        'Content-Type': 'application/vnd.api+json'
      },
      body: data ? JSON.stringify(data) : undefined
    });
    return response.json();
  }
}
// Usage:
const queue = new RequestQueue(60); // 60 requests per minute
const shipments = await queue.add('shipments');
const containers = await queue.add('containers');
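Because add returns a promise, you can also enqueue a whole batch and wait for all of it; the queue still spaces the requests out under the limit:

// Hypothetical shipment IDs; each request is sent one interval apart.
const ids = ['id-1', 'id-2', 'id-3'];
const results = await Promise.all(ids.map(id => queue.add(`shipments/${id}`)));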
Cache Responses
Implement caching for responses that don’t change frequently to reduce the number of API calls:
const cache = new Map();

async function fetchWithCache(endpoint, cacheTime = 5 * 60 * 1000) { // 5 minutes
  const cacheKey = endpoint;
  const cachedData = cache.get(cacheKey);

  if (cachedData && cachedData.timestamp > Date.now() - cacheTime) {
    console.log(`Using cached data for ${endpoint}`);
    return cachedData.data;
  }

  const response = await fetch(`https://api.terminal49.com/v2/${endpoint}`, {
    headers: {
      'Authorization': `Token ${API_KEY}`,
      'Content-Type': 'application/vnd.api+json'
    }
  });

  // Note: this sketch caches whatever comes back; in production, check
  // response.ok before caching so error responses are not stored.
  const data = await response.json();
  cache.set(cacheKey, {
    timestamp: Date.now(),
    data
  });

  return data;
}
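For example, repeated calls within the cache window are served from memory instead of counting against your limit:

// The first call hits the API; the second (within 5 minutes) uses the cache.
const first = await fetchWithCache('shipments');
const second = await fetchWithCache('shipments');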
Monitor Your Usage
Implement monitoring to track your API usage and detect when you’re approaching rate limits:
function monitorRateLimits(response) {
  const limit = parseInt(response.headers.get('X-RateLimit-Limit'), 10);
  const remaining = parseInt(response.headers.get('X-RateLimit-Remaining'), 10);
  const resetTime = new Date(parseInt(response.headers.get('X-RateLimit-Reset'), 10) * 1000);

  const usagePercentage = ((limit - remaining) / limit) * 100;
  console.log(`API Rate Limit: ${remaining}/${limit} remaining (${usagePercentage.toFixed(2)}% used)`);
  console.log(`Resets at: ${resetTime.toLocaleTimeString()}`);

  // Alert if usage is high
  if (usagePercentage > 80) {
    console.warn('API rate limit usage is high! Consider reducing request frequency.');
  }
}
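You can wire this into a small wrapper so every request is monitored automatically (a sketch; assumes API_KEY is defined):

async function monitoredFetch(endpoint) {
  const response = await fetch(`https://api.terminal49.com/v2/${endpoint}`, {
    headers: {
      'Authorization': `Token ${API_KEY}`,
      'Content-Type': 'application/vnd.api+json'
    }
  });
  monitorRateLimits(response);
  return response;
}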
Enterprise Considerations
Webhooks as an Alternative
For resources that update frequently, consider subscribing to webhooks instead of polling the API:
// Instead of polling every few minutes:
setInterval(async () => {
  const shipments = await fetchWithCache('shipments?filter[status]=active');
  // Process shipments...
}, 5 * 60 * 1000);

// Set up a webhook endpoint to receive updates in real time
// (assumes an Express-style app):
app.post('/webhook/shipment-updates', (req, res) => {
  const shipmentEvent = req.body;
  // Process shipment update in real time
  res.status(200).send('Event received');
});
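One practical detail: acknowledge the webhook immediately and do any heavy processing afterwards, so a slow handler doesn't cause delivery timeouts and duplicate retries. A sketch, again assuming an Express app and a hypothetical processShipmentEvent helper:

app.post('/webhook/shipment-updates', (req, res) => {
  // Acknowledge first so the sender doesn't time out and retry.
  res.status(200).send('Event received');

  // Then process outside the request/response cycle.
  // processShipmentEvent is a hypothetical helper, not part of Terminal49's API.
  setImmediate(() => processShipmentEvent(req.body));
});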
Request Rate Allocation
If you’re building an application with multiple components or users that share the same API key, consider implementing a token bucket algorithm to fairly distribute your rate limit quota:
class TokenBucket {
  constructor(capacity, fillPerSecond) {
    this.capacity = capacity;
    this.tokens = capacity;
    this.lastFill = Date.now();
    this.fillPerSecond = fillPerSecond;
  }

  // Try to consume `count` tokens; returns false if the bucket is empty.
  take(count = 1) {
    this.refill();
    if (this.tokens >= count) {
      this.tokens -= count;
      return true;
    }
    return false;
  }

  // Add tokens based on the time elapsed since the last refill.
  refill() {
    const now = Date.now();
    const deltaSeconds = (now - this.lastFill) / 1000;
    const newTokens = deltaSeconds * this.fillPerSecond;
    this.tokens = Math.min(this.capacity, this.tokens + newTokens);
    this.lastFill = now;
  }
}

// Usage:
// For 60 requests per minute (1 per second)
const bucket = new TokenBucket(60, 1);

async function makeRateLimitedRequest(endpoint) {
  if (bucket.take()) {
    return await fetch(`https://api.terminal49.com/v2/${endpoint}`);
  } else {
    // Wait and retry
    await new Promise(resolve => setTimeout(resolve, 1000));
    return makeRateLimitedRequest(endpoint);
  }
}
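To actually divide the quota between components sharing one API key, give each component its own bucket, with capacities and fill rates that sum to your overall limit, for example:

// Split a 60-requests-per-minute quota between two hypothetical components.
const trackingBucket  = new TokenBucket(40, 40 / 60); // ~40 requests/min
const reportingBucket = new TokenBucket(20, 20 / 60); // ~20 requests/min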
Getting Help
If you consistently hit rate limits despite implementing these best practices, you may need a higher quota for your use case. Contact Terminal49 support at support@terminal49.com with:
- Your API key ID (not the actual key)
- The endpoints that are hitting rate limits
- Your typical request patterns and frequency
- Business justification for needing a higher limit
Next Steps
- Learn about Error Handling for dealing with rate limit errors
- Explore Webhooks as an alternative to frequent API polling
- Understand Pagination to efficiently retrieve large datasets