The meetergo API implements rate limiting to ensure fair usage and system stability.
## Rate Limits
| Limit Type | Value | Window |
|---|---|---|
| Standard rate | 100 requests | Per minute |
| Burst allowance | Up to 200 requests | Short burst |
Limits are applied per API key.
## Rate Limit Response
When you exceed the rate limit, you’ll receive:
```json
{
  "statusCode": 429,
  "message": "Too Many Requests",
  "error": "Too Many Requests"
}
```
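The backoff helpers below assume an API client that surfaces this response as a thrown error carrying the HTTP status code. If you call the API with plain `fetch`, a minimal wrapper along these lines works (the function name and error shape are illustrative, not part of an official SDK):

```javascript
// Minimal sketch: throw on non-2xx responses and attach statusCode,
// so downstream code can check for 429. The error shape is an assumption.
async function apiFetch(endpoint, options = {}) {
  const response = await fetch(endpoint, options);
  if (!response.ok) {
    const error = new Error(`Request failed with status ${response.status}`);
    error.statusCode = response.status; // 429 when rate limited
    throw error;
  }
  return response.json();
}
```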
## Handling Rate Limits
### Exponential Backoff
Implement exponential backoff for 429 responses:
```javascript
async function callWithBackoff(fn, maxRetries = 5) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    try {
      return await fn();
    } catch (error) {
      // Only retry rate-limit errors; rethrow everything else
      if (error.statusCode !== 429) {
        throw error;
      }
      // Exponential delay (1s, 2s, 4s, ...) plus random jitter
      const delay = Math.pow(2, attempt) * 1000 + Math.random() * 1000;
      console.log(`Rate limited. Retrying in ${delay}ms...`);
      await new Promise(resolve => setTimeout(resolve, delay));
    }
  }
  throw new Error('Max retries exceeded');
}
```
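For example, wrapping a single call (here `createBooking` and `bookingData` stand in for whatever request you are protecting):

```javascript
// Retries the wrapped call with exponential backoff on 429 responses
const booking = await callWithBackoff(() => createBooking(bookingData));
```

The random jitter added to each delay keeps many clients from retrying in lockstep once the rate-limit window resets.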
### Python Example
```python
import time
import random

def call_with_backoff(fn, max_retries=5):
    for attempt in range(max_retries):
        try:
            return fn()
        except MeetergoApiError as e:
            if e.status_code != 429:
                raise
            delay = (2 ** attempt) + random.random()
            print(f"Rate limited. Retrying in {delay:.1f}s...")
            time.sleep(delay)
    raise Exception("Max retries exceeded")
```
## Best Practices
### Batch Operations
Instead of making many individual requests, batch where possible:
```javascript
// Instead of this:
for (const userId of userIds) {
  const availability = await getAvailability(userId);
}

// Do this (if your use case allows):
const availabilities = await Promise.all(
  userIds.slice(0, 10).map(id => getAvailability(id))
);
```
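If you need results for more than the first ten users, one option is to process the full list in fixed-size chunks so that no single burst exceeds the limit (the helper name and chunk size below are illustrative):

```javascript
// Illustrative sketch: run getAvailability at most `chunkSize` requests at a time
async function getAvailabilityInChunks(userIds, chunkSize = 10) {
  const results = [];
  for (let i = 0; i < userIds.length; i += chunkSize) {
    const chunk = userIds.slice(i, i + chunkSize);
    results.push(...(await Promise.all(chunk.map(id => getAvailability(id)))));
  }
  return results;
}
```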
### Cache Responses
Cache responses that don’t change frequently:
```javascript
const cache = new Map();
const CACHE_TTL = 60 * 1000; // 1 minute

async function getCachedMeetingTypes(userId) {
  const cacheKey = `meeting-types:${userId}`;
  const cached = cache.get(cacheKey);
  if (cached && Date.now() - cached.timestamp < CACHE_TTL) {
    return cached.data;
  }
  const data = await getMeetingTypes(userId);
  cache.set(cacheKey, { data, timestamp: Date.now() });
  return data;
}
```
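The one-minute TTL here matches the rate-limit window; for data that changes rarely, a longer TTL reduces request volume further at the cost of freshness.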
### Request Queuing
Queue requests to stay within limits:
```javascript
class RateLimitedQueue {
  constructor(requestsPerMinute = 100) {
    this.queue = [];
    this.processing = false;
    // Minimum spacing between requests (600ms at 100 requests/minute)
    this.interval = 60000 / requestsPerMinute;
  }

  async add(fn) {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      this.process();
    });
  }

  async process() {
    if (this.processing || this.queue.length === 0) return;
    this.processing = true;

    const { fn, resolve, reject } = this.queue.shift();
    try {
      const result = await fn();
      resolve(result);
    } catch (error) {
      reject(error);
    }

    // Wait one interval before releasing the next queued request
    setTimeout(() => {
      this.processing = false;
      this.process();
    }, this.interval);
  }
}

// Usage
const queue = new RateLimitedQueue(100);
const result = await queue.add(() => createBooking(data));
```
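At the default 100 requests per minute, the queue spaces calls roughly 600 ms apart, which is the same pacing used in the next example.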
### Spread Requests Over Time
Avoid bursting all requests at once:
```javascript
async function processWithDelay(items, fn, delayMs = 600) {
  const results = [];
  for (const item of items) {
    const result = await fn(item);
    results.push(result);
    // Wait between requests
    await new Promise(resolve => setTimeout(resolve, delayMs));
  }
  return results;
}

// Process 100 items with ~600ms between each = ~100/minute
const results = await processWithDelay(items, processItem, 600);
```
## Monitoring
### Track Request Counts
Monitor your API usage:
```javascript
let requestCount = 0;
let windowStart = Date.now();

function trackRequest() {
  const now = Date.now();

  // Reset counter every minute
  if (now - windowStart > 60000) {
    console.log(`Requests in last minute: ${requestCount}`);
    requestCount = 0;
    windowStart = now;
  }

  requestCount++;

  // Warn when approaching limit
  if (requestCount > 80) {
    console.warn(`Approaching rate limit: ${requestCount}/100`);
  }
}
```
### Log Rate Limit Errors
Track when you hit rate limits:
```javascript
async function callApi(endpoint, options) {
  trackRequest();
  const response = await fetch(endpoint, options);

  // fetch resolves (rather than throws) on HTTP errors, so check the status directly
  if (response.status === 429) {
    // Log for monitoring
    console.error('Rate limit hit', {
      endpoint,
      timestamp: new Date().toISOString(),
      requestsThisMinute: requestCount
    });
    const error = new Error('Too Many Requests');
    error.statusCode = 429;
    throw error;
  }

  return response;
}
```
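These helpers compose. For instance, a bulk job can pace its calls through the queue and still retry any 429 that slips through (the `endpoint` and `options` values are placeholders for your actual request):

```javascript
// Illustrative composition: queued pacing plus exponential backoff on 429s
const queue = new RateLimitedQueue(100);
const response = await queue.add(() =>
  callWithBackoff(() => callApi(endpoint, options))
);
```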
## Summary
- Stay under 100 requests/minute for consistent performance
- Implement exponential backoff for 429 responses
- Cache when possible to reduce unnecessary requests
- Queue requests for bulk operations
- Monitor usage to catch issues early