
Rate Limits

MergeGuide API enforces rate limits to ensure fair usage and service stability.

Rate Limit Tiers

Plan         Requests/Minute   Requests/Hour   Requests/Day
Free         60                1,200           14,400
Pro          120               2,400           28,800
Team         200               4,000           48,000
Business     300               6,000           72,000
Enterprise   1,000             20,000          240,000
Custom       Contact sales     Contact sales   Contact sales
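If you budget requests client-side, the tiers above can be encoded as a small lookup. This is an illustrative sketch: the numbers mirror the table, but `RATE_LIMITS` and `minPollIntervalMs` are hypothetical helpers, not part of any MergeGuide SDK.

```typescript
// Published per-plan quotas (values copied from the table above).
const RATE_LIMITS: Record<string, { perMinute: number; perHour: number; perDay: number }> = {
  free:       { perMinute: 60,    perHour: 1_200,  perDay: 14_400 },
  pro:        { perMinute: 120,   perHour: 2_400,  perDay: 28_800 },
  team:       { perMinute: 200,   perHour: 4_000,  perDay: 48_000 },
  business:   { perMinute: 300,   perHour: 6_000,  perDay: 72_000 },
  enterprise: { perMinute: 1_000, perHour: 20_000, perDay: 240_000 },
};

// Minimum spacing between requests so `workers` concurrent workers
// collectively stay inside the plan's per-minute quota.
function minPollIntervalMs(plan: string, workers: number): number {
  const tier = RATE_LIMITS[plan];
  if (!tier) throw new Error(`Unknown plan: ${plan}`);
  return Math.ceil((60_000 * workers) / tier.perMinute);
}
```

For example, a single Free-tier worker should space requests at least one second apart (60,000 ms ÷ 60 requests).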

Endpoint-Specific Limits

Some endpoints have additional limits:
Endpoint                    Limit   Window
POST /evaluations           60      per minute
POST /compliance/reports    10      per hour
GET /evaluations            100     per minute
Other read endpoints        200     per minute

Rate Limit Headers

Every response includes rate limit information:
HTTP/1.1 200 OK
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 95
X-RateLimit-Reset: 1705311600
X-RateLimit-Window: 60
Header                  Description
X-RateLimit-Limit       Max requests in the current window
X-RateLimit-Remaining   Requests remaining in the window
X-RateLimit-Reset       Unix timestamp when the limit resets
X-RateLimit-Window      Window size in seconds
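The headers above can be parsed into a typed snapshot once and reused across your client. A minimal sketch, assuming only the header names documented here; `getHeader` stands in for however your HTTP client exposes response headers:

```typescript
// Parsed view of the X-RateLimit-* headers documented above.
interface RateLimitInfo {
  limit: number;          // max requests in the current window
  remaining: number;      // requests left in the window
  resetAt: Date;          // converted from the Unix timestamp
  windowSeconds: number;  // window size in seconds
}

// Returns null if any header is absent (e.g. a proxied error response).
function parseRateLimitHeaders(
  getHeader: (name: string) => string | null
): RateLimitInfo | null {
  const limit = getHeader('X-RateLimit-Limit');
  const remaining = getHeader('X-RateLimit-Remaining');
  const reset = getHeader('X-RateLimit-Reset');
  const windowSize = getHeader('X-RateLimit-Window');
  if (limit === null || remaining === null || reset === null || windowSize === null) {
    return null;
  }
  return {
    limit: parseInt(limit, 10),
    remaining: parseInt(remaining, 10),
    resetAt: new Date(parseInt(reset, 10) * 1000),
    windowSeconds: parseInt(windowSize, 10),
  };
}
```

With `fetch`, you would call it as `parseRateLimitHeaders(name => response.headers.get(name))`.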

Rate Limit Exceeded

When rate limited, the API returns:
HTTP/1.1 429 Too Many Requests
Retry-After: 45
X-RateLimit-Limit: 100
X-RateLimit-Remaining: 0
X-RateLimit-Reset: 1705311645

{
  "error": "rate_limited",
  "message": "Rate limit exceeded. Please retry after 45 seconds.",
  "code": "RATE_LIMIT_EXCEEDED",
  "details": {
    "limit": 100,
    "window": "1m",
    "retry_after": 45
  }
}

Handling Rate Limits

Respect Retry-After

Always use the Retry-After header:
const sleep = (ms: number) => new Promise<void>(resolve => setTimeout(resolve, ms));

async function makeRequest(url: string): Promise<Response> {
  const response = await fetch(url, {
    headers: { Authorization: `Bearer ${apiKey}` }
  });

  if (response.status === 429) {
    const retryAfter = parseInt(response.headers.get('Retry-After') || '60', 10);
    await sleep(retryAfter * 1000);
    return makeRequest(url);  // Retry (unbounded; cap attempts in production)
  }

  return response;
}

Implement Exponential Backoff

For robustness, combine with exponential backoff:
async function makeRequestWithBackoff(
  url: string,
  maxRetries = 5
): Promise<Response> {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}` }
    });

    if (response.status === 429) {
      const retryAfter = parseInt(
        response.headers.get('Retry-After') ||
        String(2 ** attempt),  // fall back to exponential delay in seconds
        10
      );
      const jitter = Math.random() * 1000;
      await sleep(retryAfter * 1000 + jitter);
      continue;
    }

    return response;
  }

  throw new Error('Max retries exceeded');
}

Pre-emptive Rate Limiting

Track remaining requests and throttle proactively:
class RateLimitedClient {
  private remaining = 100;
  private resetTime = Date.now();

  async request(url: string): Promise<Response> {
    // Wait if approaching limit
    if (this.remaining <= 5) {
      const waitTime = this.resetTime - Date.now();
      if (waitTime > 0) {
        await sleep(waitTime);
      }
    }

    const response = await fetch(url, {
      headers: { Authorization: `Bearer ${apiKey}` }
    });

    // Update tracking from headers
    this.remaining = parseInt(
      response.headers.get('X-RateLimit-Remaining') || '100', 10
    );
    this.resetTime = parseInt(
      response.headers.get('X-RateLimit-Reset') || '0', 10
    ) * 1000;

    return response;
  }
}

Best Practices

Batch Requests

Instead of making one request per item, batch items into a single request:
// Bad: 10 separate requests
for (const file of files) {
  await client.evaluations.create({ files: [file] });
}

// Good: 1 batched request
await client.evaluations.create({ files: files });

Cache Responses

Cache read operations:
const cache = new Map();

async function getPolicies(): Promise<Policy[]> {
  const cacheKey = 'policies';
  const cached = cache.get(cacheKey);

  if (cached && cached.expires > Date.now()) {
    return cached.data;
  }

  const policies = await client.policies.list();

  cache.set(cacheKey, {
    data: policies,
    expires: Date.now() + 5 * 60 * 1000  // 5 minute cache
  });

  return policies;
}

Use Webhooks

For status updates, use webhooks instead of polling:
// Bad: Polling every 5 seconds
while (evaluation.status === 'pending') {
  await sleep(5000);
  evaluation = await client.evaluations.get(evaluationId);
}

// Good: Webhook notification
// Configure webhook at /webhooks
// Handle POST /your-webhook-endpoint
app.post('/your-webhook-endpoint', (req, res) => {
  const { event, data } = req.body;
  if (event === 'evaluation.completed') {
    processEvaluation(data);
  }
  res.sendStatus(200);
});

Spread Requests Over Time

Avoid bursts by spreading requests:
async function processItems(items: Item[]): Promise<void> {
  const REQUESTS_PER_SECOND = 1;
  const INTERVAL = 1000 / REQUESTS_PER_SECOND;

  for (const item of items) {
    await processItem(item);
    await sleep(INTERVAL);
  }
}

Rate Limit Strategies by Use Case

CI/CD Pipelines

  • Use caching for policy lists
  • Batch file changes in single evaluation
  • Queue builds during rate limit periods

Dashboard/UI

  • Cache API responses with appropriate TTL
  • Implement request debouncing
  • Show stale data with refresh option
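The "request debouncing" point above can be as small as a generic helper that collapses a burst of calls (e.g. search-as-you-type) into one API request after the input settles. A framework-free sketch; the 300 ms delay and `fetchResults` are illustrative, not part of the API:

```typescript
// Returns a wrapper that delays `fn` until `delayMs` of quiet; earlier
// pending calls in the burst are cancelled, so only the last one fires.
function debounce<A extends unknown[]>(
  fn: (...args: A) => void,
  delayMs: number
): (...args: A) => void {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: A) => {
    if (timer !== undefined) clearTimeout(timer);
    timer = setTimeout(() => fn(...args), delayMs);
  };
}

// Usage: only the last keystroke within 300 ms triggers a request.
// const search = debounce((q: string) => fetchResults(q), 300);
```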

Background Processing

  • Implement job queues with rate limiting
  • Process in order of priority
  • Log and alert on consistent rate limiting
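The "job queue with rate limiting" idea above can be sketched as a serial drain with a fixed minimum spacing between jobs, so background work never bursts past the quota. `RateLimitedQueue` is an illustrative name, and this FIFO version omits the priority ordering mentioned above:

```typescript
type Job = () => Promise<void>;

class RateLimitedQueue {
  private queue: Job[] = [];
  private draining = false;

  constructor(private minSpacingMs: number) {}

  // Add a job and start draining if no drain loop is running.
  enqueue(job: Job): void {
    this.queue.push(job);
    if (!this.draining) void this.drain();
  }

  // Run jobs one at a time, waiting minSpacingMs between them.
  private async drain(): Promise<void> {
    this.draining = true;
    while (this.queue.length > 0) {
      const job = this.queue.shift()!;
      await job();
      await new Promise(resolve => setTimeout(resolve, this.minSpacingMs));
    }
    this.draining = false;
  }
}
```

A Free-tier worker, for instance, could use `new RateLimitedQueue(1000)` to stay under 60 requests per minute.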

Requesting Higher Limits

For Enterprise customers needing higher limits:
  1. Contact your account manager
  2. Describe your use case and volume needs
  3. Limits can be customized per-organization

Monitoring

Track Rate Limit Usage

Log rate limit headers for monitoring:
function logRateLimits(response: Response): void {
  const metrics = {
    limit: response.headers.get('X-RateLimit-Limit'),
    remaining: response.headers.get('X-RateLimit-Remaining'),
    reset: response.headers.get('X-RateLimit-Reset'),
    endpoint: response.url
  };

  logger.info('Rate limit status', metrics);

  // Alert if approaching limit
  if (parseInt(metrics.remaining || '100', 10) < 10) {
    alerting.warn('Approaching rate limit', metrics);
  }
}

Dashboard Metrics

Monitor in your MergeGuide dashboard:
  • Settings > Usage shows API usage graphs
  • Set up alerts for rate limit warnings