⚡ Rate Limits

Understanding API rate limits and how to handle them effectively.

Overview

The rPPG API implements rate limiting to ensure fair usage and maintain service quality for all users.

Default Limits

| Limit Type | Default Value | Window |
|------------|---------------|--------|
| Requests per API key | 60 requests | 1 minute |
| Concurrent sessions | 10 active | Per key |
| Max file size | 100 MB | Per upload |
| Session lifetime | 24 hours | Per session |

Rate Limit Headers

Every API response includes rate limit information in the headers:

HTTP/1.1 200 OK
Content-Type: application/json
X-RateLimit-Limit: 60
X-RateLimit-Remaining: 58
X-RateLimit-Reset: 1635264000

Header Definitions

  • X-RateLimit-Limit: Total requests allowed in the current window
  • X-RateLimit-Remaining: Requests remaining in the current window
  • X-RateLimit-Reset: Unix timestamp when the limit resets
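
For example, the reset timestamp lets you compute how long to wait before the window opens again. A minimal sketch (using the health endpoint shown later in this page as a placeholder request):

const response = await fetch('https://api.yourdomain.com/api/health');

// Seconds until the current rate-limit window resets
const resetAt = Number(response.headers.get('X-RateLimit-Reset'));
const secondsUntilReset = Math.max(0, resetAt - Math.floor(Date.now() / 1000));
console.log(`Window resets in ${secondsUntilReset}s`);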

Rate Limit Response

When you exceed the rate limit, the API returns a 429 Too Many Requests response:

{
  "detail": "Rate limit exceeded. Try again in 45 seconds.",
  "retry_after": 45
}
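
If you call the API with fetch directly, you can detect this case by checking the status code and reading retry_after from the JSON body (a minimal sketch; the session ID is a placeholder):

const response = await fetch(
  'https://api.yourdomain.com/api/v1/scan-session/sess_xxx/vitals',
  { headers: { 'Authorization': 'Bearer your-key' } }
);

if (response.status === 429) {
  const body = await response.json();
  console.warn(`Rate limited: ${body.detail} (retry after ${body.retry_after}s)`);
}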

Handling Rate Limits

With the SDK (Automatic)

The SDK automatically handles rate limits with built-in retry logic:

import { RPPGClient, RateLimitError } from 'rppg-api-client';

const client = new RPPGClient({
  apiKey: 'your-api-key',
  baseUrl: 'https://api.yourdomain.com/api',
  maxRetries: 3,      // Retry up to 3 times
  retryDelay: 1000    // Start with 1 second delay
});

try {
  const vitals = await client.analyzeVideo(videoFile);
} catch (error) {
  if (error instanceof RateLimitError) {
    console.log(`Rate limited. Retry after ${error.retryAfter} seconds`);
  }
}

Manual Handling

If calling the API directly, implement retry logic:

async function callWithRetry(apiCall, maxRetries = 3) {
  for (let attempt = 0; attempt < maxRetries; attempt++) {
    const response = await apiCall();

    // fetch() resolves (rather than throwing) on 429, so check the status code
    if (response.status !== 429) {
      return response;
    }

    // Honor the Retry-After header if present, otherwise back off exponentially
    const retryAfter = Number(response.headers.get('retry-after')) || Math.pow(2, attempt);
    console.log(`Rate limited. Retrying in ${retryAfter}s...`);
    await new Promise(resolve => setTimeout(resolve, retryAfter * 1000));
  }

  throw new Error('Max retries exceeded');
}

// Usage
const vitals = await callWithRetry(() => 
  fetch('https://api.yourdomain.com/api/v1/scan-session/sess_xxx/vitals', {
    headers: { 'Authorization': 'Bearer your-key' }
  })
);

Exponential Backoff

Best practice for retrying rate-limited requests:

async function exponentialBackoff(fn, maxRetries = 5) {
  let delay = 1000; // Start with 1 second
  
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await fn();
    } catch (error) {
      if (error.status === 429 && i < maxRetries - 1) {
        console.log(`Retry ${i + 1}/${maxRetries} after ${delay}ms`);
        await new Promise(resolve => setTimeout(resolve, delay));
        delay *= 2; // Double the delay each time
      } else {
        throw error;
      }
    }
  }
}
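
For example, you might wrap a direct API call with this helper. The function above expects the call to throw an error carrying the HTTP status, so the hypothetical fetchVitals helper below converts non-2xx responses into such errors:

// Hypothetical helper that throws an error with a `status` property on non-2xx responses
async function fetchVitals(sessionId) {
  const response = await fetch(
    `https://api.yourdomain.com/api/v1/scan-session/${sessionId}/vitals`,
    { headers: { 'Authorization': 'Bearer your-key' } }
  );
  if (!response.ok) {
    throw Object.assign(new Error(`HTTP ${response.status}`), { status: response.status });
  }
  return response.json();
}

const vitals = await exponentialBackoff(() => fetchVitals('sess_xxx'));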

Best Practices

1. Monitor Rate Limit Headers

Check headers before you hit the limit:

async function smartRequest(url, options) {
  const response = await fetch(url, options);
  
  // Header values arrive as strings, so convert them before comparing
  const remaining = Number(response.headers.get('X-RateLimit-Remaining'));
  const reset = Number(response.headers.get('X-RateLimit-Reset'));
  
  if (remaining < 5) {
    const resetTime = new Date(reset * 1000);
    console.warn(`Only ${remaining} requests left. Resets at ${resetTime}`);
  }
  
  return response;
}

2. Implement Request Queuing

Queue requests to avoid bursts:

class RateLimiter {
  constructor(maxRequests, windowMs) {
    this.maxRequests = maxRequests;
    this.windowMs = windowMs;
    this.timestamps = [];
  }
  
  async execute(fn) {
    // Remove timestamps outside the window
    const now = Date.now();
    this.timestamps = this.timestamps.filter(
      t => now - t < this.windowMs
    );
    
    // Wait if at limit
    if (this.timestamps.length >= this.maxRequests) {
      const oldestTimestamp = this.timestamps[0];
      const waitTime = this.windowMs - (now - oldestTimestamp);
      await new Promise(resolve => setTimeout(resolve, waitTime));
    }
    
    // Execute and record
    this.timestamps.push(Date.now());
    return await fn();
  }
}

// Usage
const limiter = new RateLimiter(60, 60000); // 60 per minute

for (const video of videos) {
  await limiter.execute(async () => {
    return await client.analyzeVideo(video);
  });
}

3. Batch Operations

Process multiple items in controlled batches:

async function batchProcess(items, batchSize, delayMs) {
  const results = [];
  
  for (let i = 0; i < items.length; i += batchSize) {
    const batch = items.slice(i, i + batchSize);
    
    const batchResults = await Promise.all(
      batch.map(item => client.analyzeVideo(item))
    );
    
    results.push(...batchResults);
    
    // Delay between batches
    if (i + batchSize < items.length) {
      await new Promise(resolve => setTimeout(resolve, delayMs));
    }
  }
  
  return results;
}

// Process 5 videos at a time with 2-second delays
const results = await batchProcess(videos, 5, 2000);

4. Cache Results

Avoid redundant API calls by caching results:

import { RPPGClient } from 'rppg-api-client';

class CachedRPPGClient {
  constructor(apiKey, baseUrl) {
    this.client = new RPPGClient({ apiKey, baseUrl });
    this.cache = new Map();
  }
  
  async analyzeVideo(videoFile) {
    const fileHash = await this.hashFile(videoFile);
    
    if (this.cache.has(fileHash)) {
      console.log('Returning cached result');
      return this.cache.get(fileHash);
    }
    
    const vitals = await this.client.analyzeVideo(videoFile);
    this.cache.set(fileHash, vitals);
    
    return vitals;
  }
  
  async hashFile(file) {
    const buffer = await file.arrayBuffer();
    const hashBuffer = await crypto.subtle.digest('SHA-256', buffer);
    return Array.from(new Uint8Array(hashBuffer))
      .map(b => b.toString(16).padStart(2, '0'))
      .join('');
  }
}

Rate Limit Tiers

Different API keys may have different rate limits based on your plan:

| Plan | Requests/Min | Concurrent Sessions | Max File Size |
|------|--------------|---------------------|---------------|
| Free | 10 | 2 | 50 MB |
| Basic | 60 | 10 | 100 MB |
| Pro | 300 | 50 | 200 MB |
| Enterprise | Custom | Custom | Custom |

Contact your account manager to upgrade your plan.

Checking Your Current Limits

Query your current rate limit status:

// Make any API call and check headers
const response = await fetch('https://api.yourdomain.com/api/health');

console.log('Rate Limit:', response.headers.get('X-RateLimit-Limit'));
console.log('Remaining:', response.headers.get('X-RateLimit-Remaining'));
console.log('Resets at:', new Date(
  response.headers.get('X-RateLimit-Reset') * 1000
));

Session-Based Limits

In addition to request rate limits, there are limits on active sessions:

Concurrent Sessions

  • Default: 10 active sessions per API key
  • Active sessions are those with status processing or pending
  • Completed or failed sessions don't count toward the limit
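
If you analyze many videos in parallel, it can help to cap how many are in flight at once on the client side so you stay under this limit. A minimal sketch (the limit of 10 matches the default above):

// Run tasks with at most `limit` running at the same time
async function withConcurrencyLimit(tasks, limit = 10) {
  const results = [];
  let next = 0;

  async function worker() {
    while (next < tasks.length) {
      const index = next++;
      results[index] = await tasks[index]();
    }
  }

  await Promise.all(Array.from({ length: Math.min(limit, tasks.length) }, worker));
  return results;
}

// Usage: at most 10 analyses (and therefore at most 10 active sessions) at a time
const vitals = await withConcurrencyLimit(
  videos.map(video => () => client.analyzeVideo(video)),
  10
);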

Session Lifetime

  • Default: 24 hours
  • After 24 hours, sessions expire and their data is deleted
  • Always retrieve results within 24 hours of creating a session
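
If you store session IDs for later retrieval, it can be worth recording when each session was created and skipping lookups for sessions that have already expired. A minimal sketch, assuming the default 24-hour lifetime and an illustrative storedSession record:

const SESSION_LIFETIME_MS = 24 * 60 * 60 * 1000; // default 24-hour lifetime

function isSessionExpired(createdAt, lifetimeMs = SESSION_LIFETIME_MS) {
  return Date.now() - new Date(createdAt).getTime() > lifetimeMs;
}

// Usage: storedSession is whatever record you keep when creating a session
if (isSessionExpired(storedSession.createdAt)) {
  console.warn(`Session ${storedSession.session_id} has expired; create a new session and re-upload`);
}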

Optimization Tips

1. Don't Reuse Sessions

While possible, avoid reusing sessions. Create a new session for each video:

// โŒ Don't do this
const session = await client.createSession();
await client.uploadVideo(session.session_id, video1);
await client.uploadVideo(session.session_id, video2); // Overwrites!

// ✅ Do this instead
const session1 = await client.createSession();
await client.uploadVideo(session1.session_id, video1);

const session2 = await client.createSession();
await client.uploadVideo(session2.session_id, video2);

2. Optimize Polling

Reduce polling frequency as processing time increases:

async function adaptivePolling(sessionId) {
  let interval = 2000; // Start with 2 seconds
  const maxInterval = 10000; // Max 10 seconds
  
  while (true) {
    const result = await client.getVitals(sessionId);
    
    if (result.status === 'completed' || result.status === 'failed') {
      return result;
    }
    
    await new Promise(resolve => setTimeout(resolve, interval));
    
    // Gradually increase interval
    interval = Math.min(interval * 1.2, maxInterval);
  }
}

3. Compress Videos

Reduce upload time and bandwidth:

// Use lower bitrate or resolution for faster uploads
async function compressVideo(file) {
  // Implementation depends on your environment
  // For browser: Use MediaRecorder with lower bitrate
  // For Node.js: Use ffmpeg
  
  const mediaRecorder = new MediaRecorder(stream, {
    videoBitsPerSecond: 500000 // 500 kbps
  });
  
  // ...
}

Handling 429 Errors

Complete example with error handling:

import { RPPGClient, RateLimitError } from 'rppg-api-client';

async function analyzeWithRateLimitHandling(videoFile) {
  const client = new RPPGClient({
    apiKey: 'your-key',
    baseUrl: 'https://api.yourdomain.com/api'
  });
  
  try {
    const vitals = await client.analyzeVideo(videoFile);
    return { success: true, data: vitals };
    
  } catch (error) {
    if (error instanceof RateLimitError) {
      // Wait and retry
      const retryAfter = error.retryAfter || 60;
      console.log(`Rate limited. Waiting ${retryAfter} seconds...`);
      
      await new Promise(resolve => 
        setTimeout(resolve, retryAfter * 1000)
      );
      
      // Retry once
      try {
        const vitals = await client.analyzeVideo(videoFile);
        return { success: true, data: vitals };
      } catch (retryError) {
        return {
          success: false,
          error: 'Rate limit exceeded after retry',
          retryable: true
        };
      }
    }
    
    return {
      success: false,
      error: error.message,
      retryable: false
    };
  }
}

Monitoring Usage

Track your API usage over time:

class UsageTracker {
  constructor() {
    this.requests = [];
  }
  
  trackRequest(endpoint, status, remaining) {
    this.requests.push({
      endpoint,
      status,
      remaining,
      timestamp: Date.now()
    });
  }
  
  getStats(windowMs = 60000) {
    const now = Date.now();
    const recent = this.requests.filter(
      r => now - r.timestamp < windowMs
    );
    
    return {
      total: recent.length,
      successful: recent.filter(r => r.status < 400).length,
      rateLimited: recent.filter(r => r.status === 429).length,
      avgRemaining: recent.reduce((sum, r) => sum + r.remaining, 0) / recent.length
    };
  }
}

const tracker = new UsageTracker();

// Wrap client calls to record usage. This assumes the client exposes the
// X-RateLimit-Remaining value on its result as `remaining`; adapt it to
// however your client surfaces rate-limit headers.
const originalAnalyze = client.analyzeVideo;
client.analyzeVideo = async function (file) {
  const result = await originalAnalyze.call(this, file);
  tracker.trackRequest('/analyze', 200, result.remaining);
  return result;
};
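
You can then inspect the last minute of traffic at any point, for example before kicking off a new batch:

const stats = tracker.getStats();
if (stats.rateLimited > 0) {
  console.warn(`Hit the rate limit ${stats.rateLimited} times in the last minute`);
}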

Questions?

If you need higher rate limits or have specific usage patterns that don't fit the default limits, please contact your account manager or email support@yourdomain.com.


Next Steps

Have questions? Contact us at support@circadify.com