See the following blog post on the Webex Developer Portal for complete details on this application: Rate Limiting with the Webex API.
This simple Python demonstration shows how to properly handle rate limiting when making requests to the Webex APIs, implementing the Retry-After header pattern for robust API integration.
- Rate Limit Handling - Proper implementation of HTTP 429 response handling
- Retry-After Implementation - Respects server-provided retry delays
- Progress Monitoring - Real-time feedback during sleep periods
- Continuous Testing - Infinite loop for demonstrating rate limit behavior
- Response Tracking - HTTP status codes and tracking IDs for debugging
- Chunked Sleep - User-friendly progress updates during long waits
- Python 3.x
- Valid Webex personal access token
- Internet connection for API calls
- Clone or download the file:

  wget https://raw.githubusercontent.com/WebexSamples/WebexRetryAfterDemo/main/retryafterdemo.py # or manually download retryafterdemo.py

- Configure your access token:

  # Edit retryafterdemo.py
  bearer = "YOUR_PERSONAL_ACCESS_TOKEN_HERE"

- Run the demonstration:

  python retryafterdemo.py

- Observe the output:
  - Normal API responses with status codes and tracking IDs
  - Rate limit detection and Retry-After header handling
  - Progress updates during sleep periods
- Visit Webex Developer Portal
- Log in with your Webex account
- Copy your personal access token
- Replace `PERSONAL_ACCESS_TOKEN` in the code
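As an alternative to hard-coding the token, it could be read from an environment variable. This is a minimal sketch; the WEBEX_ACCESS_TOKEN variable name is illustrative and not part of the demo:

import os

# Hypothetical alternative: export WEBEX_ACCESS_TOKEN instead of editing the script
bearer = os.environ.get("WEBEX_ACCESS_TOKEN", "YOUR_PERSONAL_ACCESS_TOKEN_HERE")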
# Normal successful response
200 1640995200.123456 Y2lzY29zcGFyazovL3VzL1RSQUNLSU5HX0lE
# Rate limit detected
code 429
headers Server: nginx
Retry-After: 60
Date: Thu, 01 Jan 2024 12:00:00 GMT
...
Sleeping for 60 seconds
Asleep for 50 more seconds
Asleep for 40 more seconds
...
- Normal Operation:
  - Continuous API calls to the /v1/rooms endpoint
  - Display of HTTP 200 responses with timestamps
  - Tracking ID logging for debugging purposes
- Rate Limit Triggered:
  - HTTP 429 status code detection
  - Retry-After header value extraction
  - Intelligent sleep with progress updates
  - Automatic resumption after wait period
import time
import urllib.error
import urllib.request

def sendWebexGET(url):
    """Send an authenticated GET request to the Webex API."""
    request = urllib.request.Request(url,
                                     headers={"Accept": "application/json",
                                              "Content-Type": "application/json"})
    request.add_header("Authorization", "Bearer " + bearer)
    response = urllib.request.urlopen(request)
    return response
# Main execution loop
while True:
    try:
        result = sendWebexGET('https://webexapis.com/v1/rooms')
        print(result.getcode(), time.time(), result.headers['Trackingid'])
    except urllib.error.HTTPError as e:
        if e.code == 429:
            # Handle rate limiting
            print('code', e.code)
            print('headers', e.headers)
            print('Sleeping for', e.headers['Retry-After'], 'seconds')
            # Chunked sleep for better user experience
            sleep_time = int(e.headers['Retry-After'])
            while sleep_time > 10:
                time.sleep(10)
                sleep_time -= 10
                print('Asleep for', sleep_time, 'more seconds')
            time.sleep(sleep_time)
        else:
            # Handle other HTTP errors
            print(e, e.code)
            break
Component | Description | Purpose |
---|---|---|
Infinite Loop | `while True:` continuous execution | Demonstrate repeated API calls |
Exception Handling | `try/except` for HTTP errors | Catch and handle rate limits |
Retry-After Header | `e.headers['Retry-After']` | Server-specified wait time |
Chunked Sleep | 10-second intervals with updates | User-friendly progress tracking |
Bearer Authentication | `Authorization: Bearer` header | Webex API authentication |
When rate limits are exceeded, Webex APIs return:
HTTP/1.1 429 Too Many Requests
Retry-After: 60
Date: Thu, 01 Jan 2024 12:00:00 GMT
Server: nginx
Content-Type: application/json
{
"message": "Rate limit exceeded",
"errors": [
{
"description": "Too Many Requests"
}
]
}
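A minimal sketch of reading that error body with the urllib pattern used in this demo (the describe_429 helper name is illustrative, not part of the demo):

import json
import urllib.error

def describe_429(error: urllib.error.HTTPError):
    """Print the error message and Retry-After value from a 429 response."""
    body = json.loads(error.read().decode())
    print(body.get("message"), "- retry after", error.headers.get("Retry-After"), "seconds")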
Value Type | Example | Description |
---|---|---|
Seconds | `60` | Wait 60 seconds before retry |
HTTP Date | `Thu, 01 Jan 2024 12:01:00 GMT` | Wait until specified time |
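The demo itself assumes the integer form; a hedged sketch that normalizes either form into a number of seconds to wait (retry_after_seconds is an illustrative helper, not part of the demo):

import email.utils
import time

def retry_after_seconds(value):
    """Convert a Retry-After value (seconds or HTTP-date) into seconds to wait."""
    try:
        return float(value)  # e.g. "60"
    except ValueError:
        target = email.utils.parsedate_to_datetime(value)  # e.g. "Thu, 01 Jan 2024 12:01:00 GMT"
        return max(0.0, target.timestamp() - time.time())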
- Always Check for 429: Specifically handle rate limit responses
- Respect Retry-After: Use server-provided wait times
- Graceful Degradation: Continue operation after rate limits
- User Feedback: Provide progress updates during waits
- Error Handling: Manage other HTTP errors appropriately
API Category | Typical Limits | Retry Behavior |
---|---|---|
REST APIs | 300 requests/minute | Exponential backoff |
Admin APIs | 100 requests/minute | Fixed Retry-After |
Compliance | 50 requests/minute | Longer wait periods |
Recordings | 20 requests/minute | Progressive delays |
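Where exponential backoff is called for, a minimal sketch (backoff_delay is an illustrative helper; the base and cap values are assumptions, not documented limits):

import random

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Exponential backoff with full jitter: roughly 1s, 2s, 4s, ... capped at 60s."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))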
The demo will trigger rate limits by:
- Making continuous requests without delays
- Exceeding the /v1/rooms endpoint limits
- Demonstrating real-world rate limiting scenarios
# Progressive sleep with user feedback
sleep_time = int(e.headers['Retry-After'])
while sleep_time > 10:
    time.sleep(10)                                   # Sleep in 10-second chunks
    sleep_time -= 10                                 # Decrement remaining time
    print('Asleep for', sleep_time, 'more seconds')  # Progress update
time.sleep(sleep_time)                               # Final sleep for remainder
# Add delays to reduce rate limiting
import time

while True:
    try:
        result = sendWebexGET('https://webexapis.com/v1/rooms')
        print(result.getcode(), time.time(), result.headers['Trackingid'])
        time.sleep(2)  # Add a 2-second delay between requests
    except urllib.error.HTTPError as e:
        # Rate limiting logic as shown above...
        pass
# Test various API endpoints
endpoints = [
    'https://webexapis.com/v1/rooms',
    'https://webexapis.com/v1/people/me',
    'https://webexapis.com/v1/messages'
]

for endpoint in endpoints:
    result = sendWebexGET(endpoint)
    print(f"{endpoint}: {result.getcode()}")
# Check rate limit headers in responses
def print_rate_limit_info(response):
    headers = response.headers
    if 'X-RateLimit-Limit' in headers:
        print(f"Rate Limit: {headers['X-RateLimit-Limit']}")
    if 'X-RateLimit-Remaining' in headers:
        print(f"Remaining: {headers['X-RateLimit-Remaining']}")
    if 'X-RateLimit-Reset' in headers:
        print(f"Reset Time: {headers['X-RateLimit-Reset']}")
Issue | Solution |
---|---|
Invalid Token | Replace PERSONAL_ACCESS_TOKEN with valid token |
No Rate Limits | Add more aggressive request patterns or remove delays |
Connection Errors | Check internet connectivity and API status |
Permission Errors | Ensure token has appropriate scopes |
# Add debug information
try:
    result = sendWebexGET('https://webexapis.com/v1/rooms')
    print(f"Success: {result.getcode()} at {time.time()}")
    print(f"Tracking ID: {result.headers.get('Trackingid', 'None')}")
    print(f"Response Headers: {dict(result.headers)}")
except urllib.error.HTTPError as e:
    print(f"HTTP Error: {e.code}")
    print(f"Error Headers: {dict(e.headers)}")
    print(f"Error Message: {e.read().decode()}")
- Proxy Settings: Configure urllib for corporate proxies
- SSL Verification: Handle certificate validation if needed
- Timeout Settings: Add request timeouts for reliability
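For example, a request timeout can be passed to urlopen. This sketch assumes the sendWebexGET helper and bearer token defined earlier in the demo:

import urllib.request

def sendWebexGET_with_timeout(url, timeout=10):
    """Variant of sendWebexGET that raises if the API does not respond in time."""
    request = urllib.request.Request(url, headers={"Accept": "application/json"})
    request.add_header("Authorization", "Bearer " + bearer)  # bearer as defined in the demo
    return urllib.request.urlopen(request, timeout=timeout)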
This demo teaches:
- HTTP Status Code Handling: Understanding 429 responses
- Header Processing: Reading and using Retry-After values
- Exception Management: Graceful error handling in Python
- API Best Practices: Respectful rate limit behavior
- User Experience: Providing feedback during waits
Apply these patterns to:
- Data Migration: Bulk operations with rate limiting
- Monitoring Tools: Regular API polling with backoff
- Integration Services: Reliable third-party API usage
- Batch Processing: Large-scale operations with throttling
# Production-ready rate limiting
import random
import time
import urllib.error

class RateLimitHandler:
    def __init__(self, max_retries=3):
        self.max_retries = max_retries

    def make_request(self, url):
        for attempt in range(self.max_retries):
            try:
                return sendWebexGET(url)
            except urllib.error.HTTPError as e:
                if e.code == 429 and attempt < self.max_retries - 1:
                    wait_time = int(e.headers.get('Retry-After', 60))
                    # Add jitter to prevent thundering herd
                    jitter = random.uniform(0.1, 0.3) * wait_time
                    time.sleep(wait_time + jitter)
                else:
                    raise
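Example usage of the handler above (assuming sendWebexGET and bearer are defined as in the demo):

handler = RateLimitHandler(max_retries=3)
rooms = handler.make_request('https://webexapis.com/v1/rooms')
print(rooms.getcode())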
Suggestions for enhancing this demo:
- Additional Error Types: Handle more HTTP status codes
- Multiple Endpoints: Test different API rate limits
- Metrics Collection: Track rate limit patterns
- Configuration Options: External token management
- Advanced Backoff: Exponential or jittered strategies
This demo is part of the Webex Samples collection and follows the same licensing terms.
For more information:
- Blog Post: Rate Limiting with the Webex API
- API Documentation: Webex Rate Limiting
- Developer Community: Webex Developer Portal
Made with ❤️ by the Webex Developer Relations Team at Cisco
Note: This demo intentionally triggers rate limits for educational purposes. In production applications, implement proper request spacing and rate limit prevention strategies.