A robust Node.js proxy server that automatically rotates API keys for Gemini and OpenAI APIs when rate limits (429 errors) are encountered. Built with zero dependencies and comprehensive logging.
- **Automatic Key Rotation**: Seamlessly switches to the next API key on 429 errors
- **Multi-API Support**: Works with both Gemini and OpenAI APIs simultaneously
- **Flexible Base URLs**: Use custom endpoints or default API servers
- **Detailed Logging**: Track every request, rotation, and error with masked API keys
- **Zero Dependencies**: Pure Node.js with no external packages
- **File Upload Support**: Handles multipart/form-data and binary uploads
- **Error Handling**: Proper error responses and graceful failures
git clone https://github.com/p32929/openai-gemini-api-key-rotator.git
cd openai-gemini-api-key-rotator
Copy the example environment file and add your API keys:
cp .env.example .env
Edit `.env`:
# Required
PORT=3000
# At least one of these is required
GEMINI_API_KEYS=AIzaSyABC123...,AIzaSyDEF456...,AIzaSyGHI789...
OPENAI_API_KEYS=sk-proj-abc123...,sk-proj-def456...,sk-proj-ghi789...
# Optional - Custom base URL for all API calls
# BASE_URL=https://your-custom-server.com
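For reference, parsing those comma-separated key lists needs nothing beyond plain Node.js. The snippet below is a minimal sketch of the idea, not the project's actual `config.js`; the variable names are illustrative.

```js
// Sketch: read comma-separated API keys from the environment (illustrative only)
const geminiKeys = (process.env.GEMINI_API_KEYS || '')
  .split(',')
  .map((key) => key.trim())
  .filter(Boolean);

const openaiKeys = (process.env.OPENAI_API_KEYS || '')
  .split(',')
  .map((key) => key.trim())
  .filter(Boolean);

if (geminiKeys.length === 0 && openaiKeys.length === 0) {
  throw new Error('Set GEMINI_API_KEYS and/or OPENAI_API_KEYS in .env');
}

console.log(`Found ${geminiKeys.length} Gemini and ${openaiKeys.length} OpenAI keys`);
```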
npm start
You'll see output like:
[CONFIG] Port: 3000
[CONFIG] Using default API endpoints
[CONFIG] Found 3 Gemini API keys
[CONFIG] Found 2 OpenAI API keys
[GEMINI-ROTATOR] Initialized with 3 API keys
[OPENAI-ROTATOR] Initialized with 2 API keys
Multi-API proxy server running on port 3000
Available Gemini API keys: 3
Gemini endpoints: /gemini/v1/* and /gemini/v1beta/*
Available OpenAI API keys: 2
OpenAI endpoints: /openai/v1/*
| API | Endpoint Pattern | Example |
|---|---|---|
| Gemini | `/gemini/v1/*` | `/gemini/v1/models/gemini-pro:generateContent` |
| Gemini Beta | `/gemini/v1beta/*` | `/gemini/v1beta/models/gemini-pro:generateContent` |
| OpenAI | `/openai/v1/*` | `/openai/v1/chat/completions` |
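The proxy strips the `/gemini` or `/openai` prefix and forwards the remaining path upstream (as the routing logs later in this README show), so any HTTP client can use these routes. Below is a minimal Node.js `fetch` sketch for the Gemini route; it assumes Node 18+ running as an ES module (for top-level `await`), and it sends no API key because the proxy injects and rotates keys itself:

```js
// Call the proxy instead of generativelanguage.googleapis.com directly;
// no client-side API key needed -- the proxy injects and rotates keys.
const res = await fetch(
  'http://localhost:3000/gemini/v1/models/gemini-2.5-pro:generateContent',
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      contents: [{ parts: [{ text: 'Hello from the proxy!' }] }],
    }),
  }
);

console.log(JSON.stringify(await res.json(), null, 2));
```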
curl -X POST "http://localhost:3000/gemini/v1/models/gemini-2.5-pro:generateContent" \
-H "Content-Type: application/json" \
-d '{
"contents": [{
"parts": [{
"text": "Write a short poem about API key rotation"
}]
}]
}'
curl -X POST "http://localhost:3000/gemini/v1/models/gemini-2.5-pro:generateContent" \
-H "Content-Type: application/json" \
-d '{
"system_instruction": {
"parts": [{
"text": "You are a helpful coding assistant."
}]
},
"contents": [{
"parts": [{
"text": "Explain what API rate limiting is"
}]
}],
"generationConfig": {
"temperature": 0.7,
"maxOutputTokens": 100
}
}'
curl -X GET "http://localhost:3000/gemini/v1/models"
curl -X GET "http://localhost:3000/gemini/v1/models/gemini-2.5-pro"
curl -X POST "http://localhost:3000/gemini/v1/models/gemini-2.5-pro:generateContent" \
-H "Content-Type: application/json" \
-d '{
"contents": [{
"parts": [
{
"text": "What do you see in this image?"
},
{
"inline_data": {
"mime_type": "image/jpeg",
"data": "base64_encoded_image_data_here"
}
}
]
}]
}'
# First, encode your image to base64 (strip newlines so it embeds cleanly in the JSON body)
IMAGE_DATA=$(base64 -i path/to/your/image.jpg | tr -d '\n')
curl -X POST "http://localhost:3000/gemini/v1/models/gemini-2.5-pro:generateContent" \
-H "Content-Type: application/json" \
-d '{
"contents": [{
"parts": [
{
"text": "Describe this image in detail"
},
{
"inline_data": {
"mime_type": "image/jpeg",
"data": "'$IMAGE_DATA'"
}
}
]
}]
}'
curl -X POST "http://localhost:3000/gemini/v1/models/gemini-2.5-pro:generateContent" \
-H "Content-Type: application/json" \
-d '{
"contents": [{
"parts": [
{
"text": "Compare these two images and tell me the differences"
},
{
"inline_data": {
"mime_type": "image/jpeg",
"data": "base64_image1_data_here"
}
},
{
"inline_data": {
"mime_type": "image/png",
"data": "base64_image2_data_here"
}
}
]
}]
}'
curl -X POST "http://localhost:3000/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Explain API key rotation in simple terms"
}
],
"max_tokens": 150,
"temperature": 0.7
}'
curl -X POST "http://localhost:3000/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{
"role": "user",
"content": "Write a haiku about programming"
}
],
"stream": true
}'
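To consume the same stream from Node instead of curl, something like the following works against the proxy (a sketch assuming Node 18+ in an ES module; the SSE handling is deliberately minimal and just echoes the raw `data:` lines):

```js
// Minimal streaming sketch (Node 18+): print SSE chunks as they arrive.
// Production code should buffer partial lines before parsing the JSON payloads.
const res = await fetch('http://localhost:3000/openai/v1/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: 'Write a haiku about programming' }],
    stream: true,
  }),
});

const decoder = new TextDecoder();
for await (const chunk of res.body) {
  // Each chunk contains one or more "data: {...}" SSE lines
  process.stdout.write(decoder.decode(chunk, { stream: true }));
}
```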
curl -X GET "http://localhost:3000/openai/v1/models"
curl -X POST "http://localhost:3000/openai/v1/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"prompt": "The benefits of API key rotation are:",
"max_tokens": 100,
"temperature": 0.5
}'
curl -X POST "http://localhost:3000/openai/v1/embeddings" \
-H "Content-Type: application/json" \
-d '{
"model": "text-embedding-ada-002",
"input": "API key rotation helps maintain service availability"
}'
curl -X POST "http://localhost:3000/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "What do you see in this image?"
},
{
"type": "image_url",
"image_url": {
"url": "https://example.com/image.jpg"
}
}
]
}
],
"max_tokens": 300
}'
# First, encode your image to base64 (strip newlines so it embeds cleanly in the JSON body)
IMAGE_DATA=$(base64 -i path/to/your/image.jpg | tr -d '\n')
curl -X POST "http://localhost:3000/openai/v1/chat/completions" \
-H "Content-Type: application/json" \
-d '{
"model": "gpt-4o-mini",
"messages": [
{
"role": "user",
"content": [
{
"type": "text",
"text": "Describe this image in detail"
},
{
"type": "image_url",
"image_url": {
"url": "data:image/jpeg;base64,'$IMAGE_DATA'"
}
}
]
}
],
"max_tokens": 500
}'
curl -X POST "http://localhost:3000/openai/v1/files" \
-H "Content-Type: multipart/form-data" \
-F "file=@path/to/your/document.pdf" \
-F "purpose=assistants"
curl -X POST "http://localhost:3000/openai/v1/audio/transcriptions" \
-H "Content-Type: multipart/form-data" \
-F "file=@path/to/your/audio.mp3" \
-F "model=whisper-1"
curl -X POST "http://localhost:3000/openai/v1/audio/translations" \
-H "Content-Type: multipart/form-data" \
-F "file=@path/to/your/audio.mp3" \
-F "model=whisper-1"
curl -X POST "http://localhost:3000/openai/v1/audio/speech" \
-H "Content-Type: application/json" \
-d '{
"model": "tts-1",
"input": "Hello, this is a test of the text-to-speech API",
"voice": "alloy"
}' \
--output speech.mp3
| Variable | Required | Description | Example |
|---|---|---|---|
| `PORT` | Yes | Server port | `3000` |
| `GEMINI_API_KEYS` | Optional* | Comma-separated Gemini API keys | `AIza...,AIza...` |
| `OPENAI_API_KEYS` | Optional* | Comma-separated OpenAI API keys | `sk-proj-...,sk-proj-...` |
| `BASE_URL` | No | Custom base URL for all APIs | `https://api.example.com` |

*At least one API key type is required.
# OpenRouter (supports 100+ models including Claude, GPT-4, Llama, etc.)
OPENAI_BASE_URL=https://openrouter.ai/api
OPENAI_API_KEYS=sk-or-v1-your-key-here
# Groq (ultra-fast inference for Llama, Mixtral, Gemma models)
OPENAI_BASE_URL=https://api.groq.com/openai
OPENAI_API_KEYS=gsk_your-groq-key-here
# Together AI (open source models)
OPENAI_BASE_URL=https://api.together.xyz
OPENAI_API_KEYS=your-together-key-here
# Anthropic Claude (direct)
OPENAI_BASE_URL=https://api.anthropic.com
OPENAI_API_KEYS=sk-ant-your-key-here
# Use custom proxy or local server
OPENAI_BASE_URL=https://your-proxy-server.com
# OPENAI_BASE_URL=http://localhost:8080
# Use default endpoints (OpenAI official)
# OPENAI_BASE_URL=
The server provides detailed logging for monitoring and debugging:
[CONFIG] Loading configuration from /path/to/.env
[CONFIG] Port: 3000
[CONFIG] Using default API endpoints
[CONFIG] Found 2 Gemini API keys
[CONFIG] Found 3 OpenAI API keys
[CONFIG] Gemini Key 1: [AIza...1234]
[CONFIG] Gemini Key 2: [AIza...5678]
[CONFIG] OpenAI Key 1: [sk-p...9012]
[CONFIG] OpenAI Key 2: [sk-p...3456]
[CONFIG] OpenAI Key 3: [sk-p...7890]
[GEMINI-ROTATOR] Initialized with 2 API keys
[OPENAI-ROTATOR] Initialized with 3 API keys
[INIT] Gemini client initialized
[INIT] OpenAI client initialized
[REQ-abc123def] POST /openai/v1/chat/completions from 127.0.0.1
[REQ-abc123def] Proxying to OPENAI: /v1/chat/completions
[OPENAI::sk-p...9012] Currently active API key (1/3)
[OPENAI::sk-p...9012] Attempting POST /v1/chat/completions (attempt 1)
[OPENAI::sk-p...9012] Rate limited (429) - rotating to next key
[OPENAI::sk-p...9012] Key marked as failed (1/3 failed)
[OPENAI-ROTATOR] Rotated from index 0 to 1 -> [OPENAI::sk-p...3456]
[OPENAI::sk-p...3456] Currently active API key (2/3)
[OPENAI::sk-p...3456] Attempting POST /v1/chat/completions (attempt 2)
[OPENAI::sk-p...3456] Success (200)
[REQ-abc123def] Response: 200
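The `[sk-p...9012]` style above is what keeps full keys out of the logs. A masking helper along these lines would produce that output (a small sketch, not necessarily the project's exact implementation):

```js
// Sketch: show only the first 4 and last 4 characters of a key
function maskKey(key) {
  if (key.length <= 8) return '[****]';
  return `[${key.slice(0, 4)}...${key.slice(-4)}]`;
}

console.log(maskKey('sk-proj-abc123xyz9012')); // [sk-p...9012]
```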
- **Request Routing**: Incoming requests are routed based on URL patterns
- **Key Selection**: The first available API key is selected
- **API Call**: The request is forwarded to the appropriate API endpoint
- **Error Handling**:
  - **429 (Rate Limited)**: Automatically rotates to the next key and retries (sketched below)
  - **Other Errors**: Returns the original error response
  - **Exhaustion**: When all keys are rate limited, returns 429 with a descriptive error
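In code, that rotate-on-429 flow boils down to a retry loop that advances a key index and gives up once every key has been tried. The sketch below illustrates the idea with Node's built-in `fetch`; it is not the project's actual `keyRotator.js`, and the `Authorization` header assumes an OpenAI-style bearer token.

```js
// Sketch of rotate-on-429 (illustrative; not the actual keyRotator.js)
async function fetchWithRotation(keys, url, init = {}) {
  let index = 0;
  for (let attempt = 0; attempt < keys.length; attempt++) {
    const key = keys[index];
    const res = await fetch(url, {
      ...init,
      headers: { ...(init.headers || {}), Authorization: `Bearer ${key}` },
    });

    if (res.status !== 429) return res; // success or a non-rate-limit error: pass it through

    // Rate limited: rotate to the next key and retry
    index = (index + 1) % keys.length;
  }

  // Every key is rate limited
  return new Response(JSON.stringify({ error: 'All API keys are rate limited' }), {
    status: 429,
    headers: { 'Content-Type': 'application/json' },
  });
}
```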
openai-gemini-api-key-rotator/
├── src/
│   ├── config.js          # Environment configuration
│   ├── keyRotator.js      # API key rotation logic
│   ├── geminiClient.js    # Gemini API client
│   ├── openaiClient.js    # OpenAI API client
│   └── server.js          # HTTP proxy server
├── index.js               # Main entry point
├── package.json           # Project metadata (no runtime dependencies)
├── .env.example           # Environment template
└── README.md              # This file
Made with ❤️ for developers who hate rate limits