Quick Start Guide
Get up and running with the OpenRouterX API in minutes. The API is fully compatible with OpenAI's interface.
Get your API key
Sign up and create an API key from your dashboard.
export OPENROUTERX_API_KEY="your_api_key_here"
Install the client
Use our official Python client or any OpenAI-compatible library.
pip install openai # OpenAI client works with our API
Make your first request
Send a chat completion request to Claude Code or Gemini Pro.
import openai

client = openai.OpenAI(
    api_key="your_api_key_here",
    base_url="https://api.openrouterx.com/v1"
)

response = client.chat.completions.create(
    model="claude-code",
    messages=[
        {"role": "user", "content": "Write a Python function to calculate fibonacci"}
    ]
)

print(response.choices[0].message.content)
Authentication
OpenRouterX uses API keys for authentication. Include your API key in the Authorization header.
Authorization: Bearer your_api_key_here
Never expose your API key in client-side code. Use environment variables or secure key management systems.
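For example, in Python you can read the key from the environment instead of hard-coding it. A minimal sketch (the placeholder fallback exists only so the snippet runs standalone; in production, fail if the variable is missing):

```python
import os

# Read the API key from the environment rather than embedding it in source.
# The fallback placeholder keeps this snippet runnable on its own.
api_key = os.environ.get("OPENROUTERX_API_KEY", "your_api_key_here")

# Build the Authorization header from the key held only in memory.
headers = {"Authorization": f"Bearer {api_key}"}
```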
Making Requests
All API requests should be made to https://api.openrouterx.com/v1 with proper authentication.
Base URL
https://api.openrouterx.com/v1
Request Headers
Authorization: Bearer your_api_key_here
Content-Type: application/json
Chat Completions
Create a chat completion using Claude Code or Gemini Pro models.
Endpoint
POST https://api.openrouterx.com/v1/chat/completions
Request Body
{
  "model": "claude-code",
  "messages": [
    {
      "role": "system",
      "content": "You are a helpful coding assistant."
    },
    {
      "role": "user",
      "content": "Write a Python function to reverse a string"
    }
  ],
  "max_tokens": 1000,
  "temperature": 0.7
}
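If you are not using a client library, the same body can be assembled and sent as plain HTTP. A minimal sketch using only the Python standard library (the request is built but not sent here, since the key is a placeholder):

```python
import json
import urllib.request

# Construct the same request body shown above.
payload = {
    "model": "claude-code",
    "messages": [
        {"role": "system", "content": "You are a helpful coding assistant."},
        {"role": "user", "content": "Write a Python function to reverse a string"},
    ],
    "max_tokens": 1000,
    "temperature": 0.7,
}

# Build the POST request with the required headers.
req = urllib.request.Request(
    "https://api.openrouterx.com/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Authorization": "Bearer your_api_key_here",
        "Content-Type": "application/json",
    },
    method="POST",
)

# urllib.request.urlopen(req) would send it and return the JSON response.
```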
Parameters
model (string, required): ID of the model to use, e.g. "claude-code" or "gemini-pro".
messages (array, required): the conversation so far, as a list of objects with "role" and "content".
max_tokens (integer, optional): maximum number of tokens to generate in the response.
temperature (number, optional): sampling temperature; lower values produce more deterministic output.
Available Models
OpenRouterX provides access to premium AI models optimized for different use cases.
Claude Code: claude-code
Gemini Pro: gemini-pro
Python Examples
Basic Chat Completion
import openai
import os

client = openai.OpenAI(
    api_key=os.getenv("OPENROUTERX_API_KEY"),
    base_url="https://api.openrouterx.com/v1"
)

# Using Claude Code for programming tasks
response = client.chat.completions.create(
    model="claude-code",
    messages=[
        {
            "role": "system",
            "content": "You are an expert Python developer."
        },
        {
            "role": "user",
            "content": "Create a class for a binary search tree with insert and search methods."
        }
    ],
    max_tokens=1500,
    temperature=0.2
)

print(response.choices[0].message.content)
Streaming Response
stream = client.chat.completions.create(
    model="gemini-pro",
    messages=[
        {"role": "user", "content": "Explain quantum computing"}
    ],
    stream=True
)

for chunk in stream:
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content, end="")
JavaScript Examples
import OpenAI from 'openai';

const client = new OpenAI({
  apiKey: process.env.OPENROUTERX_API_KEY,
  baseURL: 'https://api.openrouterx.com/v1'
});

async function generateCode() {
  const response = await client.chat.completions.create({
    model: 'claude-code',
    messages: [
      {
        role: 'user',
        content: 'Write a JavaScript function to debounce user input'
      }
    ],
    max_tokens: 1000
  });
  console.log(response.choices[0].message.content);
}

generateCode();
cURL Examples
curl -X POST https://api.openrouterx.com/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTERX_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "claude-code",
    "messages": [
      {
        "role": "user",
        "content": "Write a bash script to backup a directory"
      }
    ],
    "max_tokens": 1000
  }'
Rate Limits
API rate limits vary by plan and model. Check response headers for current limits.
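A small helper can read those headers after each response. The X-RateLimit-* names below are a common convention and an assumption, not confirmed OpenRouterX header names; adjust them to match the headers your responses actually contain:

```python
def parse_rate_limit(headers: dict) -> dict:
    """Extract rate-limit info from response headers, case-insensitively.

    The X-RateLimit-* names are an assumed convention, not a documented
    guarantee; substitute the header names your responses really use.
    """
    lowered = {k.lower(): v for k, v in headers.items()}
    return {
        "limit": int(lowered.get("x-ratelimit-limit", 0)),
        "remaining": int(lowered.get("x-ratelimit-remaining", 0)),
        "reset": int(lowered.get("x-ratelimit-reset", 0)),
    }

# Example with hypothetical header values:
info = parse_rate_limit({
    "X-RateLimit-Limit": "60",
    "X-RateLimit-Remaining": "59",
    "X-RateLimit-Reset": "1700000000",
})
```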
Error Handling
The API returns standard HTTP status codes and JSON error responses.
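In practice that means retrying transient failures (such as 429 Too Many Requests or 5xx) with exponential backoff and surfacing everything else. A minimal sketch; `send` is a hypothetical callable standing in for an actual HTTP call:

```python
import time

# Status codes generally worth retrying: rate limits and transient server errors.
RETRYABLE = {429, 500, 502, 503}

def with_retries(send, max_attempts=4, base_delay=0.5):
    """Call send() -> (status, body), retrying retryable statuses
    with exponential backoff (base_delay, 2x, 4x, ...)."""
    for attempt in range(max_attempts):
        status, body = send()
        if status < 400:
            return body
        if status not in RETRYABLE or attempt == max_attempts - 1:
            raise RuntimeError(f"request failed with HTTP {status}")
        time.sleep(base_delay * (2 ** attempt))

# Simulated example: two 429 responses, then success.
responses = iter([(429, None), (429, None), (200, {"ok": True})])
result = with_retries(lambda: next(responses), base_delay=0.01)
```

The simulated `send` stands in for any function that performs the HTTP request and returns the status code and parsed body.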