Overview
The Concentrate AI API uses standard HTTP status codes to indicate success or failure. All error responses include a JSON body with details about what went wrong.
All errors follow this structure:
{
  "error": "Error type or message",
  "message": "Detailed explanation (optional)",
  "model": "provider/model (for provider errors only)"
}
Status Codes
| Status Code | Error Type | Description |
| --- | --- | --- |
| 200 | Success | Request completed successfully |
| 400 | Bad Request | Invalid request parameters |
| 401 | Unauthorized | Missing or invalid API key |
| 402 | Payment Required | Insufficient credits |
| 424 | Failed Dependency | Provider unavailable |
| 429 | Too Many Requests | Rate limit exceeded |
| 500 | Internal Server Error | Server-side error |
Error Types
400 Bad Request
Invalid or malformed request parameters.
{
  "error": "Bad Request",
  "message": "Invalid model name: 'invalid-model-xyz'"
}
Causes:
Model doesn’t exist
Typo in model name
Unsupported model
Solution:
Verify the model name against the list of available models
Correct any typos and use a supported model
{
  "error": "Bad Request",
  "message": "Missing required field: 'input'"
}
Causes:
Required parameter not provided
Empty or null value
Solution:
Include all required fields: model and input
Ensure values are not null or empty
{
  "error": "Bad Request",
  "message": "Invalid type for 'temperature': expected number, got string"
}
Causes:
Wrong data type for parameter
Invalid enum value
Solution:
Check parameter types in API reference
Use correct data types (string, number, boolean, etc.)
{
  "error": "Bad Request",
  "message": "temperature must be between 0 and 2, got 3.5"
}
Causes:
Value outside allowed range
Negative value for positive-only fields
Solution:
Review parameter constraints
temperature: 0.0 - 2.0
top_p: 0.0 - 1.0
max_output_tokens: > 0
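These ranges can be checked client-side before sending a request, catching 400s early. A sketch mirroring the constraints above; the `check_params` helper name is ours:

```python
def check_params(params):
    """Return a list of violations of the documented parameter ranges."""
    errors = []
    t = params.get("temperature")
    if t is not None and not (0.0 <= t <= 2.0):
        errors.append(f"temperature must be between 0 and 2, got {t}")
    p = params.get("top_p")
    if p is not None and not (0.0 <= p <= 1.0):
        errors.append(f"top_p must be between 0 and 1, got {p}")
    m = params.get("max_output_tokens")
    if m is not None and m <= 0:
        errors.append(f"max_output_tokens must be > 0, got {m}")
    return errors
```

An empty list means the request passes these checks; a non-empty list can be surfaced to the caller instead of making a doomed request.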
401 Unauthorized
Authentication failed or API key is invalid.
{
  "error": "Unauthorized",
  "message": "Invalid API key"
}
Causes:
API key is missing
API key is invalid or revoked
Wrong header format
Solutions:
Check Header Format
Verify API Key
# Correct
Authorization: Bearer sk-cn-v1-abc123xyz789
# Incorrect
Authorization: sk-cn-v1-abc123xyz789 # Missing "Bearer"
Authorization: Bearer: sk-cn-v1-abc123xyz789 # Extra colon
402 Payment Required
Insufficient credits to complete the request.
{
  "error": "Insufficient funds",
  "message": "Your account has insufficient credits. Please add credits to continue."
}
Causes:
Account credit balance too low
Request would exceed credit limit
Free tier exhausted
Solutions:
Check your balance:
Visit dashboard
View credit usage and remaining balance
Add credits:
Purchase additional credits
Upgrade your plan
Optimize requests:
Reduce max_output_tokens
Use cost-optimized models
Enable auto routing with routing: { strategy: "min", metric: "cost" }
Handle Insufficient Credits
Graceful Degradation
import requests

def make_request_with_budget(input_text):
    # Try with primary model
    response = requests.post(
        "https://api.concentrate.ai/v1/responses",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "gpt-5.2",
            "input": input_text,
            "max_output_tokens": 500
        }
    )
    if response.status_code == 402:
        # Fall back to the cheapest available model via auto routing
        response = requests.post(
            "https://api.concentrate.ai/v1/responses",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={
                "model": "auto",
                "input": input_text,
                "routing": {
                    "strategy": "min",
                    "metric": "cost"
                },
                "max_output_tokens": 300
            }
        )
    return response.json()
424 Failed Dependency
The requested provider is unavailable.
{
  "error": "Model 'openai/gpt-5.2' Errored",
  "message": "Request to openai/gpt-5.2 failed because provider was unavailable",
  "model": "openai/gpt-5.2"
}
Causes:
Provider experiencing outage
Model temporarily unavailable
Regional restrictions
Solutions:
Retry with exponential backoff
Specify alternative provider
Retry with Backoff
Try Alternative Provider
import time
import requests

def request_with_retry(payload, max_retries=3):
    for attempt in range(max_retries):
        response = requests.post(
            "https://api.concentrate.ai/v1/responses",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json=payload
        )
        if response.status_code != 424:
            return response.json()
        if attempt < max_retries - 1:
            # Exponential backoff: 1s, 2s, 4s
            wait_time = 2 ** attempt
            print(f"Provider error, retrying in {wait_time}s...")
            time.sleep(wait_time)
    raise Exception("All retry attempts failed")
Implement client-side retry logic or specify an alternative provider/model to handle provider unavailability.
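The alternative-provider path can be sketched as an ordered fallback list: on a 424 the client simply moves to the next candidate. The model names and the `request_with_fallback` helper are illustrative, not part of the API:

```python
import requests

def request_with_fallback(input_text, models=("openai/gpt-5.2", "auto")):
    """Try each model in order, falling through on 424 (provider unavailable)."""
    response = None
    for model in models:
        response = requests.post(
            "https://api.concentrate.ai/v1/responses",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json={"model": model, "input": input_text},
        )
        if response.status_code != 424:
            return response.json()
    raise Exception("All fallback models failed")
```

Ending the list with "auto" lets the router pick any provider that is still up.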
429 Too Many Requests
Rate limit exceeded.
{
  "error": "Rate limit exceeded",
  "message": "Too many requests. Please retry after 60 seconds.",
  "retry_after": 60
}
Causes:
Exceeded requests per minute limit
Too many tokens per minute
Burst limit exceeded
Solutions:
Implement rate limiting in your code
Use exponential backoff
Batch requests when possible
Upgrade plan for higher limits
Rate Limit Handling
Token Bucket
import time
import requests

def make_request_with_rate_limit(payload):
    while True:
        response = requests.post(
            "https://api.concentrate.ai/v1/responses",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json=payload
        )
        if response.status_code == 429:
            # Respect the Retry-After header, defaulting to 60s
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited. Waiting {retry_after}s...")
            time.sleep(retry_after)
            continue
        return response.json()
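The token-bucket approach spaces requests proactively instead of reacting to 429s after the fact. A minimal sketch; the rate and capacity values are illustrative, not the API's actual limits:

```python
import time

class TokenBucket:
    """Allow `rate` requests per second, with bursts up to `capacity`."""

    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def acquire(self):
        """Block until a token is available, then consume it."""
        while True:
            now = time.monotonic()
            # Refill tokens based on elapsed time, capped at capacity
            self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if self.tokens >= 1:
                self.tokens -= 1
                return
            time.sleep((1 - self.tokens) / self.rate)

# Call bucket.acquire() before each API request
bucket = TokenBucket(rate=5, capacity=10)
```

Set `rate` below your plan's requests-per-minute limit so bursts are absorbed locally rather than rejected by the server.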
500 Internal Server Error
Server-side error. These are rare and usually temporary.
{
  "error": "Internal server error",
  "message": "An unexpected error occurred. Please try again."
}
Causes:
Temporary server issue
Unexpected error condition
An internal error at a provider or intermediary (e.g., a Cloudflare outage)
Solutions:
Retry the request after a short delay
If persists, contact support
Best Practices
Implement comprehensive error handling
import requests
from typing import Optional

def make_safe_request(payload: dict) -> Optional[dict]:
    try:
        response = requests.post(
            "https://api.concentrate.ai/v1/responses",
            headers={"Authorization": "Bearer YOUR_API_KEY"},
            json=payload,
            timeout=30
        )
        # Handle different status codes
        if response.status_code == 200:
            return response.json()
        elif response.status_code == 400:
            print(f"Bad request: {response.json()['message']}")
        elif response.status_code == 401:
            print("Authentication failed - check API key")
        elif response.status_code == 402:
            print("Insufficient credits - please add funds")
        elif response.status_code == 424:
            print(f"Provider error: {response.json()['message']}")
            # Implement retry or try alternative provider
        elif response.status_code == 429:
            retry_after = int(response.headers.get('Retry-After', 60))
            print(f"Rate limited - retry after {retry_after}s")
        elif response.status_code == 500:
            print("Server error - will retry")
        return None
    except requests.exceptions.Timeout:
        print("Request timed out")
    except requests.exceptions.ConnectionError:
        print("Connection failed")
    except Exception as e:
        print(f"Unexpected error: {e}")
    return None
Log all requests and errors
async function makeRequestWithLogging(payload) {
  const startTime = Date.now();
  try {
    const response = await fetch("https://api.concentrate.ai/v1/responses", {
      method: "POST",
      headers: {
        "Authorization": "Bearer YOUR_API_KEY",
        "Content-Type": "application/json"
      },
      body: JSON.stringify(payload)
    });
    const data = await response.json();
    const duration = Date.now() - startTime;
    // Log all requests
    console.log({
      timestamp: new Date().toISOString(),
      status: response.status,
      model: payload.model,
      duration,
      success: response.ok
    });
    if (!response.ok) {
      // Log error details
      console.error({
        error: data.error,
        message: data.message,
        payload
      });
    }
    return data;
  } catch (error) {
    console.error({
      timestamp: new Date().toISOString(),
      error: error.message,
      payload
    });
    throw error;
  }
}
Use a circuit breaker pattern
from datetime import datetime, timedelta

class CircuitBreaker:
    def __init__(self, failure_threshold=5, timeout=60):
        self.failure_threshold = failure_threshold
        self.timeout = timeout
        self.failures = 0
        self.last_failure_time = None
        self.state = "closed"  # closed, open, half-open

    def call(self, func, *args, **kwargs):
        if self.state == "open":
            if datetime.now() - self.last_failure_time > timedelta(seconds=self.timeout):
                self.state = "half-open"
            else:
                raise Exception("Circuit breaker is open")
        try:
            result = func(*args, **kwargs)
            self.on_success()
            return result
        except Exception:
            self.on_failure()
            raise

    def on_success(self):
        self.failures = 0
        self.state = "closed"

    def on_failure(self):
        self.failures += 1
        self.last_failure_time = datetime.now()
        if self.failures >= self.failure_threshold:
            self.state = "open"

# Usage
breaker = CircuitBreaker()
try:
    response = breaker.call(make_api_request, payload)
except Exception as e:
    print(f"Request failed: {e}")
Validate payloads before sending requests
// Minimal Message shape, assumed here for illustration
interface Message {
  role: string;
  content: string;
}

interface RequestPayload {
  model: string;
  input: string | Message[];
  temperature?: number;
  max_output_tokens?: number;
}

function validatePayload(payload: RequestPayload): string[] {
  const errors: string[] = [];
  if (!payload.model) {
    errors.push("model is required");
  }
  if (!payload.input) {
    errors.push("input is required");
  }
  if (payload.temperature !== undefined) {
    if (payload.temperature < 0 || payload.temperature > 2) {
      errors.push("temperature must be between 0 and 2");
    }
  }
  if (payload.max_output_tokens !== undefined) {
    if (payload.max_output_tokens < 1) {
      errors.push("max_output_tokens must be positive");
    }
  }
  return errors;
}

// Usage
const payload = { model: "gpt-5.2", input: "Hello" };
const errors = validatePayload(payload);
if (errors.length > 0) {
  console.error("Validation errors:", errors);
} else {
  // Make request
}
Error Monitoring
Track and analyze errors in production:
Error Metrics
Error Analytics
from collections import Counter
from datetime import datetime

class ErrorTracker:
    def __init__(self):
        self.errors = []

    def log_error(self, status_code, error, message):
        self.errors.append({
            "timestamp": datetime.now(),
            "status_code": status_code,
            "error": error,
            "message": message
        })

    def get_error_summary(self):
        status_counts = Counter(e["status_code"] for e in self.errors)
        error_counts = Counter(e["error"] for e in self.errors)
        return {
            "total_errors": len(self.errors),
            "by_status": dict(status_counts),
            "by_type": dict(error_counts)
        }

tracker = ErrorTracker()

# Log errors
if response.status_code != 200:
    data = response.json()
    tracker.log_error(
        response.status_code,
        data.get("error"),
        data.get("message")
    )

# Get summary
print(tracker.get_error_summary())
Debugging Checklist
When encountering errors, check:
The API key is present and uses the Bearer &lt;key&gt; header format
The model name is spelled correctly and supported
Required fields (model, input) are present and non-empty
Parameter values have the correct type and fall within the allowed ranges
Your account has sufficient credits
Your request rate is within your plan's limits
Create Response: Main endpoint documentation
Auto Routing: Automatic provider/model routing
Rate Limits: Understanding rate limits
Support: Contact support for help