> **Documentation Index:** Fetch the complete documentation index at https://concentrate.ai/docs/llms.txt and use it to discover all available pages before exploring further.
## Overview

Concentrate provides access to 115+ models from 16 authors across 14 providers through a single, unified API. Use any model slug below in the `model` field of your request.
```json
{
  "model": "claude-opus-4-6",
  "input": "Hello, world!"
}
```
You can also pin a specific provider using the `provider/model` format:

```json
{
  "model": "anthropic/claude-opus-4-6",
  "input": "Hello, world!"
}
```
Pricing, context windows, and provider availability are kept current in the Model Fortress dashboard. You can also query the List Models endpoint for real-time data.
## Models by Author
### OpenAI (22 models)

#### Frontier

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `gpt-5.4` | GPT 5.4 | openai, azure | 1,050,000 | 128,000 |
| `gpt-5.4-pro` | GPT 5.4 Pro | openai, azure | 1,050,000 | 128,000 |
| `gpt-5.4-mini` | GPT 5.4 Mini | openai | 128,000 | 128,000 |
| `gpt-5.4-nano` | GPT 5.4 Nano | openai | 128,000 | 128,000 |
| `gpt-5.2` | GPT 5.2 | openai | 400,000 | 128,000 |
| `gpt-5.1` | GPT 5.1 | openai | 400,000 | 128,000 |
| `gpt-5` | GPT 5 | openai, azure | 400,000 | 128,000 |
| `gpt-5-mini` | GPT 5 Mini | openai | 400,000 | 128,000 |
| `gpt-5-nano` | GPT 5 Nano | openai | 400,000 | 128,000 |

#### Codex (Agentic Coding)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `gpt-5.3-codex` | GPT 5.3 Codex | openai, azure | 400,000 | 128,000 |
| `gpt-5.2-codex` | GPT 5.2 Codex | openai | 400,000 | 128,000 |
| `gpt-5.1-codex-max` | GPT 5.1 Codex Max | openai | 400,000 | 128,000 |
| `gpt-5.1-codex-mini` | GPT 5.1 Codex Mini | openai | 400,000 | 128,000 |

#### Reasoning

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `o1` | OpenAI o1 | openai | 200,000 | 100,000 |

#### Previous Generation

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `gpt-4.1` | GPT 4.1 | openai | 1,047,576 | 32,768 |
| `gpt-4.1-mini` | GPT 4.1 Mini | openai | 1,047,576 | 32,768 |
| `gpt-4o` | GPT 4o | openai, azure | 128,000 | 16,384 |
| `gpt-4o-mini` | GPT 4o Mini | openai | 128,000 | 16,384 |

#### Open-Weight (GPT-OSS)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `gpt-oss-120b` | GPT-OSS 120B | huggingface, azure, cloudflare, bedrock, bluelobster | 131,072 | 128,000 |
| `gpt-oss-20b` | GPT-OSS 20B | cloudflare, bedrock, bluelobster | 131,072 | 128,000 |
| `gpt-oss-safeguard-120b` | GPT-OSS Safeguard 120B | bedrock | 128,000 | 128,000 |
| `gpt-oss-safeguard-20b` | GPT-OSS Safeguard 20B | bedrock | 128,000 | 128,000 |
### Anthropic (9 models)

All Anthropic models support Zero Data Retention when accessed via the Anthropic provider.

#### Current Generation

| Model Slug | Name | Providers | Context | Max Output | ZDR |
|---|---|---|---|---|---|
| `claude-opus-4-6` | Claude Opus 4.6 | anthropic, vertex, azure, bedrock | 200,000 | 128,000 | Yes |
| `claude-sonnet-4-6` | Claude Sonnet 4.6 | anthropic, vertex, azure, bedrock | 200,000 | 64,000 | Yes |
| `claude-opus-4-5` | Claude Opus 4.5 | anthropic, azure, bedrock | 200,000 | 64,000 | Yes |
| `claude-sonnet-4-5` | Claude Sonnet 4.5 | anthropic, azure, bedrock | 200,000 | 64,000 | Yes |
| `claude-haiku-4-5` | Claude Haiku 4.5 | anthropic, azure, bedrock | 200,000 | 64,000 | Yes |

#### Previous Generation

| Model Slug | Name | Providers | Context | Max Output | ZDR |
|---|---|---|---|---|---|
| `claude-opus-4-1` | Claude Opus 4.1 | anthropic, bedrock | 200,000 | 32,000 | Yes |
| `claude-sonnet-4` | Claude Sonnet 4 | anthropic, bedrock | 200,000 | 64,000 | Yes |
| `claude-opus-4` | Claude Opus 4 | anthropic | 200,000 | 32,000 | Yes |
| `claude-haiku-3` | Claude Haiku 3 | anthropic, bedrock | 200,000 | 4,096 | Yes |
### Google (9 models)

#### Gemini

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `gemini-3.1-pro-preview` | Gemini 3.1 Pro Preview | vertex | 1,000,000 | 65,536 |
| `gemini-3.1-flash-lite-preview` | Gemini 3.1 Flash Lite Preview | vertex, ai-studio | 1,048,576 | 65,536 |
| `gemini-3-flash-preview` | Gemini 3 Flash Preview | vertex, ai-studio | 1,000,000 | 65,536 |
| `gemini-2.5-pro` | Gemini 2.5 Pro | vertex | 1,000,000 | 65,536 |
| `gemini-2.5-flash` | Gemini 2.5 Flash | vertex | 1,000,000 | 65,536 |
| `gemini-2.0-flash` | Gemini 2.0 Flash | vertex | 1,048,576 | 8,192 |

#### Gemma (Open-Weight)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `gemma-3-27b` | Gemma 3 27B | bedrock | 128,000 | 8,192 |
| `gemma-3-12b` | Gemma 3 12B | cloudflare, bedrock | 128,000 | 8,192 |
| `gemma-3-4b` | Gemma 3 4B IT | bedrock | 128,000 | 8,192 |
### xAI (12 models)

#### Grok 4.20

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `grok-4.20-multi-agent-0309` | Grok 4.20 Multi-Agent | xai | 2,000,000 | 131,072 |
| `grok-4.20-0309-reasoning` | Grok 4.20 Reasoning | xai | 2,000,000 | 131,072 |
| `grok-4.20-0309-non-reasoning` | Grok 4.20 Non Reasoning | xai | 2,000,000 | 131,072 |

#### Grok 4 & 4.1

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `grok-4` | Grok 4 | xai | 256,000 | 256,000 |
| `grok-4-0709` | Grok 4 (0709) | xai | 256,000 | 131,072 |
| `grok-4-1-fast-reasoning` | Grok 4.1 Fast Reasoning | xai | 2,000,000 | 131,072 |
| `grok-4-1-fast-non-reasoning` | Grok 4.1 Fast | xai | 2,000,000 | 131,072 |
| `grok-4-fast-reasoning` | Grok 4 Fast Reasoning | xai | 2,000,000 | 131,072 |
| `grok-4-fast-non-reasoning` | Grok 4 Fast | xai | 2,000,000 | 131,072 |

#### Grok 3

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `grok-3` | Grok 3 | xai | 131,072 | 131,072 |
| `grok-3-mini` | Grok 3 Mini | xai | 131,072 | 131,072 |

#### Code

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `grok-code-fast-1` | Grok Code Fast | xai | 256,000 | 131,072 |
### Meta (11 models)

#### Llama 4

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `llama-4-maverick` | Llama 4 Maverick | bedrock | 1,048,576 | 8,192 |
| `llama-4-scout` | Llama 4 Scout | cloudflare, bedrock | 131,000 | 8,192 |

#### Llama 3.x

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `llama-3.3-70b-instruct` | Llama 3.3 70B Instruct | cloudflare, bedrock | 128,000 | 8,192 |
| `llama-3.2-90b-instruct` | Llama 3.2 90B Instruct | bedrock | 128,000 | 8,192 |
| `llama-3.2-11b-instruct` | Llama 3.2 11B Instruct | bedrock | 128,000 | 8,192 |
| `llama-3.2-3b-instruct` | Llama 3.2 3B Instruct | cloudflare, bedrock | 128,000 | 8,192 |
| `llama-3.2-1b-instruct` | Llama 3.2 1B Instruct | huggingface, cloudflare, bedrock | 131,000 | 8,192 |
| `llama-3.1-70b-instruct` | Llama 3.1 70B Instruct | bedrock | 128,000 | 8,192 |
| `llama-3.1-8b-instruct` | Llama 3.1 8B Instruct | cloudflare, bedrock | 128,000 | 8,192 |

#### Llama 3

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `llama-3-70b-instruct` | Llama 3 70B Instruct | bedrock | 8,000 | 2,048 |
| `llama-3-8b-instruct` | Llama 3 8B Instruct | bedrock | 8,000 | 2,048 |
### Mistral (13 models)

#### Frontier

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `mistral-large-3` | Mistral Large 3 | mistral, bedrock | 256,000 | 32,000 |
| `mistral-medium-3.1` | Mistral Medium 3.1 | mistral | 128,000 | 32,000 |
| `mistral-medium-3` | Mistral Medium 3 | mistral | 128,000 | 32,000 |

#### Magistral (Reasoning)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `magistral-medium-1.2` | Magistral Medium 1.2 | mistral | 128,000 | 128,000 |
| `magistral-small-1.2` | Magistral Small 1.2 | mistral, bedrock | 128,000 | 128,000 |

#### Code

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `codestral` | Codestral | mistral | 128,000 | 32,000 |
| `devstral-2` | Devstral 2 | mistral | 256,000 | 32,000 |

#### Small & Efficient

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `mistral-small-3.2` | Mistral Small | mistral | 128,000 | 32,000 |
| `mistral-small-3.1` | Mistral Small 3.1 | cloudflare | 128,000 | 8,192 |
| `mistral-nemo` | Mistral Nemo | mistral | 128,000 | 32,000 |
| `ministral-3-14b` | Ministral 3 14B | mistral, bedrock | 256,000 | 32,000 |
| `ministral-3-8b` | Ministral 3 8B | mistral, bedrock | 256,000 | 32,000 |
| `ministral-3-3b` | Ministral 3 3B | mistral, bedrock | 256,000 | 32,000 |
### Alibaba Cloud (8 models)

#### Qwen 3.5

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `qwen3.5-27b` | Qwen3.5 27B | huggingface | 262,144 | 65,536 |

#### Qwen 3

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `qwen3-32b` | Qwen3 32B | bedrock | 32,000 | 8,192 |
| `qwen3-30b` | Qwen3 30B | cloudflare | 32,768 | 8,192 |
| `qwen3-coder-next` | Qwen3 Coder Next | bedrock | 256,000 | 65,536 |
| `qwen3-coder-30b-a3b` | Qwen3 Coder 30B A3B | bedrock | 256,000 | 65,536 |
| `qwen3-next-80b-a3b` | Qwen3 Next 80B A3B | bedrock | 128,000 | 8,192 |
| `qwen3-vl-235b-a22b` | Qwen3 VL 235B A22B | bedrock | 128,000 | 8,192 |

#### QwQ (Reasoning)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `qwq-32b` | QwQ 32B | cloudflare | 24,000 | 16,384 |
### DeepSeek (3 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `deepseek-r1` | DeepSeek R1 | bedrock | 128,000 | 32,768 |
| `deepseek-v3-2` | DeepSeek V3.2 | bedrock | 128,000 | 16,384 |
| `deepseek-r1-distill-32b` | DeepSeek R1 Distill 32B | cloudflare | 80,000 | 16,384 |
### z.ai (7 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `glm-5` | GLM-5 | vertex, zai | 200,000 | 128,000 |
| `glm-4.7` | GLM-4.7 | zai, bedrock | 200,000 | 128,000 |
| `glm-4.7-flash` | GLM-4.7 Flash | bedrock | 200,000 | 128,000 |
| `glm-4.6` | GLM-4.6 | zai | 200,000 | 128,000 |
| `glm-4.6v` | GLM-4.6v | zai | 131,072 | 32,768 |
| `glm-4.5` | GLM-4.5 | zai | 128,000 | 96,000 |
| `glm-4.5v` | GLM-4.5v | zai | 131,072 | 16,384 |
### Amazon (5 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `nova-premier` | Amazon Nova Premier | bedrock | 1,000,000 | 20,000 |
| `nova-pro` | Amazon Nova Pro | bedrock | 300,000 | 5,000 |
| `nova-lite` | Amazon Nova Lite | bedrock | 300,000 | 5,000 |
| `nova-2-lite` | Amazon Nova 2 Lite | bedrock | 256,000 | 5,000 |
| `nova-micro` | Amazon Nova Micro | bedrock | 128,000 | 5,000 |
### MiniMax (7 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `minimax-m2-7` | MiniMax M2.7 | minimax | 204,800 | 131,072 |
| `minimax-m2-7-highspeed` | MiniMax M2.7 Highspeed | minimax | 204,800 | 131,072 |
| `minimax-m2-5` | MiniMax M2.5 | minimax | 204,800 | 8,192 |
| `minimax-m2-5-highspeed` | MiniMax M2.5 Highspeed | minimax | 204,800 | 8,192 |
| `minimax-m2-1` | MiniMax M2.1 | minimax, bedrock | 204,800 | 8,192 |
| `minimax-m2-1-highspeed` | MiniMax M2.1 Highspeed | minimax | 204,800 | 8,192 |
| `minimax-m2` | MiniMax M2 | minimax, bedrock | 204,800 | 8,192 |
### Cohere (2 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `command-a` | Command A | cohere | 256,000 | 8,192 |
| `command-a-vision` | Command A Vision | cohere | 128,000 | 8,000 |
### Moonshot AI (2 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `kimi-k2-thinking` | Kimi K2 Thinking | bedrock | 256,000 | 65,535 |
| `kimi-k2-5` | Kimi K2.5 | bedrock | 256,000 | 33,000 |
### Writer (2 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `palmyra-x5` | Palmyra X5 | bedrock | 128,000 | 8,192 |
| `palmyra-x4` | Palmyra X4 | bedrock | 128,000 | 8,192 |
### AI21 Labs (2 models)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `jamba-1-5-large` | Jamba 1.5 Large | bedrock | 256,000 | 4,096 |
| `jamba-1-5-mini` | Jamba 1.5 Mini | bedrock | 256,000 | 4,096 |
### IBM (1 model)

| Model Slug | Name | Providers | Context | Max Output |
|---|---|---|---|---|
| `ibm-granite-micro` | IBM Granite Micro | cloudflare | 131,000 | 4,096 |
## Providers

Models are available across 14 providers. The same model may be offered by multiple providers with different pricing, latency, and feature support.

| Provider | Slug | Description |
|---|---|---|
| OpenAI | `openai` | Direct OpenAI API access |
| Anthropic | `anthropic` | Direct Anthropic API access (ZDR available) |
| Azure | `azure` | Microsoft Azure OpenAI Service |
| AWS Bedrock | `bedrock` | Amazon Bedrock managed inference |
| Google Vertex AI | `vertex` | Google Cloud Vertex AI |
| Google AI Studio | `ai-studio` | Google AI Studio |
| xAI | `xai` | xAI direct API access |
| Cohere | `cohere` | Cohere direct API access |
| Mistral | `mistral` | Mistral AI direct API access |
| Cloudflare | `cloudflare` | Cloudflare Workers AI |
| Hugging Face | `huggingface` | Hugging Face Inference API |
| z.ai | `zai` | Zhipu AI direct API access |
| MiniMax | `minimax` | MiniMax direct API access |
| Blue Lobster | `bluelobster` | Blue Lobster inference |
## Model Selection

There are three ways to specify which model to use:

| Method | Format | Example | Behavior |
|---|---|---|---|
| Model slug | `"model-slug"` | `"claude-opus-4-6"` | Auto-routes to the best provider |
| Provider-pinned | `"provider/model-slug"` | `"anthropic/claude-opus-4-6"` | Uses the specified provider, falls back to others |
| Auto | `"auto"` | `"auto"` | System selects the optimal model and provider |
See Routing for full details on provider selection, fallback, and optimization metrics.
## Querying Models Programmatically

Use the Models API to get real-time model data, including current pricing:
```shell
# List all models
curl https://api.concentrate.ai/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"

# Get a specific model
curl https://api.concentrate.ai/v1/models/claude-opus-4-6 \
  -H "Authorization: Bearer YOUR_API_KEY"

# List models by author
curl https://api.concentrate.ai/v1/models/authors/anthropic \
  -H "Authorization: Bearer YOUR_API_KEY"

# List models by provider
curl https://api.concentrate.ai/v1/models/providers/bedrock/models \
  -H "Authorization: Bearer YOUR_API_KEY"
```
- **List Models**: Full API reference for querying models
- **Routing**: How provider selection and fallback work
- **Zero Data Retention**: ZDR-certified models and providers
- **Quickstart**: Make your first API call