## Documentation Index
Fetch the complete documentation index at: https://concentrate.ai/docs/llms.txt
Use this file to discover all available pages before exploring further.
## Overview
The Concentrate AI API supports multi-modal inputs, allowing you to send images alongside text to vision-capable models. Images can be provided as base64 data URIs or public URLs, and the API normalizes the format across all providers automatically.

## Supported Models
The following models support image inputs:

| Provider | Models |
|---|---|
| OpenAI | GPT-5.2, GPT-5.1, GPT-5, GPT-5 Mini, GPT-5 Nano, GPT-4.1, GPT-4o |
| Anthropic | Claude Opus 4.6, Claude Opus 4.5, Claude Sonnet 4.5, Claude Sonnet 4, Claude Sonnet 3.7 |
| Google Vertex | Gemini 3 Pro, Gemini 3 Flash, Gemini 2.5 Pro, Gemini 2.5 Flash |
| Mistral | Pixtral Large, Mistral Medium, Mistral Small, Magistral Medium |
| Cohere | Command A Vision |
| AWS Bedrock | Claude models (via Bedrock), OpenAI models (via Bedrock) |
| Azure | GPT-5, GPT-4o, Claude models (via Azure) |
| Z.AI | GLM-4.6V, GLM-4.5V |
Use the Get Model endpoint to check whether a specific model supports image inputs by looking for the `image_processing` field in the provider configuration.

## Sending Images
Images are sent as content blocks within the `input` array. Use the `input_image` content type alongside `input_text` blocks.
### Image Input Format
- `type` (required): `"input_image"`
- `image_url` (required): Base64 data URI or public HTTPS URL
- `detail` (optional): `"low"`, `"high"`, or `"auto"` (default: `"auto"`)
  - `low`: Faster processing, lower token cost, suitable for simple images
  - `high`: Full-resolution analysis, higher token cost, better for detailed images
  - `auto`: Let the model decide based on the image
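The fields above can be sketched as helper functions that build content blocks. This is a minimal Python sketch: only the `type`, `image_url`, and `detail` fields come from this page; the helper names and surrounding payload shape are illustrative assumptions.

```python
import base64

def image_block(image_url: str, detail: str = "auto") -> dict:
    """Build an input_image content block; detail is optional."""
    if detail not in ("low", "high", "auto"):
        raise ValueError(f"invalid detail: {detail!r}")
    return {"type": "input_image", "image_url": image_url, "detail": detail}

def text_block(text: str) -> dict:
    """Build an input_text content block."""
    return {"type": "input_text", "text": text}

# Encode image bytes as a base64 data URI (stand-in bytes here;
# in practice read them from a file).
png_bytes = b"\x89PNG\r\n\x1a\n"
data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode()

payload = {
    "model": "gpt-4o",  # any vision-capable model from the table above
    "input": [{
        "role": "user",
        "content": [
            text_block("What is in this image?"),
            image_block(data_uri, detail="low"),
        ],
    }],
}
```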
### `detail` Parameter Support

The `detail` parameter controls image resolution for token estimation, but provider support varies depending on the underlying API format:
| Provider | Accepts Detail | Notes |
|---|---|---|
| OpenAI | Yes | Passed through via the Responses API format |
| xAI | Yes | Passed through via the Responses API format |
| Cohere | Yes | Explicitly mapped to the Cohere API |
| Azure | Depends | Forwarded for OpenAI-format models; not applicable for Anthropic-format models |
| Anthropic | N/A | Anthropic’s API does not have a detail parameter |
| Google Vertex | N/A | Gemini’s API does not have a detail parameter |
| AWS Bedrock | N/A | Bedrock’s native image format does not have a detail parameter |
| Mistral | N/A | Mistral’s Conversations API does not have a detail parameter |
| Z.AI | N/A | Z.AI does not have a detail parameter |
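Since only some providers accept `detail`, a multi-provider client might strip the field where it is ignored. A hedged sketch based on the table above; the API already normalizes formats for you, so this is purely illustrative, and the provider strings are assumptions:

```python
# Providers whose underlying APIs accept a detail parameter (per the table
# above). Azure is "Depends": forwarded only for OpenAI-format models.
DETAIL_AWARE = {"openai", "xai", "cohere"}

def normalize_block(provider: str, block: dict) -> dict:
    """Drop `detail` from an input_image block when the provider ignores it."""
    if block.get("type") != "input_image":
        return block
    if provider.lower() in DETAIL_AWARE:
        return block
    return {k: v for k, v in block.items() if k != "detail"}

blk = {
    "type": "input_image",
    "image_url": "https://example.com/cat.png",
    "detail": "low",
}
```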
## Examples
### Base64 Image
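A sketch of a base64 request body in Python. The content-block fields come from this page; the model identifier and the idea of POSTing the serialized body to the Create Response endpoint are assumptions for illustration.

```python
import base64
import json

# Read and encode a local image as a base64 data URI.
# (Stand-in bytes here; in practice: open("photo.png", "rb").read())
png_bytes = b"\x89PNG\r\n\x1a\n"
data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode()

payload = {
    "model": "claude-sonnet-4-5",  # assumed model identifier
    "input": [{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "Describe this image."},
            {"type": "input_image", "image_url": data_uri},
        ],
    }],
}

body = json.dumps(payload)  # POST this to the Create Response endpoint
```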
### URL Image

You can also pass a publicly accessible image URL:
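A sketch of the same request with a public HTTPS URL in place of the data URI (the model identifier and example URL are placeholders):

```python
payload = {
    "model": "gpt-4o",  # any vision-capable model from the table above
    "input": [{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "What landmark is shown here?"},
            # A public HTTPS URL works in place of a base64 data URI.
            {"type": "input_image",
             "image_url": "https://example.com/photos/landmark.jpg"},
        ],
    }],
}
```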
### Multiple Images

Send multiple images in a single request:
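Multiple `input_image` blocks can sit in one content array. A sketch (URLs and model identifier are placeholders; `detail: "low"` is used here only to illustrate the cost-saving option described above):

```python
urls = [
    "https://example.com/page-1.png",
    "https://example.com/page-2.png",
    "https://example.com/page-3.png",
]

# One text block followed by one input_image block per URL.
content = [{"type": "input_text", "text": "Compare these scanned pages."}]
content += [{"type": "input_image", "image_url": u, "detail": "low"} for u in urls]

payload = {
    "model": "gemini-2.5-flash",  # assumed model identifier
    "input": [{"role": "user", "content": content}],
}
```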
### With Streaming

Multi-modal requests work with streaming:
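A stdlib-only sketch of a streaming multi-modal request. The endpoint URL, the `stream` flag name, the auth header, and the server-sent-event framing are all assumptions not confirmed by this page; see the Streaming docs for the actual protocol.

```python
import json
from urllib.request import Request, urlopen

API_URL = "https://api.concentrate.ai/v1/responses"  # assumed endpoint path

payload = {
    "model": "gpt-4o",
    "stream": True,  # assumed flag name, mirroring the Streaming docs
    "input": [{
        "role": "user",
        "content": [
            {"type": "input_text", "text": "Describe this image."},
            {"type": "input_image", "image_url": "https://example.com/cat.png"},
        ],
    }],
}

def stream_response(api_key: str):
    """POST the payload and yield raw response lines as they arrive."""
    req = Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",  # assumed auth scheme
            "Content-Type": "application/json",
        },
    )
    with urlopen(req) as resp:
        for line in resp:
            yield line.decode().rstrip()
```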
## Supported Formats

| Format | MIME Type | Data URI Prefix |
|---|---|---|
| PNG | image/png | data:image/png;base64, |
| JPEG | image/jpeg | data:image/jpeg;base64, |
| GIF | image/gif | data:image/gif;base64, |
| WebP | image/webp | data:image/webp;base64, |
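The prefix column above maps directly from file type, so encoding can be table-driven. A small sketch (the helper name is an assumption; the prefixes are taken verbatim from the table):

```python
import base64

# Data URI prefixes for the supported formats, from the table above.
DATA_URI_PREFIX = {
    "png": "data:image/png;base64,",
    "jpg": "data:image/jpeg;base64,",
    "jpeg": "data:image/jpeg;base64,",
    "gif": "data:image/gif;base64,",
    "webp": "data:image/webp;base64,",
}

def to_data_uri(filename: str, raw: bytes) -> str:
    """Encode raw image bytes as a data URI, keyed on file extension."""
    ext = filename.rsplit(".", 1)[-1].lower()
    if ext not in DATA_URI_PREFIX:
        # Matches the "Invalid image format" error described below
        raise ValueError(f"unsupported format: {ext}")
    return DATA_URI_PREFIX[ext] + base64.b64encode(raw).decode()
```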
## Limits

Image limits vary by provider and model. Exceeding these limits will return a `400` error.
| Provider | Max Images Per Request | Max Total Size |
|---|---|---|
| OpenAI (GPT-5, GPT-4o) | 500 | 50 MB |
| Anthropic (Claude) | 100 | 32 MB |
| Google Vertex (Gemini 3) | 900 | 7 MB |
| Google Vertex (Gemini 2.5) | 3,000 | 7 MB |
| Cohere (Command A Vision) | 20 | 20 MB |
| Mistral (Pixtral Large) | 8 | 10 MB |
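Because exceeding a limit costs a round trip and a `400`, a client may want to validate before sending. A sketch using the limits from the table above (the provider keys and interpretation of "Max Total Size" as decoded bytes are assumptions):

```python
# (max images per request, max total size in bytes), from the table above.
LIMITS = {
    "openai": (500, 50 * 1024**2),
    "anthropic": (100, 32 * 1024**2),
    "cohere": (20, 20 * 1024**2),
    "mistral": (8, 10 * 1024**2),
}

def check_images(provider: str, image_sizes: list[int]) -> None:
    """Raise before sending if a request would exceed the provider's limits."""
    max_images, max_bytes = LIMITS[provider]
    if len(image_sizes) > max_images:
        raise ValueError(f"too many images: {len(image_sizes)} > {max_images}")
    if sum(image_sizes) > max_bytes:
        raise ValueError("total image size exceeds limit")
```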
Image tokens are calculated using per-provider algorithms. Higher-resolution images consume more tokens. For providers that support it, consider setting `detail` to `"low"` to reduce costs.

## Error Handling
Common errors when using image inputs:

| Error | Cause |
|---|---|
| Model does not support image inputs | The selected model does not have vision capabilities |
| Too many images | Request exceeds the model’s `max_images_per_request` limit |
| Image size exceeds limit | Total image data exceeds the model’s `max_total_size` limit |
| Invalid image format | Image is not PNG, JPEG, GIF, or WebP |
| Invalid image URL | URL is not a valid HTTP/HTTPS URL or data URI |
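A client might branch on the error messages above when a request fails. A hedged sketch: the `{"error": {"message": ...}}` response shape is an assumption not documented on this page, and only the error strings come from the table.

```python
import json

# Documented image-input errors, from the table above.
KNOWN_ERRORS = [
    "Model does not support image inputs",
    "Too many images",
    "Image size exceeds limit",
    "Invalid image format",
    "Invalid image URL",
]

def classify_image_error(status: int, body: str) -> str:
    """Map a 400 response body to one of the documented image-input errors."""
    if status != 400:
        return "other"
    # Assumed response shape: {"error": {"message": "..."}}
    message = json.loads(body).get("error", {}).get("message", "")
    for known in KNOWN_ERRORS:
        if known.lower() in message.lower():
            return known
    return "other"
```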
## Related Documentation

- **Request Parameters**: Complete parameter reference
- **Streaming**: Use multi-modal with streaming
- **Create Response**: Main endpoint documentation
- **List Models**: Check model capabilities