Learn how the platform selects AI models and how to customize model behavior in your apps.
You don't need to configure anything. When your app makes an AI request, the platform automatically picks the best available model based on the user's subscription plan.
- Most users get a standard tier model by default. These models are fast, capable, and cost-effective for most tasks.
- Super subscribers get an advanced tier model by default. These are the most powerful models available, with the highest quality output.
In the default (automatic) mode, the platform gracefully handles capability mismatches. If the primary model doesn't support a feature your app needs (for example, a specific attachment type or web search), the platform automatically tries other models in the same tier before falling back to a lower tier. Your user's request still succeeds — they never see an error.
This fallback only applies when you let the platform choose the model. If you explicitly set a model, provider, or tier constraint, the platform respects that choice exactly — and returns an error if the selected model can't handle the request. See Customizing AI behavior below.
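The two behaviors above can be sketched roughly as follows. This is a simplified illustration of the selection logic as described, not the platform's actual internals; the catalog entries, capability strings, and the `selectModel` helper are all hypothetical.

```typescript
// Simplified sketch of automatic fallback vs. explicit model selection.
// The catalog, capability names, and selectModel helper are illustrative only.
type Tier = "lite" | "standard" | "advanced";

interface ModelInfo {
  name: string;
  tier: Tier;
  capabilities: string[]; // e.g. "web-search", "pdf-attachment" (hypothetical)
}

const catalog: ModelInfo[] = [
  { name: "model-a", tier: "advanced", capabilities: [] },
  { name: "model-b", tier: "advanced", capabilities: ["web-search"] },
  { name: "model-c", tier: "standard", capabilities: ["web-search"] },
];

function selectModel(
  required: string[],
  explicitModel?: string,
  userTier: Tier = "advanced",
): ModelInfo {
  if (explicitModel) {
    // Explicit choice: respected exactly; error if it can't handle the request.
    const m = catalog.find((m) => m.name === explicitModel);
    if (!m || !required.every((c) => m.capabilities.includes(c))) {
      throw new Error(`model ${explicitModel} cannot handle this request`);
    }
    return m;
  }
  // Automatic mode: try other models in the same tier, then lower tiers.
  const order: Tier[] = ["advanced", "standard", "lite"];
  for (const tier of order.slice(order.indexOf(userTier))) {
    const m = catalog.find(
      (m) =>
        m.tier === tier && required.every((c) => m.capabilities.includes(c)),
    );
    if (m) return m;
  }
  throw new Error("no model available");
}
```

In automatic mode, a request needing `"web-search"` would skip `model-a` and land on `model-b`; the same request with `model-a` set explicitly throws instead of falling back.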
Every AI model on the platform belongs to one of three tiers. Tiers reflect the balance between speed, cost, and quality.
| Tier | Description | Availability |
|---|---|---|
| Lite | The fastest and cheapest models. Great for simple tasks like formatting, classification, or quick summaries. | All users |
| Standard | Balanced models that handle most tasks well. This is the default for most users and covers the vast majority of use cases. | All users |
| Advanced | The most capable models with the highest quality output. Best for complex reasoning, creative writing, and demanding tasks. | Super subscribers |
If the defaults don't fit your use case, you have three options to control which model your app uses. These can be set at the hook level (applies to all requests) or per individual request.
If you set `model`, `modelProvider`, or `modelTier`, the platform uses exactly what you asked for. If that model can't handle the request (for example, it doesn't support the required capabilities), the request fails with an error instead of silently falling back to another model.

- `modelTier`: Request a specific quality level without locking into a particular model. The platform picks the best model within that tier.
- `modelProvider`: Prefer a specific AI provider. The platform chooses the best model from that provider within the user's available tier.
- `model`: Use an exact model by name. This is the most specific option and gives you full control, but it tightly couples your app to that model; `modelTier` or `modelProvider` is more future-proof.

All three options can also be passed per request, overriding the hook-level setting for that specific call.
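The hook-level vs. per-request precedence can be sketched like this. The `AIOptions` shape and `resolveOptions` helper are illustrative names chosen for this example, not the platform's actual API; only the three option names come from the docs above.

```typescript
// Sketch: per-request options override hook-level options for that call only.
// AIOptions and resolveOptions are hypothetical names, not the real API surface.
interface AIOptions {
  model?: string;
  modelProvider?: string;
  modelTier?: "lite" | "standard" | "advanced";
}

function resolveOptions(
  hookLevel: AIOptions,
  perRequest: AIOptions = {},
): AIOptions {
  // Later spread wins: per-request values shadow hook-level defaults.
  return { ...hookLevel, ...perRequest };
}

// Hook-level default: prefer a provider within the user's available tier.
const hookOpts: AIOptions = { modelProvider: "Anthropic" };

// One specific call pins an exact model, overriding the provider preference.
const resolved = resolveOptions(hookOpts, { model: "Claude Opus 4.6" });
```

Other requests made through the same hook still use only `{ modelProvider: "Anthropic" }`; the override applies to the single call that passed it.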
Reasoning effort controls how much "thinking" a text model does before responding. Higher effort produces more thoughtful, accurate answers but takes longer and uses more credits. This option only applies to text models (useAIChat, useAIText, useAIObject) and has no effect on image, video, or speech generation.
- `minimal`: Fastest. Best for trivial tasks like formatting or classification.
- `low`: Light reasoning. Good for straightforward questions and simple generation.
- `medium`: Balanced. The default for most models. Handles most tasks well.
- `high`: Maximum depth. Best for complex analysis, math, or multi-step reasoning.
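As a small sketch of how an app might model this option, the type below encodes the four effort levels with `medium` as the default. The `TextRequestOptions` shape and `withDefaults` helper are hypothetical; only the option values and the default come from the docs above.

```typescript
// Sketch: reasoning effort as a text-model request option.
// TextRequestOptions and withDefaults are illustrative, not the real API.
type ReasoningEffort = "minimal" | "low" | "medium" | "high";

interface TextRequestOptions {
  reasoningEffort?: ReasoningEffort;
}

function withDefaults(
  opts: TextRequestOptions,
): Required<TextRequestOptions> {
  // "medium" is the default for most models per the docs above.
  return { reasoningEffort: opts.reasoningEffort ?? "medium" };
}
```

So a request that omits the option behaves as `medium`, while e.g. `{ reasoningEffort: "high" }` trades latency and credits for deeper reasoning on that call.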
Here are all the models currently available on the platform, grouped by modality.
Used by useAIChat, useAIText, and useAIObject for conversations, text generation, and structured data extraction.
| Model | Provider | Tier | Description |
|---|---|---|---|
| Gemini 3.1 Flash Lite | Google | Lite | Google's most efficient model. Ultra-fast and cost-effective. |
| GPT-5.4 Nano | OpenAI | Lite | OpenAI's fastest and most affordable GPT-5.4 variant for lightweight tasks. |
| Gemini 3 Flash | Google | Standard | Google's most balanced model. Quick, accurate, affordable. |
| GPT-5.4 Mini | OpenAI | Standard | A faster, more cost-efficient version of GPT-5.4 for well-defined tasks. |
| Claude Haiku 4.5 | Anthropic | Standard | Anthropic's fastest model with near-frontier intelligence. |
| Gemini 3.1 Pro | Google | Advanced | Google's best model with world-class multimodal understanding. |
| GPT-5.4 | OpenAI | Advanced | OpenAI's latest model for coding and agentic tasks across industries. |
| Claude Sonnet 4.6 | Anthropic | Advanced | Anthropic's mid tier. Strong accuracy, fast. |
| Claude Opus 4.6 | Anthropic | Advanced | Anthropic's flagship. Highest quality. |
Used for generating and editing images within your apps.
| Model | Provider | Tier | Description |
|---|---|---|---|
| Gemini 2.5 Flash Image (Nano Banana) | Google | Standard | Google's lightweight image model. Fast and affordable. |
| Gemini 3.1 Flash Image (Nano Banana 2) | Google | Standard | Google's fast tier. Quick, good quality. |
| GPT Image 1.5 | OpenAI | Standard | OpenAI's standard tier. Balanced quality. |
| Grok Imagine Image | xAI | Standard | xAI's image model for generation and editing. |
| Gemini 3 Pro Image (Nano Banana Pro) | Google | Advanced | Google's flagship. Best quality, higher resolution. |
Used for generating videos from text or images.
| Model | Provider | Tier | Description |
|---|---|---|---|
| Veo 3.1 Fast | Google | Standard | Fast video generation with audio. Powered by Google Veo. |
| Grok Imagine Video | xAI | Standard | xAI's video model for generation, image-to-video, and editing. |
| Veo 3.1 | Google | Advanced | High quality video generation with audio. Powered by Google Veo. |
Used for text-to-speech generation.
| Model | Provider | Tier | Description |
|---|---|---|---|
| GPT-4o Mini TTS | OpenAI | Standard | OpenAI's fast tier. Natural, cost-effective. |