
AI Models

Learn how the platform selects AI models and how to customize model behavior in your apps.

How it works by default

You don't need to configure anything. When your app makes an AI request, the platform automatically picks the best available model based on the user's subscription plan.

Free & All access users

Get the standard tier model by default. These models are fast, capable, and cost-effective for most tasks.

Super users

Get the advanced tier model by default. These are the most powerful models available, with the highest quality output.

Graceful fallback

In the default (automatic) mode, the platform gracefully handles capability mismatches. If the primary model doesn't support a feature your app needs (for example, a specific attachment type or web search), the platform automatically tries other models in the same tier before falling back to a lower tier. Your user's request still succeeds — they never see an error.

This fallback only applies when you let the platform choose the model. If you explicitly set a model, provider, or tier constraint, the platform respects that choice exactly — and returns an error if the selected model can't handle the request. See Customizing AI behavior below.
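
The fallback behavior above can be sketched as a small selection routine. This is purely illustrative — the model list, capability flags, and function names here are our own, not the platform's API.

```typescript
// Illustrative sketch of tier fallback. Model names, the capability set,
// and both function names are hypothetical, not the platform's API.
type Tier = "advanced" | "standard" | "lite";

interface Model {
  name: string;
  tier: Tier;
  capabilities: Set<string>; // e.g. "web-search", "pdf-attachment"
}

// Tiers ordered from highest to lowest, so fallback walks downward.
const TIER_ORDER: Tier[] = ["advanced", "standard", "lite"];

// Automatic mode: try every model in the starting tier, then lower
// tiers, so a capability mismatch never surfaces as a user-facing error.
function pickModelAutomatic(
  models: Model[],
  startTier: Tier,
  needed: string[]
): Model | undefined {
  const start = TIER_ORDER.indexOf(startTier);
  for (const tier of TIER_ORDER.slice(start)) {
    const match = models.find(
      (m) => m.tier === tier && needed.every((c) => m.capabilities.has(c))
    );
    if (match) return match;
  }
  return undefined;
}

// Pinned mode: an explicit model choice is respected exactly, so a
// capability mismatch becomes an error instead of a silent fallback.
function pickModelPinned(
  models: Model[],
  name: string,
  needed: string[]
): Model {
  const model = models.find((m) => m.name === name);
  if (!model || !needed.every((c) => model.capabilities.has(c))) {
    throw new Error(`Model ${name} cannot handle this request`);
  }
  return model;
}
```

Note the asymmetry: the same capability mismatch that silently falls back in automatic mode throws in pinned mode.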

Model tiers

Every AI model on the platform belongs to one of three tiers. Tiers reflect the balance between speed, cost, and quality.

Lite

The fastest and cheapest models. Great for simple tasks like formatting, classification, or quick summaries.

Available to all users

Standard

Balanced models that handle most tasks well. This is the default for most users and covers the vast majority of use cases.

Available to all users

Advanced

The most capable models with the highest quality output. Best for complex reasoning, creative writing, and demanding tasks.

Available to Super subscribers
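
The tier and plan rules above amount to two small lookup tables. A minimal sketch, assuming the plan names from this page — the constant names are ours, not the platform's:

```typescript
// Sketch of the tier/plan rules described above. The constant names are
// illustrative; only the plan and tier values come from this page.
type Tier = "lite" | "standard" | "advanced";
type Plan = "free" | "all-access" | "super";

// Lite and Standard are available to everyone; Advanced requires Super.
const AVAILABLE_TIERS: Record<Plan, Tier[]> = {
  free: ["lite", "standard"],
  "all-access": ["lite", "standard"],
  super: ["lite", "standard", "advanced"],
};

// Default tier picked when the app does not specify one.
const DEFAULT_TIER: Record<Plan, Tier> = {
  free: "standard",
  "all-access": "standard",
  super: "advanced",
};
```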

Customizing AI behavior

If the defaults don't fit your use case, you have three options to control which model your app uses. These can be set at the hook level (applies to all requests) or per individual request.

Option 1: Choose a tier with modelTier

Request a specific quality level without locking into a particular model. The platform picks the best model within that tier.

// Use a fast, cheap model for simple tasks
const { text } = useAIText({ modelTier: "lite" })

// Always use the best available model
const { messages } = useAIChat({ modelTier: "advanced" })

Option 2: Prefer a provider with modelProvider

Prefer a specific AI provider. The platform will choose the best model from that provider within the user's available tier.

// Prefer Google models
const { object } = useAIObject({ modelProvider: "google" })

Option 3: Pin a specific model with model

Use an exact model by name. This is the most specific option and gives you full control, but it tightly couples your app to that model.

// Pin to a specific model
const { text } = useAIText({ model: "gemini-3-flash-preview" })

Per-request overrides

All three options can also be passed per request, overriding the hook-level setting for that specific call.

// Default to standard, but use advanced for this one request
const { submit } = useAIChat()
submit("Analyze this data", { modelTier: "advanced" })
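
The override rule above is a field-by-field merge: a per-request option wins over the hook-level setting for that field, and everything else carries through. A minimal sketch — the merge helper is ours, only the option names mirror this page:

```typescript
// Hypothetical sketch of per-request override resolution. The option
// names mirror the docs; resolveOptions itself is illustrative.
interface AIOptions {
  model?: string;
  modelProvider?: string;
  modelTier?: "lite" | "standard" | "advanced";
}

// Per-request options win field-by-field over hook-level settings;
// unspecified fields fall through to the hook-level value.
function resolveOptions(
  hookLevel: AIOptions,
  perRequest: AIOptions = {}
): AIOptions {
  return { ...hookLevel, ...perRequest };
}
```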

Reasoning effort

Reasoning effort controls how much "thinking" a text model does before responding. Higher effort produces more thoughtful, accurate answers but takes longer and uses more credits. This option only applies to text models (useAIChat, useAIText, useAIObject) and has no effect on image, video, or speech generation.

minimal

Fastest. Best for trivial tasks like formatting or classification.

low

Light reasoning. Good for straightforward questions and simple generation.

medium

Balanced. The default for most models. Handles most tasks well.

high

Maximum depth. Best for complex analysis, math, or multi-step reasoning.

// Use minimal reasoning for a simple task
const { text } = useAIText({ reasoningEffort: "minimal" })

Available models

Here are all the models currently available on the platform, grouped by modality.

Text models

Used by useAIChat, useAIText, and useAIObject for conversations, text generation, and structured data extraction.

| Model | Provider | Tier | Description |
| --- | --- | --- | --- |
| Gemini 3.1 Flash Lite | Google | Lite | Google's most efficient model. Ultra-fast and cost-effective. |
| GPT-5.4 Nano | OpenAI | Lite | OpenAI's fastest and most affordable GPT-5.4 variant for lightweight tasks. |
| Gemini 3 Flash | Google | Standard | Google's most balanced model. Quick, accurate, affordable. |
| GPT-5.4 Mini | OpenAI | Standard | A faster, more cost-efficient version of GPT-5.4 for well-defined tasks. |
| Claude Haiku 4.5 | Anthropic | Standard | Anthropic's fastest model with near-frontier intelligence. |
| Gemini 3.1 Pro | Google | Advanced | Google's best model with world-class multimodal understanding. |
| GPT-5.4 | OpenAI | Advanced | OpenAI's latest model for coding and agentic tasks across industries. |
| Claude Sonnet 4.6 | Anthropic | Advanced | Anthropic's mid-tier model. Strong accuracy, fast. |
| Claude Opus 4.6 | Anthropic | Advanced | Anthropic's flagship. Highest quality. |

Image models

Used for generating and editing images within your apps.

| Model | Provider | Tier | Description |
| --- | --- | --- | --- |
| Gemini 2.5 Flash Image (Nano Banana) | Google | Standard | Google's lightweight image model. Fast and affordable. |
| Gemini 3.1 Flash Image (Nano Banana 2) | Google | Standard | Google's fast tier. Quick, good quality. |
| GPT Image 1.5 | OpenAI | Standard | OpenAI's standard tier. Balanced quality. |
| Grok Imagine Image | xAI | Standard | xAI's image model for generation and editing. |
| Gemini 3 Pro Image (Nano Banana Pro) | Google | Advanced | Google's flagship. Best quality, higher res. |

Video models

Used for generating videos from text or images.

| Model | Provider | Tier | Description |
| --- | --- | --- | --- |
| Veo 3.1 Fast | Google | Standard | Fast video generation with audio. Powered by Google Veo. |
| Grok Imagine Video | xAI | Standard | xAI's video model for generation, image-to-video, and editing. |
| Veo 3.1 | Google | Advanced | High quality video generation with audio. Powered by Google Veo. |

Speech models

Used for text-to-speech generation.

| Model | Provider | Tier | Description |
| --- | --- | --- | --- |
| GPT-4o Mini TTS | OpenAI | Standard | OpenAI's fast tier. Natural, cost-effective. |

Ready to start building?

Now that you understand how AI models work on the platform, it's time to create something amazing.