
# Providers & Models

clido supports multiple LLM providers. Each provider is configured as a profile in `config.toml`, and you can switch the provider and model at runtime with the `--provider` and `--model` flags.
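For example, a `config.toml` with two profiles might look like the following (a minimal sketch; the profile names and the Groq model id here are illustrative, not defaults):

```toml
# ~/.config/clido/config.toml (illustrative)
[profile.default]
provider = "anthropic"
model = "claude-sonnet-4-5"
api_key_env = "ANTHROPIC_API_KEY"

[profile.fast]
provider = "groq"
model = "llama-3.1-8b-instant"
api_key_env = "GROQ_API_KEY"
```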

| Provider | Key in config | API key variable | Notes |
| --- | --- | --- | --- |
| Anthropic | `anthropic` | `ANTHROPIC_API_KEY` | Default; supports prompt caching |
| OpenAI | `openai` | `OPENAI_API_KEY` | Any OpenAI-API endpoint |
| OpenRouter | `openrouter` | `OPENROUTER_API_KEY` | Multi-model aggregator |
| Google Gemini | `gemini` | `GEMINI_API_KEY` | |
| DeepSeek | `deepseek` | `DEEPSEEK_API_KEY` | |
| Mistral | `mistral` | `MISTRAL_API_KEY` | |
| xAI (Grok) | `xai` | `XAI_API_KEY` | |
| Groq | `groq` | `GROQ_API_KEY` | |
| Together AI | `togetherai` | `TOGETHER_API_KEY` | |
| Fireworks AI | `fireworks` | `FIREWORKS_API_KEY` | |
| Cerebras | `cerebras` | `CEREBRAS_API_KEY` | |
| Perplexity | `perplexity` | `PERPLEXITY_API_KEY` | |
| MiniMax | `minimax` | `MINIMAX_API_KEY` | MiniMax-M2.7 coding model; 204k context |
| Alibaba Cloud | `alibabacloud` | `DASHSCOPE_API_KEY` | DashScope / Qwen models |
| Kimi (Moonshot) | `kimi` | `MOONSHOT_API_KEY` | |
| Kimi Code | `kimi-code` | `KIMI_CODE_API_KEY` | |
| Local (Ollama) | `local` | (none) | No API key required |

## Anthropic

The default provider. Connects to `https://api.anthropic.com`.

```toml
[profile.default]
provider = "anthropic"
model = "claude-sonnet-4-5"
api_key_env = "ANTHROPIC_API_KEY"
```
| Model | Description |
| --- | --- |
| `claude-sonnet-4-5` | Best balance of capability and cost (default) |
| `claude-3-opus-20240229` | Highest capability, highest cost |
| `claude-haiku-4-5` | Fastest and cheapest; good for simple tasks |

When using Anthropic, clido automatically enables prompt caching for the system prompt and long conversation histories. Cache hits are billed at ~10% of the normal input token price, which significantly reduces costs for long sessions.
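As a rough sketch of the savings, assuming cached input tokens are billed at 10% of the normal input price (the prices and token counts below are illustrative, not clido's actual billing logic):

```python
def session_input_cost(tokens: int, cache_hit_tokens: int,
                       price_per_mtok: float) -> float:
    """Estimate input cost when cache hits are billed at 10% of the
    normal input-token price (illustrative numbers only)."""
    uncached = tokens - cache_hit_tokens
    cached = cache_hit_tokens * 0.10
    return (uncached + cached) * price_per_mtok / 1_000_000

# A long session: 500k input tokens, 400k of them served from cache,
# at an example price of $3 per million input tokens.
full = session_input_cost(500_000, 0, 3.0)          # no caching
cached = session_input_cost(500_000, 400_000, 3.0)  # with caching
print(f"${full:.2f} vs ${cached:.2f}")  # → $1.50 vs $0.42
```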

## OpenAI

Any server that implements the OpenAI Chat Completions API can be used, including Azure OpenAI, Together AI, Groq, Fireworks AI, and others.

```toml
[profile.azure]
provider = "openai"
model = "gpt-4o"
api_key_env = "AZURE_OPENAI_API_KEY"
base_url = "https://my-resource.openai.azure.com/openai/deployments/gpt-4o"

[profile.gpt4]
provider = "openai"
model = "gpt-4o"
api_key_env = "OPENAI_API_KEY"
```

## OpenRouter

OpenRouter provides access to many models (Claude, GPT, Mistral, Gemini, and more) through a single API key. Models are identified as `provider/model-name`.

```toml
[profile.openrouter]
provider = "openrouter"
model = "anthropic/claude-3-5-sonnet"
api_key_env = "OPENROUTER_API_KEY"
```
| Model string | Description |
| --- | --- |
| `anthropic/claude-3-5-sonnet` | Claude 3.5 Sonnet via OpenRouter |
| `openai/gpt-4o` | GPT-4o via OpenRouter |
| `mistralai/mistral-large` | Mistral Large |
| `google/gemini-pro-1.5` | Gemini Pro 1.5 |
| `meta-llama/llama-3.1-70b-instruct` | Meta Llama 3.1 70B |
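The identifiers split at the first `/` into a provider prefix and a model name. A sketch of that split (not clido's actual parsing code):

```python
def split_model_string(model: str) -> tuple[str, str]:
    """Split an OpenRouter-style 'provider/model-name' identifier.
    Splits at the first '/', so the model part may itself contain
    further slashes."""
    provider, _, name = model.partition("/")
    if not provider or not name:
        raise ValueError(f"expected 'provider/model-name', got {model!r}")
    return provider, name

print(split_model_string("anthropic/claude-3-5-sonnet"))
# → ('anthropic', 'claude-3-5-sonnet')
```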

## MiniMax

Connects to `https://api.minimax.io/v1`. Get an API key at platform.minimax.io.

```toml
[profile.minimax]
provider = "minimax"
model = "MiniMax-M2.7"
api_key_env = "MINIMAX_API_KEY"
```
| Model | Description |
| --- | --- |
| `MiniMax-M2.7` | Latest coding model; 204k context window |
| `MiniMax-M1` | Previous generation reasoning model |

## Local (Ollama)

Run models locally with Ollama. No API key or network connection is required after pulling the model.

```toml
[profile.local]
provider = "local"
model = "llama3.2"
base_url = "http://localhost:11434"
```

Start Ollama and pull a model:

```sh
ollama serve          # start the server
ollama pull llama3.2  # download the model
```

::: warning Local model limitations
Local models generally have smaller context windows and weaker instruction-following than cloud models. Complex multi-step coding tasks may require a larger model (e.g. `llama3.1:70b`) for reliable tool use.
:::

## Alibaba Cloud

Connects to the DashScope OpenAI-compatible endpoint. Set `DASHSCOPE_API_KEY` or store the key in your profile.

```toml
[profile.alibaba]
provider = "alibabacloud"
model = "qwen-max"
api_key_env = "DASHSCOPE_API_KEY"
```

You can override the endpoint with `base_url` if needed (defaults to `https://dashscope.aliyuncs.com/compatible-mode/v1`).
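For example, a profile pointing at the international DashScope region might look like this (the international endpoint URL is taken from Alibaba Cloud's documentation; verify it for your account and region):

```toml
[profile.alibaba-intl]
provider = "alibabacloud"
model = "qwen-max"
api_key_env = "DASHSCOPE_API_KEY"
# Override the default (mainland China) endpoint
base_url = "https://dashscope-intl.aliyuncs.com/compatible-mode/v1"
```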

## Listing known models

List all models known to clido for a provider (from the built-in pricing table):

```sh
clido list-models
clido list-models --provider anthropic
clido list-models --provider openrouter --json
```
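The `--json` flag makes the output scriptable. As a sketch, assuming the output is a JSON array of objects with a `name` field (a hypothetical shape; check your clido version's actual output before relying on it):

```python
import json

def model_names(raw_json: str) -> list[str]:
    """Extract model names from `clido list-models --json` output,
    assuming a list of objects each carrying a "name" field
    (hypothetical shape, not confirmed against clido's source)."""
    return [entry["name"] for entry in json.loads(raw_json)]

sample = '[{"name": "claude-sonnet-4-5"}, {"name": "claude-haiku-4-5"}]'
print(model_names(sample))
# → ['claude-sonnet-4-5', 'claude-haiku-4-5']
```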

## Fetching the current model list from a provider’s API

Retrieve the live model list from a provider:

```sh
clido fetch-models
clido fetch-models --provider openrouter
```

::: tip Updating pricing data
If model pricing changes, update the local pricing table:

```sh
clido update-pricing
```
:::

## Overriding the provider and model at runtime

Override the profile’s provider and model for a single run:

```sh
# Use a different model
clido --model claude-haiku-4-5 "quick task"

# Use a different profile
clido --profile local "quick task"

# Override both
clido --provider openrouter --model anthropic/claude-3-5-sonnet "task"
```

These flags only affect the current invocation; they do not modify `config.toml`.

## Custom base URLs

For providers that need a custom endpoint (Azure, self-hosted, Ollama):

```toml
[profile.custom]
provider = "openai"
model = "my-model"
base_url = "https://my-server.internal/v1"
api_key_env = "MY_API_KEY"
```

clido takes `base_url` verbatim and appends the request path: `/chat/completions` for OpenAI-compatible providers, or the appropriate path for each other provider.
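A sketch of that joining behaviour for the OpenAI-compatible case (an illustrative helper, not clido's source):

```python
def chat_completions_url(base_url: str) -> str:
    """Append the OpenAI-compatible chat path to a profile's base_url,
    tolerating an optional trailing slash."""
    return base_url.rstrip("/") + "/chat/completions"

print(chat_completions_url("https://my-server.internal/v1"))
# → https://my-server.internal/v1/chat/completions
```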

## API key resolution

clido looks for API keys in this order:

1. The provider’s default environment variable (e.g. `ANTHROPIC_API_KEY`)
2. The credentials file (`~/.config/clido/credentials`, created automatically during setup with `chmod 600` permissions)
3. The environment variable named by `api_key_env` in the profile
4. The `api_key` field in the profile (legacy fallback; not recommended)

The recommended approach is to let the setup wizard store keys in the credentials file, or set the environment variable in your shell profile:

```sh
# ~/.zshrc or ~/.bashrc
export ANTHROPIC_API_KEY="sk-ant-..."
```