
LLM Providers

Configure your AI model — use any LLM provider or run models locally.

Supported Providers

Provider                     Models
OpenAI                       GPT-4o, o1, o3-mini
Anthropic                    Claude Opus, Sonnet, Haiku
Google                       Gemini Pro, Flash
Azure OpenAI                 GPT-4o (enterprise compliance)
Ollama                       Llama, Mistral, Phi, Qwen — any local model
Any OpenAI-compatible API    vLLM, LM Studio, Together AI, Groq
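
The providers in the last row work because they all accept the same wire format as OpenAI's chat-completions endpoint. As a minimal sketch, this is the request body such servers expect at POST <base_url>/v1/chat/completions; the model name and messages here are illustrative placeholders, not MetricChat defaults:

```python
import json

# Minimal chat-completions request body accepted by OpenAI and by
# OpenAI-compatible servers (vLLM, LM Studio, Together AI, Groq).
# Model name and messages are placeholders for illustration.
payload = {
    "model": "llama3",
    "messages": [
        {"role": "system", "content": "You are a helpful data analyst."},
        {"role": "user", "content": "Summarize last week's signups."},
    ],
    "temperature": 0.2,
}

# Serialized body sent to POST <base_url>/v1/chat/completions
body = json.dumps(payload)
print(body)
```

Because only the base URL changes between providers, switching from a hosted API to a local server is a configuration change, not a code change.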

Configuration

Add your API key in Settings > LLM Configuration. You can switch providers at any time.
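
For deployments configured outside the UI, a sketch of the environment-variable style used in the Ollama section below; note that MC_LLM_PROVIDER appears in this document, but the API-key variable name here is an assumption, not a confirmed MetricChat setting:

```shell
# Hypothetical env-var equivalent of Settings > LLM Configuration.
# MC_LLM_PROVIDER is documented for Ollama; MC_OPENAI_API_KEY is an
# assumed name for illustration only.
MC_LLM_PROVIDER=openai
MC_OPENAI_API_KEY=sk-your-key-here
```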

Local Models with Ollama

For maximum privacy, run a local LLM with Ollama: no tokens leave your network, and there is no per-query cost beyond the hardware.

# Install Ollama
curl -fsSL https://ollama.com/install.sh | sh

# Pull a model
ollama pull llama3

# Point MetricChat to Ollama
MC_LLM_PROVIDER=ollama
MC_OLLAMA_URL=http://localhost:11434
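
Before pointing MetricChat at the server, you can confirm Ollama is running and the model was pulled; /api/tags is Ollama's endpoint for listing locally available models:

```shell
# Verify the Ollama server is reachable and list pulled models.
# Requires a running Ollama instance on the default port.
curl http://localhost:11434/api/tags
```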
