Introduction
MetricChat is the open-source AI Analyst: connect any LLM to any data source with centralized context management, trust, observability, and control.
Rather than treating the AI as a black box, MetricChat gives your team a governed, auditable analytics layer that speaks your business language. You define the rules, connect your data, choose your model, and let the agent handle the rest.
What MetricChat Does
MetricChat sits between your data sources and your team. Ask a question in plain English — in the web app or via Slack — and MetricChat's agentic reasoning loop runs tool calls, reflects on intermediate results, and returns visualizations, tables, or written reports with full provenance.
It is designed for data teams who want AI-assisted analytics without sacrificing accuracy, context, or control.
Four Key Pillars
Chat with Data
MetricChat's agent iteratively plans, queries, reflects, and synthesizes. It does not generate a single SQL query and hope for the best. Instead, it runs a multi-step reasoning loop: generating SQL, inspecting results, correcting course if needed, and producing a final answer with supporting charts or tables.
Users interact through a conversational web interface or Slack. Output formats include charts (16+ types via ECharts), data tables, and rich markdown reports.
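The plan, query, reflect, synthesize loop above can be sketched in a few lines. This is an illustrative sketch only, not MetricChat's actual implementation: the `run_sql` and `llm` helpers are assumed stand-ins for a database connection and a model call.

```python
# Illustrative sketch of an agentic reasoning loop: generate SQL, inspect
# results, correct course on errors, and synthesize a final answer.
# run_sql() and llm() are hypothetical helpers, not MetricChat's API.

def agent_loop(question, run_sql, llm, max_steps=5):
    """Iteratively generate SQL, check the results, and refine until confident."""
    history = []
    for _ in range(max_steps):
        sql = llm(f"Write SQL to answer: {question}\nPrior attempts: {history}")
        try:
            rows = run_sql(sql)
        except Exception as err:
            # Bad SQL: feed the error back into the next attempt instead of failing.
            history.append({"sql": sql, "error": str(err)})
            continue
        verdict = llm(f"Do these rows answer '{question}'? Rows: {rows[:5]}")
        history.append({"sql": sql, "verdict": verdict})
        if verdict.startswith("yes"):
            return llm(f"Summarize the answer to '{question}' from: {rows}")
    return "Could not produce a confident answer."

# Demo with stub helpers (no real database or model involved):
def fake_llm(prompt):
    if prompt.startswith("Write SQL"):
        return "SELECT 1"
    if prompt.startswith("Do these rows"):
        return "yes"
    return "The answer is 1."

answer = agent_loop("test question", lambda sql: [(1,)], fake_llm)
```

The key design point is the retry path: a failed query becomes context for the next attempt rather than a terminal error.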
Context-Aware and Customizable
Raw schema access is not enough. MetricChat lets you define:
- Instructions — Business rules, KPI definitions, terminology, and guidelines that shape every agent response.
- dbt integration — Pull model descriptions, column metadata, and lineage directly from your dbt project.
- LookML and Tableau — Parse and surface semantic layer definitions as context.
- Code repositories — Connect Git repos so the agent can read AGENTS.md, markdown docs, and other context files.
Every agent run is assembled from this context, so answers stay consistent with how your business actually works.
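To make "assembled from this context" concrete, here is a minimal sketch of folding instructions and dbt-style model metadata into one prompt section. The field names (`name`, `description`, `columns`) are assumptions for illustration, not MetricChat's actual context schema.

```python
# Illustrative sketch: combine business instructions and model metadata
# into a single context block for the agent. The dict shape is assumed.

def build_context(instructions, dbt_models):
    """Concatenate business rules and model metadata into one prompt section."""
    lines = ["## Business instructions"]
    lines += [f"- {rule}" for rule in instructions]
    lines.append("## Available models")
    for model in dbt_models:
        cols = ", ".join(f"{c['name']} ({c['description']})" for c in model["columns"])
        lines.append(f"- {model['name']}: {model['description']} | columns: {cols}")
    return "\n".join(lines)

ctx = build_context(
    ["Revenue is always reported in USD."],
    [{"name": "orders", "description": "one row per order",
      "columns": [{"name": "amount", "description": "order value in USD"}]}],
)
```

Because every run starts from the same assembled context, two users asking about "revenue" get answers grounded in the same KPI definition.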
Any LLM, Any Data
MetricChat is provider-agnostic on both sides:
LLM providers: OpenAI, Anthropic, Google Gemini, Azure OpenAI, and local models via Ollama. Bring your own API key — MetricChat does not proxy or store credentials beyond your own deployment.
Data sources: Snowflake, BigQuery, Amazon Redshift, PostgreSQL, MySQL, AWS Athena, DuckDB, Salesforce, and more. Connect multiple sources and query across them within the same workspace.
File uploads: Upload CSV, Excel, or PDF files to create queryable DuckDB data sources instantly. No database setup or ETL pipeline is required: upload a file and start asking questions. This makes MetricChat useful for ad-hoc analysis, one-off datasets, and teams that work heavily with spreadsheets.
Transparency, Trust, and Deployment
Every agent decision is logged and reviewable. MetricChat tracks which queries ran, what the agent reasoned, and how the final answer was constructed. This makes it possible to audit responses, understand failures, and improve quality over time.
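A simple way to picture this audit trail is an append-only log of structured events, one per agent step. The record shape below is an assumption for illustration, not MetricChat's actual log format.

```python
# Sketch of per-step audit logging for an agent run: every plan, query, and
# answer is recorded with a timestamp so the final response can be traced.
# The event schema is invented for this example.

import json
import time

class AuditLog:
    def __init__(self):
        self.events = []

    def record(self, step, detail):
        self.events.append({"ts": time.time(), "step": step, "detail": detail})

    def dump(self):
        # One JSON object per line: easy to ship to any log store or grep later.
        return "\n".join(json.dumps(e) for e in self.events)

log = AuditLog()
log.record("plan", "break question into a revenue-by-region query")
log.record("sql", "SELECT region, SUM(revenue) FROM sales GROUP BY region")
log.record("answer", "EMEA leads with 150")
```

With a trail like this, a surprising answer can be traced back to the exact query and reasoning step that produced it.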
Deployment is flexible:
- Docker — Single-container deploy for quick starts and small teams.
- Docker Compose — Production-ready with Caddy reverse proxy and automatic TLS.
- Kubernetes / Helm — Scalable deployment with support for external managed databases and IAM authentication.
Security features include role-based access control (RBAC), SSO via Google OAuth and OIDC, and Fernet-encrypted credential storage.
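Fernet is the symmetric, authenticated encryption scheme from the `cryptography` package. A minimal round-trip sketch of encrypting a credential at rest (how MetricChat manages and stores the key itself is not shown here):

```python
# Sketch of Fernet-encrypted credential storage with the `cryptography`
# package: the ciphertext is what gets persisted; the key must live in
# secure configuration, never beside the data.

from cryptography.fernet import Fernet

key = Fernet.generate_key()   # in practice, loaded from secure config
box = Fernet(key)

token = box.encrypt(b"snowflake_password=s3cret")  # store this ciphertext
plain = box.decrypt(token)                          # recovered at connect time
```

Fernet tokens are also authenticated, so tampered or truncated ciphertext fails to decrypt rather than yielding garbage.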
Getting Started
- Quickstart — Deploy MetricChat and run your first query in under 5 minutes
- Deployment — Docker, Docker Compose, and Kubernetes deployment options
- Chat with Data — How to ask questions and interpret results
- Instructions — Add business context and rules for the AI
- Data Sources — Connect databases, warehouses, and file uploads
- Dashboards — Build and share visual dashboards
- Context — How the agent context system works
- Agent Architecture — The reasoning loop under the hood
- Monitoring — Observability and quality tracking