MetricChat
Core Concepts

Agent Architecture

The Agent is MetricChat's central execution engine. It processes natural-language requests within your organizational context and delivers reliable outputs — queries, reports, and dashboards — suitable for team use.

Why It Matters

Data reliability is paramount. A wrong join or misapplied metric doesn't just create noise — it can drive wrong decisions. MetricChat's Agent operates through a validation-focused loop that prioritizes transparency and contextual awareness, going far beyond basic text-to-SQL approaches.

The Agent Loop

When you submit a request, MetricChat creates an Agent Run. The system assembles context from your data sources, dbt models, metrics, documentation, historical learnings, and custom instructions.
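A minimal sketch of that context-assembly step in Python. The field names and the retrieval stub here are illustrative assumptions, not MetricChat's actual schema or API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Context bundle assembled at the start of an Agent Run.
    All field names are illustrative stand-ins."""
    data_sources: list[str] = field(default_factory=list)
    dbt_models: list[str] = field(default_factory=list)
    metrics: dict[str, str] = field(default_factory=dict)
    documentation: list[str] = field(default_factory=list)
    learnings: list[str] = field(default_factory=list)
    custom_instructions: list[str] = field(default_factory=list)

def assemble_context(request: str) -> AgentContext:
    # In practice each collection would be retrieved and ranked
    # against the request; a fixed example is returned here.
    return AgentContext(
        data_sources=["warehouse.orders"],
        metrics={"revenue": "SUM(orders.amount)"},
        custom_instructions=["Exclude test accounts (account_id < 1000)"],
    )
```

The point is that the agent starts every run from a structured bundle of organizational knowledge, not from the raw request alone.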

The agent then iterates through four stages:

  1. Think — Plan the next action
  2. Act — Execute an action (search context, seek clarification, generate query, build dashboard)
  3. Observe — Document outputs including results, schema information, errors, and validation results
  4. Reflect — Assess whether the request is resolved or requires additional steps
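The four stages above can be sketched as a simple loop. Every function name here is a hypothetical stand-in for illustration, not MetricChat's internal API:

```python
def think(request, history):
    # 1. Think: plan the next action based on what has happened so far.
    return "generate_query" if not history else "finish"

def act(plan):
    # 2. Act: execute the chosen action (search, clarify, query, build).
    return {"action": plan, "result": "ok"}

def observe(output):
    # 3. Observe: record results, schema info, errors, validation output.
    return output

def reflect(request, history):
    # 4. Reflect: is the request resolved, or are more steps needed?
    return history[-1][1]["result"] == "ok"

def run_agent(request, max_steps=10):
    """Illustrative Think -> Act -> Observe -> Reflect loop."""
    history = []
    for _ in range(max_steps):
        plan = think(request, history)
        observation = observe(act(plan))
        history.append((plan, observation))
        if reflect(request, history):
            break
    return history
```

The step cap mirrors the Analysis Steps setting described under Configuration: the loop always terminates, either by resolving the request or by exhausting its iteration budget.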

Nested Loops for Queries

Query generation includes a secondary validation loop:

  1. Model — Propose tables, joins, grain, and filters; validate against definitions
  2. Code — Generate SQL or Python; execute dry-runs and EXPLAIN statements; review types and row counts
  3. Reflect/Repair — Modify model or code; request user clarification when ambiguity prevents progress
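The inner Model / Code / Reflect-Repair loop can be sketched the same way. The helpers below (and the stand-in validation rule) are assumptions for illustration only:

```python
def propose_model(request):
    # Model: propose tables, joins, grain, and filters.
    return {"table": "orders", "filters": []}

def write_sql(model):
    # Code: generate SQL from the proposed model.
    where = f" WHERE {' AND '.join(model['filters'])}" if model["filters"] else ""
    return f"SELECT COUNT(*) FROM {model['table']}{where}"

def dry_run(sql):
    # Stand-in for dry-run / EXPLAIN-style validation:
    # here we simply require an explicit filter clause.
    return [] if "WHERE" in sql else ["missing filter clause"]

def repair(model, issues):
    # Reflect/Repair: adjust the model in response to validation issues.
    if "missing filter clause" in issues:
        model["filters"].append("order_date >= '2024-01-01'")
    return model

def generate_query(request, max_retries=3):
    """Illustrative Model -> Code -> Reflect/Repair loop."""
    model = propose_model(request)
    for _ in range(max_retries):
        sql = write_sql(model)
        issues = dry_run(sql)
        if not issues:
            return sql
        model = repair(model, issues)
    raise RuntimeError("clarification needed: could not validate query")
```

When repair cannot make progress, the loop escalates to the user for clarification rather than guessing, which matches the Code Retry Limit setting described under Configuration.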

Learning from Feedback

The agent captures feedback and learnings throughout execution:

  • User satisfaction signals
  • Code execution outcomes (success, errors, edge cases)
  • Validation results

An optional Judge component evaluates context, instructions, and results for each run. When the system identifies applicable rules or terminology, it can suggest new instructions to improve future performance.
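A rough sketch of how per-run feedback might be recorded and turned into a suggested instruction. The record fields, the example rule, and the trigger phrase are all hypothetical:

```python
from dataclasses import dataclass

@dataclass
class RunFeedback:
    """Per-run feedback signals (illustrative field names)."""
    user_satisfied: bool
    execution_outcome: str   # e.g. "success", "error", "edge_case"
    validation_passed: bool

def suggest_instruction(feedback, run_notes):
    """Judge-style sketch: map a recurring issue to a candidate instruction.
    The fiscal-year rule below is a made-up example."""
    if not feedback.validation_passed and "fiscal year" in run_notes:
        return "Define fiscal year as starting July 1 in all date filters."
    return None
```

Suggested instructions would then feed back into the context assembled for future Agent Runs.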

Tools

The Agent uses purpose-built tools for data analysis:

Tool                     Purpose
Create Data              Generate SQL/Python queries and execute them
Answer Question          Provide explanations and insights
Create Visualization     Build charts and graphs
Clarify                  Ask the user for more information
Create Dashboard         Design and build dashboards
Search Context           Find relevant instructions, schemas, and metadata
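During the Act stage, the agent selects one of these tools and invokes it. A minimal dispatch sketch, with entirely illustrative handler implementations:

```python
def dispatch(tool_name, payload):
    """Route a chosen tool to its handler (handlers are stand-ins)."""
    handlers = {
        "create_data": lambda p: f"-- SQL for: {p}",
        "answer_question": lambda p: f"Explanation of {p}",
        "create_visualization": lambda p: f"chart({p})",
        "clarify": lambda p: f"Question for user: {p}?",
        "create_dashboard": lambda p: f"dashboard[{p}]",
        "search_context": lambda p: [f"doc matching {p}"],
    }
    return handlers[tool_name](payload)
```

Keeping tools behind a single dispatch point is what lets the Observe stage log every action and its output uniformly.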

Configuration

Customize agent behavior in Settings > AI Settings:

Setting                    Description
LLM Data Visibility        Whether the AI can see actual data samples
Analysis Steps             Maximum iterations per request
Code Retry Limit           Maximum code generation retries
LLM Judge                  Enable quality evaluation per run
Code Validation            Syntax and schema checking
Instruction Suggestions    Auto-suggest new instructions
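For reference, the settings above map naturally onto a small config object. The key names and default values shown are illustrative, not MetricChat's actual configuration format:

```python
ai_settings = {
    # Illustrative keys/defaults; real values live in Settings > AI Settings.
    "llm_data_visibility": False,    # AI cannot see raw data samples
    "analysis_steps": 10,            # max loop iterations per request
    "code_retry_limit": 3,           # max code-generation retries
    "llm_judge": True,               # evaluate quality per run
    "code_validation": True,         # syntax and schema checking
    "instruction_suggestions": True, # auto-suggest new instructions
}
```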
