Agent Architecture
The Agent is MetricChat's central execution engine. It processes natural-language requests within your organizational context and delivers reliable outputs — queries, reports, and dashboards — suitable for team use.
Why It Matters
Data reliability is paramount. A wrong join or misapplied metric doesn't just create noise — it can drive wrong decisions. MetricChat's Agent operates through a validation-focused loop that prioritizes transparency and contextual awareness, going far beyond basic text-to-SQL approaches.
The Agent Loop
When you submit a request, MetricChat creates an Agent Run. The system assembles context from your data sources, dbt models, metrics, documentation, historical learnings, and custom instructions.
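The assembled context can be pictured as a simple bundle of these sources. The sketch below is illustrative only; the class and function names are assumptions, not MetricChat's actual API:

```python
from dataclasses import dataclass, field

@dataclass
class AgentContext:
    """Hypothetical container for the context assembled at the start of an Agent Run."""
    data_sources: list[str] = field(default_factory=list)
    dbt_models: list[str] = field(default_factory=list)
    metrics: dict[str, str] = field(default_factory=dict)
    documentation: list[str] = field(default_factory=list)
    learnings: list[str] = field(default_factory=list)       # historical learnings
    custom_instructions: list[str] = field(default_factory=list)

def assemble_context(request: str) -> AgentContext:
    # In the real system each field would be retrieved from the workspace;
    # here we return a stub to illustrate the shape of the bundle.
    return AgentContext(custom_instructions=["Prefer certified metrics"])
```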
The agent then iterates through four stages:
- Think — Plan the next action
- Act — Execute an action (search context, seek clarification, generate query, build dashboard)
- Observe — Document outputs including results, schema information, errors, and validation results
- Reflect — Assess whether the request is resolved or requires additional steps
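The four stages above can be sketched as a loop. This is a minimal illustration, not MetricChat's implementation; the inner helper functions are stand-ins:

```python
def run_agent(request: str, max_steps: int = 10) -> list:
    """Illustrative Think/Act/Observe/Reflect loop with stubbed-out stages."""
    def think(req, hist):
        # Think: plan the next action based on history so far
        return "answer" if hist else "search_context"

    def act(plan):
        # Act: execute the chosen action (search, clarify, query, dashboard)
        return f"result of {plan}"

    def observe(output):
        # Observe: document outputs, errors, and validation results
        return {"output": output, "error": None}

    def reflect(req, hist):
        # Reflect: is the request resolved, or are more steps needed?
        return hist[-1][0] == "answer"

    history = []
    for _ in range(max_steps):
        plan = think(request, history)
        obs = observe(act(plan))
        history.append((plan, obs))
        if reflect(request, history):
            break
    return history
```

In this toy run the agent searches context once, then answers and stops; the real loop would branch across all the tools described below.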
Nested Loops for Queries
Query generation includes a secondary validation loop:
- Model — Propose tables, joins, grain, and filters; validate against definitions
- Code — Generate SQL or Python; execute dry-runs and EXPLAIN statements; review types and row counts
- Reflect/Repair — Modify model or code; request user clarification when ambiguity prevents progress
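A rough sketch of this inner loop, with the validation failure hard-coded to show the repair path (all helper names are assumptions):

```python
def generate_query(request: str, max_retries: int = 3):
    """Illustrative Model/Code/Reflect-Repair loop; not MetricChat's actual code."""
    def propose_model(req):
        # Model: propose tables, joins, grain, and filters
        return {"table": "orders", "grain": "order_id", "filters": []}

    def write_sql(model, attempt):
        # Code: generate SQL; the first attempt intentionally misspells a
        # column so the validation step has something to catch.
        col = "amount" if attempt > 0 else "amuont"
        return f"SELECT {col} FROM {model['table']}"

    def dry_run(sql):
        # Stand-in for a real dry-run/EXPLAIN check against the schema
        return ("amuont" not in sql, "unknown column: amuont")

    for attempt in range(max_retries):
        model = propose_model(request)
        sql = write_sql(model, attempt)
        ok, error = dry_run(sql)
        if ok:
            return sql
        # Reflect/Repair: in practice the error feeds into the next proposal
    return None  # ambiguity persists: escalate to a user clarification
```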
Learning from Feedback
The agent captures feedback and learnings throughout execution:
- User satisfaction signals
- Code execution outcomes (success, errors, edge cases)
- Validation results
An optional Judge component evaluates context, instructions, and results for each run. When the system identifies applicable rules or terminology, it can suggest new instructions to improve future performance.
Tools
The Agent uses purpose-built tools for data analysis:
| Tool | Purpose |
|---|---|
| Create Data | Generate SQL/Python queries and execute them |
| Answer Question | Provide explanations and insights |
| Create Visualization | Build charts and graphs |
| Clarify | Ask the user for more information |
| Create Dashboard | Design and build dashboards |
| Search Context | Find relevant instructions, schemas, and metadata |
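Conceptually, each Act step selects one tool and executes it with arguments. A minimal dispatch sketch (the registry keys and handlers are illustrative, not MetricChat's API):

```python
# Hypothetical tool registry: name -> handler taking an argument dict.
TOOLS = {
    "create_data": lambda args: f"ran query: {args['sql']}",
    "answer_question": lambda args: f"answer: {args['question']}",
    "clarify": lambda args: f"asking user: {args['prompt']}",
}

def dispatch(tool: str, args: dict) -> str:
    # The agent picks exactly one tool per Act step and runs it.
    if tool not in TOOLS:
        raise ValueError(f"unknown tool: {tool}")
    return TOOLS[tool](args)
```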
Configuration
Customize agent behavior in Settings > AI Settings:
| Setting | Description |
|---|---|
| LLM Data Visibility | Whether the AI can see actual data samples |
| Analysis Steps | Maximum iterations per request |
| Code Retry Limit | Maximum code generation retries |
| LLM Judge | Enable quality evaluation per run |
| Code Validation | Syntax and schema checking |
| Instruction Suggestions | Auto-suggest new instructions |
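For reference, the settings above might map to values like the following. These are configured in the MetricChat UI, not in code; the keys and defaults here are assumptions for illustration only:

```python
# Illustrative defaults for the AI Settings described above (assumed names/values).
ai_settings = {
    "llm_data_visibility": False,    # AI cannot see actual data samples
    "analysis_steps": 10,            # maximum iterations per request
    "code_retry_limit": 3,           # maximum code generation retries
    "llm_judge": True,               # enable quality evaluation per run
    "code_validation": True,         # syntax and schema checking
    "instruction_suggestions": True, # auto-suggest new instructions
}
```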