Why Context Is Everything in AI Analytics
Generic AI tools give generic answers. Learn how MetricChat's context layer turns your LLM into a domain expert that understands your business metrics, naming conventions, and reporting rules.
Every analytics team has a version of the same story. Someone pastes a schema into ChatGPT, asks for a revenue query, and gets back something that looks reasonable — until it isn't. The SQL runs. The number appears. And then, three days later in a board meeting, someone notices the figure is off by 40% because the AI joined on the wrong key, used gross revenue instead of net, and included trial accounts that should have been excluded.
Generic AI tools are not analytics tools. They are text completion engines with broad world knowledge and no knowledge of your business. That gap — between general language capability and domain-specific understanding — is where most AI analytics projects fail.
The Problem with Generic AI
When you ask a general-purpose LLM to write a SQL query against your data warehouse, it faces an immediate information deficit. It does not know:
- That your `users` table contains both active customers and churned trial accounts, and that revenue reports should filter on `subscription_status = 'active'`
- That `mrr` is stored in cents in your database and must be divided by 100 before display
- That your team calls the metric "Net New MRR" but it is actually calculated as `new_mrr + expansion_mrr - contraction_mrr - churned_mrr`
- That `orders.created_at` is in UTC but your finance team reports in US/Eastern time
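The timezone pitfall is the easiest to see concretely. This short Python sketch (using an illustrative timestamp, not real data) shows how the same UTC instant lands in different day buckets depending on whether the conversion happens before aggregation:

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# An order recorded at 01:30 UTC on Jan 2 actually happened on Jan 1
# from the finance team's US/Eastern perspective.
ts = datetime(2024, 1, 2, 1, 30, tzinfo=timezone.utc)

utc_day = ts.date()                                               # bucket by UTC day
eastern_day = ts.astimezone(ZoneInfo("America/New_York")).date()  # convert first, then bucket

print(utc_day)      # 2024-01-02
print(eastern_day)  # 2024-01-01
```

A query that truncates to day before converting puts this order in the wrong reporting period; a model that does not know the convention has no way to avoid that.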
Without this knowledge, the model guesses. And because LLMs are trained to produce fluent, confident output, the guesses look correct. They follow SQL syntax. The column names are plausible. The logic is internally consistent. The answer is just wrong.
This is not a model capability problem. GPT-4, Claude, and Gemini are all sophisticated enough to write accurate queries — but only when they understand the domain. The missing ingredient is context.
Context as the Difference Maker
Context means giving the AI the same working knowledge that a senior analyst on your team carries in their head: which tables are canonical, how metrics are defined, what filters are always implied, where edge cases live.
The challenge is that this knowledge is rarely written down. It exists in Slack messages, in tribal memory, in the unspoken assumptions that experienced team members apply automatically. When someone new joins the analytics team, it takes months before they stop making the same mistakes. An AI without explicit context is permanently in that "first week" state.
MetricChat addresses this directly through the Instructions feature — a structured way to define your business context once, so that every query the AI generates is informed by it automatically.
What Instructions Look Like
Instructions are plain-text definitions that live in your MetricChat workspace. They are not code, not configuration files, and not a specialized query language. They are written in natural language with enough structure to be unambiguous.
A single instruction might define a metric:
```
metric: Net New MRR
description: >
  The net change in monthly recurring revenue for a given period.
  Always calculated as new_mrr + expansion_mrr - contraction_mrr - churned_mrr.
  Values are stored in cents in the database; divide by 100 for display.
source_table: mrr_movements
date_column: movement_date
filters:
  - exclude accounts where account_type = 'internal'
  - exclude accounts where subscription_status = 'trial'
notes: >
  Do not use the `revenue` column on the orders table for MRR calculations.
  That column reflects one-time charges and is not part of the subscription model.
```

Another instruction might document a naming convention:
```
convention: Timezone handling
rule: >
  All timestamps in the warehouse are stored in UTC.
  Finance reports should convert to America/New_York before aggregating by day or month.
  Use DATE_TRUNC after timezone conversion, not before.
applies_to:
  - all revenue and subscription metrics
  - cohort analysis
```

And another might describe a join rule that is easy to get wrong:
```
join_rule: users to subscriptions
description: >
  The users table has a one-to-many relationship with subscriptions.
  When calculating per-user metrics, always join on users.id = subscriptions.user_id
  and filter WHERE subscriptions.is_primary = true to avoid double-counting.
  Users without a primary subscription should be included as churned.
```

None of this is exotic. It is simply the documentation your team should have written anyway — and in MetricChat, it directly shapes every query the AI generates.
How the Instructions Layer Works
When a user asks MetricChat a question, the system does not send the raw question to the LLM and hope for the best. It first retrieves the relevant instructions from the workspace, assembles them into a context document alongside the schema, and includes all of it in the prompt.
The result is that the model is not reasoning from scratch about what "Net New MRR" means. It has a precise definition in front of it. It knows the source table, the calculation logic, the unit of measurement, and the filters that must always be applied. The space for hallucination shrinks dramatically.
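As a rough illustration of that assembly step, here is a hypothetical Python sketch (not MetricChat's actual implementation; the function and field names are invented) that selects relevant instructions with naive keyword matching and builds the final prompt:

```python
# Hypothetical context-assembly sketch. A production system would use
# embedding-based retrieval rather than keyword overlap.
def assemble_prompt(question: str, instructions: list[dict], schema: str) -> str:
    # Keep instructions that share at least one term with the question.
    terms = set(question.lower().split())
    relevant = [
        ins for ins in instructions
        if terms & set(ins["text"].lower().split())
    ]
    context = "\n\n".join(ins["text"] for ins in relevant)
    return (
        f"Schema:\n{schema}\n\n"
        f"Business context:\n{context}\n\n"
        f"Question: {question}\n"
        "Write a SQL query that answers the question, "
        "following all rules in the business context."
    )

instructions = [
    {"text": "Net New MRR = new_mrr + expansion_mrr - contraction_mrr - churned_mrr"},
    {"text": "Exclude accounts where account_type = 'internal'"},
]
prompt = assemble_prompt(
    "What was net new mrr last month?", instructions, "mrr_movements(...)"
)
# The MRR definition is retrieved; the unrelated filter instruction is not.
```

The key property is that retrieval happens per question: only the definitions the question touches are injected, which keeps the prompt focused even as the instruction set grows.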
For teams with complex data models — multi-tenant SaaS products, marketplaces with buyer and seller sides, companies that have grown through acquisitions — this matters enormously. The more idiosyncratic your data, the more value explicit context delivers.
The Compounding Value of Good Instructions
Instructions are not a one-time setup cost. They are an investment that compounds.
When an analyst notices that the AI made a wrong assumption — say, it forgot to filter internal test accounts — they add an instruction. The next time anyone on the team asks a related question, the correction is already in place. Over weeks and months, the instruction set grows into a living knowledge base that reflects how your team actually thinks about your data.
This is meaningfully different from the alternative: individually correcting every query, re-explaining the same edge cases to every new user, and hoping that institutional knowledge survives team turnover. Instructions make implicit knowledge explicit and durable.
There is also a documentation side effect that teams consistently find valuable. The process of writing instructions forces clarity. Disagreements about how a metric should be calculated — disagreements that often sit unresolved for years — have to get settled before the instruction can be written. The AI becomes a forcing function for alignment.
The Takeaway
AI analytics tools are only as good as the context they operate within. A powerful language model with no business context will produce fluent, confident, wrong answers. The same model with precise metric definitions, naming conventions, and reporting rules will produce queries that a senior analyst would recognize as correct on first read.
MetricChat's Instructions feature exists to close that gap. The investment is modest — defining your most important metrics and conventions takes hours, not weeks — and the return is an AI assistant that behaves like a domain expert rather than a well-read generalist. For teams serious about accuracy, context is not a nice-to-have. It is the entire foundation.