Decision Science · 25 February 2026 · 7 min read

Causal AI for non-technical operators: a primer

You do not need to understand the maths. You do need to understand the difference between correlation and cause, and why that difference is worth real money.


Jaswant Singh

Founder, Kauzio

The vast majority of business analytics is correlation work. You ran the promotion, sales went up. You moved the display, footfall changed. You launched the loyalty card, retention improved. The story writes itself.

The problem is that the story is almost always partly wrong, and the part that is wrong is the part that costs you money.

Causal AI is the discipline of separating *what happened because of you* from *what happened anyway*. It is not glamorous. It does not produce beautiful dashboards. It is, however, the difference between a marketing budget that compounds and one that quietly disappears.

This primer is for operators — shopkeepers, founders, ops leads. There is no maths. You do not need it. You need a working mental model.

The story problem

Imagine you run a small chain of three stores. You launch a 15% off promotion in store A. Sales rise 22%. You conclude the promotion worked.

You did not consider that the weather was unseasonably warm that week. You did not consider that a competitor closed temporarily three streets away. You did not consider that store A's till was broken the previous week and pent-up demand spilled over. You did not consider that the promotion ran the same week as payday.

The 22% lift is real. The attribution to the promotion is mostly wrong. The next time you run the promotion, in less favourable conditions, the lift is 3% and the discount cost more than it brought in.

This is the central problem of operating any business: you can see the outcome, but you cannot see the counterfactual. You cannot see what would have happened if you had done nothing.

What causal AI actually does

Causal AI's job is to estimate the counterfactual — the version of reality where you did not run the promotion, did not move the display, did not hire the new staffer. It does this by finding patterns in the data where similar conditions occurred without the intervention, and uses them as a control group.

This is not magic. It is the same logic used in clinical trials, only run retrospectively on data you already have. The technique has a long academic lineage — Judea Pearl's work on causal graphs, the potential outcomes framework, difference-in-differences from econometrics. None of which you need to know to use it.

What you need to know is the question it answers: *of the change I saw, how much was actually because of me?*
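The counterfactual logic is simpler than it sounds. Here is a minimal difference-in-differences sketch with made-up numbers, assuming one promoted store and one comparable store that did nothing:

```python
# Difference-in-differences sketch. All figures are invented for
# illustration; the treated store ran the promotion, the control did not.

treated = {"before": 1000, "after": 1220}   # store A weekly sales (GBP)
control = {"before": 980,  "after": 1110}   # similar store, no promotion

# Naive read: attribute the whole change to the promotion.
naive_lift = (treated["after"] - treated["before"]) / treated["before"]

# Causal read: subtract the change the control store saw anyway.
control_drift = (control["after"] - control["before"]) / control["before"]
true_lift = naive_lift - control_drift

print(f"naive lift: {naive_lift:.1%}")    # 22.0%
print(f"true lift:  {true_lift:.1%}")     # 8.7%
```

The control store is standing in for the counterfactual: it experienced the same weather, the same payday, the same week, just without the intervention.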

A worked example

A retailer in our cohort ran a "buy two, get one free" weekend across all SKUs in the homewares aisle. Weekend revenue in the aisle was up 38% versus the prior weekend. The owner, understandably, declared the promotion a success.

When we ran the same data through a causal model — comparing the homewares aisle to similar non-promoted aisles, controlling for weather, day-of-week, and a payday effect — the estimated true lift was 9%. The other 29 points were noise, seasonality, and a happy accident with the weather.

Worse: the gross margin given away on the BOGOF was greater than the 9 points of true lift. The promotion lost money. The dashboard said it was the best weekend of the quarter.

This is the single most expensive kind of mistake in retail. The data tells you to do more of the thing that is quietly losing you money.
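The margin arithmetic behind that verdict can be checked directly. A sketch with illustrative numbers only, since the article does not give the retailer's actual figures:

```python
# Did the BOGOF make money? Compare the gross profit on the true
# incremental revenue against the cost of the free stock given away.
# All numbers are illustrative, not the retailer's actual figures.

baseline_weekend_revenue = 10_000   # a normal homewares weekend (GBP)
true_lift = 0.09                    # causal estimate after controls
gross_margin = 0.40                 # assumed margin on homewares SKUs

# Gross profit genuinely caused by the promotion.
incremental_gross_profit = baseline_weekend_revenue * true_lift * gross_margin

# Under buy-two-get-one-free, one unit in three goes out free: the free
# units' list value is half the paid revenue, and they cost (1 - margin).
paid_revenue = baseline_weekend_revenue * (1 + true_lift)
free_stock_cost = paid_revenue / 2 * (1 - gross_margin)

net_result = incremental_gross_profit - free_stock_cost
print(f"incremental gross profit: £{incremental_gross_profit:,.0f}")
print(f"cost of free stock:       £{free_stock_cost:,.0f}")
print(f"net result:               £{net_result:,.0f}")  # negative
```

The exact figures do not matter; the shape of the calculation does. A true lift of a few points rarely covers a discount of a third.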

What you can do without any software

Three habits, in order of value.

One: write down what you expected before you intervened. A specific number, in writing, before the promotion launches. If actual exceeds expected by less than your expected margin of error, you cannot claim a lift.

Two: keep a control. Do not run the promotion in every store at once. Run it in three of your five and leave the other two untreated. Compare. This is not academic — it is the same thing every serious advertising team in the world has been doing for a decade.

Three: discount your wins by 50% in the first reading. Your first instinct about why something worked is almost always too generous to your intervention. Halve it. The halved version is closer to the truth than the original.
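Habits one and three together amount to a comparison you could do in a spreadsheet cell. A sketch with hypothetical numbers:

```python
# Habits one and three as a checklist. All numbers are hypothetical.

expected_lift = 0.10      # written down BEFORE the promotion launched
margin_of_error = 0.05    # your normal week-to-week noise band
observed_lift = 0.22      # what actually happened

# Habit one: only claim a lift if the observation clears expectation
# by more than your normal noise.
claimable = (observed_lift - expected_lift) > margin_of_error

# Habit three: halve the win on first reading.
first_reading = observed_lift / 2 if claimable else 0.0

print(f"claimable lift: {claimable}")          # True
print(f"first reading:  {first_reading:.0%}")  # 11%
```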

Where the AI actually helps

Software earns its keep in two places. It can keep enough history that the "natural experiments" — the weeks where you did not run a promotion, which you can use as controls — are properly cached and aligned. And it can do the comparison automatically, so the operator does not need to remember to do it. The maths is not the hard part. The discipline is the hard part.
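Finding the closest natural experiment is, at its simplest, a nearest-neighbour lookup over your own history. A toy version — the fields, weights, and figures are all invented for illustration; a real system would use many more covariates:

```python
# Toy natural-experiment matcher. Fields and weights are invented.

history = [
    # (week, temperature_c, payday_week, promo_ran, aisle_revenue)
    ("2026-W03", 4,  False, False,  9_400),
    ("2026-W05", 12, True,  False, 10_600),
    ("2026-W07", 11, True,  True,  13_800),  # the promoted week
]

target = history[-1]  # the week we want a counterfactual for

def distance(week_a, week_b):
    """Crude similarity on conditions, ignoring the promo flag."""
    temp_gap = abs(week_a[1] - week_b[1]) / 10
    payday_gap = 0 if week_a[2] == week_b[2] else 1
    return temp_gap + payday_gap

# Best control: the most similar week in which the promo did NOT run.
controls = [w for w in history if not w[3]]
best_control = min(controls, key=lambda w: distance(w, target))

counterfactual_revenue = best_control[4]
estimated_effect = target[4] - counterfactual_revenue
print(best_control[0], estimated_effect)  # 2026-W05 3200
```

The caching point in the paragraph above is exactly this: the matcher is only as good as the history it can search, which is why the software's first job is remembering.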

In Kauzio, this lives behind a single function: when you propose a decision, the engine estimates the true impact, not the surface-level one, by referencing the closest natural experiments in your own history. It is not a prediction. It is a discount applied to your own optimism. The discount is usually about 40%.

The bigger picture

Most operators believe they are running a business. They are mostly running a story about a business — a self-serving narrative about which interventions worked and which did not. Causal AI is not a tool, in the long run. It is a way of being slightly less wrong about your own story. That is, in our view, what intelligence is.

You do not need a PhD. You need a notebook, a control group, and the willingness to discount your wins. The software is just the part that keeps you honest on a Tuesday.

#ai · #causal inference · #retail
