Explaining AI to a Board That Just Watched a Vendor Demo

A Gartner survey from late 2025 put the number at 64%. Sixty-four percent of boards were disappointed with their AI results. I was not surprised.

The pattern is always the same. A vendor gets 45 minutes with your board. They show a demo. The demo is perfect. It finds patterns no human could find. It answers questions in plain English. It looks like magic. The board approves a budget. Six months later, engineering delivers something that works but looks nothing like the demo. The board feels cheated. Engineering feels blamed.

This is not an engineering failure. It is a communication failure. And I have made it myself, more than once.

How I learned this

At PwC, I advised over 25 CIOs on technology strategy. Many were trying to explain AI to their boards for the first time. The common mistake: they led with the technology. They showed what the model could do. They used words like "inference" and "tokens" and "fine-tuning". The board smiled, nodded, approved. Then the bill came. Then the timeline slipped. Then the accuracy turned out to be 78%, not the 99% the demo implied.

Later, at a large logistics company, I presented to the Technology Risk Committee and the AI Ethics Committee. Those were not audiences that wanted to be impressed. They wanted to know what could go wrong. That changed how I prepared entirely.

Now I build AI platforms for retail. Same gap, seen from the other side. The board expects the demo. We deliver something useful but less pretty. Over the years I settled on five rules. Nothing original. They just work.

1. Show the business case first, not the demo

The moment you show a demo, you lose control. The board will anchor on what they saw. Every question after that becomes "why does ours not look like that?"

Start with the problem instead. What is costing us money? What is slow? Where are we losing customers? Then explain how AI changes one number. Not five. One. If you cannot connect the AI project to a single financial metric, you are not ready to present.

I once opened a board meeting with: "We lose 1.2 million pounds a year to manual classification errors in this process. An ML model can cut that by 60%. The investment is 400k in year one." No demo. No slides about neural networks. Approval took about ten minutes. I was more surprised than anyone.
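The numbers in that pitch reduce to simple payback arithmetic. A minimal sketch, using the figures from the anecdote above (the function name and structure are mine, not part of any real board paper):

```python
# Back-of-envelope ROI behind the pitch: annual loss, expected
# reduction from the model, and the year-one investment.

def payback_months(annual_loss: float, reduction: float, investment: float) -> float:
    """Months until cumulative savings cover the year-one investment."""
    annual_saving = annual_loss * reduction
    return investment / annual_saving * 12

# Figures from the pitch: £1.2M lost per year, 60% cut, £400k investment.
annual_saving = 1_200_000 * 0.60
months = payback_months(1_200_000, 0.60, 400_000)
print(f"Annual saving: £{annual_saving:,.0f}")   # £720,000
print(f"Payback: {months:.1f} months")           # 6.7 months
```

A sub-year payback on one financial metric is the whole argument; the model architecture never needs to come up.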

2. Say what AI cannot do before you say what it can

Boards are not stupid. They read the same headlines you do. What they rarely get is an honest person telling them the limits.

I tell boards three things every time. AI will get some answers wrong, and we need to design for that. AI needs good data, and our data has gaps we should be honest about. AI models need maintenance like any other system.

Once they know the limits, the capabilities land differently. They stop expecting magic and start expecting a tool. That is a much better starting point for a funding decision.

3. Use their language, not yours

No board member cares about your model architecture. They care about EBITDA, margin, and risk to the brand.

I keep a rough translation table in my head. "We fine-tuned the model" becomes "we trained the system on our own data so it understands our business". "Inference costs" becomes "the running cost per transaction". "Hallucination" becomes "the system will sometimes give confident wrong answers, and here is how we catch them".

At the AI Ethics Committee, I never once said "large language model". I said "a system that generates text based on patterns in training data, which means it can produce plausible nonsense". That one sentence did more for informed governance than any technical briefing I have written.

4. Give them three questions to ask you

Most board members want to ask good questions about AI but do not know where to start. So I give them the questions. At the end of every board paper, I include a short section: "Questions the board should ask about this programme."

Three is the right number. More and they will not read them.

I pick questions that matter for the sector. For retail: "What happens to this model when consumer behaviour shifts in a recession?" For logistics: "What is our fallback if the model goes down during peak season?" For financial services: "How do we prove to the regulator that this model is not discriminating?"

This does two things. It makes the board feel competent, which matters more than you think. And it forces me to prepare answers for the hard questions before I walk into the room.

5. Bring the bad news early

The worst thing you can do with a board is surprise them. If the timeline is slipping, say so at month two, not month five. If accuracy is below target, show the real numbers, not the best run from last Tuesday.

I once showed a board our model accuracy at 73%. The target was 85%. I could have waited and hoped the numbers improved. Instead I showed it early, explained why, and laid out the plan to close the gap. One of the non-exec directors said something like: "Thank you for not giving us the polished version." That director became the project's strongest advocate in later budget discussions.

Bad news at month two costs you a difficult conversation. Bad news at month five costs you the programme.

The real gap

That 64% is not about AI failing. AI does roughly what AI does. The gap is between what the board was told and what got delivered. That gap belongs to us. Not the vendor. Not the board. Us.

If you are a technology leader presenting AI to your board, your job is not to impress them. Your job is to make them informed enough to make a good decision. Plain language. Honest numbers. No demos until the business case is agreed.

The boards I have worked with are not afraid of AI. They are afraid of spending money on something they do not understand. Fix the understanding and the rest follows.