Most AI adoption programmes fail at the same point: the tool gets deployed, the training happens, and then nothing changes. People route around it. The old workflows persist. The investment sits idle while leadership waits for results that never arrive.
Buying an AI tool is easy. Getting an organisation to actually change how it works is hard. The tools are ready. The models are capable. The bottleneck is always the same: nobody has done the serious work of figuring out which processes should change, how they should change, who needs to be convinced, and what governance needs to be in place before you hand people powerful new tools and hope for the best.
I have seen teams convinced they were using AI productively when they were mostly generating content nobody read and summaries nobody acted on. The technology worked. The adoption didn’t.
The Methodology
Five Steps, In This Order
Sequence matters. Most adoption failures come from skipping step one and going straight to step three.
01
Honest Assessment First
Before recommending anything, I look at what you’re actually doing — which processes exist, which are broken, which ones AI could plausibly help with, and which ones you shouldn’t touch yet. Most organisations skip this step and pay for it later.
02
Governance Before Tools
Who can use AI for what? What data can go into which systems? What needs human review before it leaves the building? These questions need answers before you deploy anything.
03
Tool Evaluation
An honest comparison of what’s available against what you actually need. Not what’s been marketed to you — what fits your workflow, your data, and your risk tolerance.
04
Staff Enablement
Training that goes beyond the basics — specific to your tools, your workflows, and the actual things your people are trying to do. Not a generic AI literacy course.
05
Process Re-Engineering
The workflows that AI actually changes get redesigned — not just augmented. This is where the real productivity gains come from, and it’s the step most adoption programmes skip entirely.
Common Mistakes
What Goes Wrong and Why
The five patterns that appear in almost every failed AI adoption.
✕
Tool-first thinking
Buying the tool before deciding what problem you’re solving. The tool becomes the objective instead of the means.
✕
Governance as an afterthought
Deploying AI across the organisation and then writing the usage policy six months later, usually after something goes wrong.
✕
Training without follow-through
A workshop, a certification, and then nothing. People go back to what they know because nobody changed the actual workflow.
✕
Mandating without buy-in
“Everyone must use AI for X by Q3” creates compliance theatre — people use the tool in ways that look productive and aren’t.
✕
Measuring activity instead of outcomes
Counting AI interactions, prompts sent, or hours “saved” as reported by a tool that has no way to measure savings. None of these is a business result.
Deliverables
What You Get at the End of the Engagement
An organisation that actually uses AI differently — not one that bought a tool, ran the training, and moved on.
✓
Current-state assessment
An honest picture of what you’re doing now, what’s broken, and where AI is and isn’t the right answer
✓
Governance framework
Usage policies, data handling rules, and review requirements — before deployment, not after
✓
Tool evaluation report
A comparison of options against your actual requirements, with a recommendation and the reasoning behind it
✓
Enablement programme
Training designed around your specific tools and workflows, not generic AI literacy
✓
Change management plan
Who needs to be convinced of what, and how — with a realistic timeline
✓
90-day review framework
How you’ll know whether the adoption is working — measurement that ties to business outcomes
Getting Started
Let’s Talk About What You’re Actually Trying to Do
Tell me where you are — what’s been tried, what didn’t work, and what you’re hoping to achieve. You’ll get an honest read on what’s blocking adoption and what it would actually take to change it.