AI Readiness: Why the Foundations Matter More Than the Technology

Most organisations are asking the wrong questions about AI.

AI
Andy Gibson
March 16, 2026
7 min read

There's no shortage of pressure on organisations to "do something with AI." Boards are asking about it. Competitors are announcing it. Vendors are selling it. And yet, a consistent picture is emerging from the organisations that have moved quickly: most AI initiatives don't deliver the bottom-line impact that was promised, and many don't make it out of pilot stage at all.

The reason is rarely the AI itself.

The Uncomfortable Truth About AI Adoption

Research consistently shows that the biggest blockers to successful AI adoption aren't technical limitations of the models — they're organisational. Legacy integration challenges, poor data quality, unclear governance, skills gaps, and undefined processes are the reasons AI projects stall or fail. These are not problems that a better model solves.

A 2026 survey of 1,000 global technology leaders found that 85% of organisations are planning AI adoption within the next 12 months, yet most lack the operational foundations to deploy it effectively. The gap between ambition and readiness is significant, and costly for those who discover it mid-engagement.

Traditional digital transformation projects tend to be bounded in scope: replacing a system, standardising a process, migrating a platform. AI adoption is different. The data and processes that an AI initiative touches are often non-obvious and cross-cutting, spanning departments and systems that wouldn't normally be considered part of the same project. That breadth is exactly why discovery work matters more here, not less.

What Actually Determines AI Success

Before any conversation about models, tools, or automation, three things need to be in good shape:

  • Processes. AI augments and automates processes. If a process is poorly defined, inconsistently followed, or not fully understood, applying AI to it doesn't fix it: it scales the problem. Quite often, especially in multi-vertical organisations, there is no clear visibility of what processes exist, at which levels, or who is ultimately responsible for them. That is a serious obstacle when the goal of introducing AI and intelligent automation is improving process efficiency: how can you improve what you do not understand?
  • Data. Every AI system, from a simple classifier to a sophisticated language model, depends on data. Its quality, accessibility, structure, and governance determine the ceiling of what's achievable. Organisations frequently discover during readiness work that data they assumed was clean and available is neither. You need to know what data you have, what shape it's in, where it lives, who owns it, and how it moves between systems.
  • Integration. A system is legacy as soon as it is implemented; don't be fooled by that "new software smell". Over time, systems brought in to solve specific problems age: support contracts expire, the technology stagnates, and the people who once fought hard for their adoption have long since moved on. These systems often perform critical functions and hold key data, so it is no surprise that legacy integration is one of the most commonly cited blockers to AI adoption. They need to be accounted for as part of the AI conversation.
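The data check described above can be sketched in code. This is a minimal, illustrative audit, not a production tool: it assumes records arrive as a list of dictionaries, and the field names (`customer_id`, `created_at`, `owner_team`) are hypothetical placeholders for whatever your own systems hold.

```python
# Minimal data-readiness audit sketch (illustrative only).
# Field names are hypothetical; substitute the fields your AI use case depends on.

REQUIRED_FIELDS = ["customer_id", "created_at", "owner_team"]

def audit_records(records):
    """Report per-field completeness and count duplicate identifiers."""
    total = len(records)
    report = {"total_records": total, "completeness": {}, "duplicate_ids": 0}
    for field in REQUIRED_FIELDS:
        present = sum(1 for r in records if r.get(field) not in (None, ""))
        report["completeness"][field] = present / total if total else 0.0
    ids = [r.get("customer_id") for r in records if r.get("customer_id")]
    report["duplicate_ids"] = len(ids) - len(set(ids))
    return report

sample = [
    {"customer_id": "C1", "created_at": "2026-01-02", "owner_team": "sales"},
    {"customer_id": "C1", "created_at": "2026-01-03", "owner_team": ""},
    {"customer_id": "C2", "created_at": None, "owner_team": "support"},
]
print(audit_records(sample))
```

Even a crude pass like this, run against a real extract, tends to surface the "we assumed it was clean" surprises early, while they are still cheap to address.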

Without an accurate picture of all of these things from the start, any attempt at AI adoption will struggle.

Where GenAI and LLMs Actually Fit

There's a tendency to treat generative AI as the starting point. In practice, it should usually be the last consideration, not the first.

Large language models are powerful tools for the right problems: synthesising information, generating content, handling unstructured inputs, supporting decision-making. But they are not a substitute for data quality, process clarity, or system integration. Applied to weak foundations, they add cost and complexity without proportionate value.

Off-the-shelf LLMs will typically get you to around 60-70% accuracy for a specific industry or domain. To achieve genuine business value, more domain-specific training, fine-tuning, or implementation work is required — and that work is only worth doing once the foundations are in place. AI is a rapidly advancing field and it's easy to get swept up in the momentum and rush to a solution without fully considering what you're building on.

The right approach is to understand your processes and data landscape first, identify where automation and intelligent tooling would have genuine impact, and then determine which capabilities, including GenAI where appropriate, actually belong in the solution.

The AI Readiness Assessment

So if understanding the landscape is so important, how do you get there? This is where the AI Readiness Assessment comes in.

Fundamentally, an AI Readiness Assessment is the process of identifying what your organisation has (systems, processes, data, capabilities), identifying the inefficiencies where improvement would have the biggest impact, and then scoring how ready each area is for AI or intelligent automation. Think of it as a pre-flight checklist before launching a much bigger AI initiative.

Before any of this work begins, there's a prerequisite: a clear sense of what problem the organisation is actually trying to solve. Without that anchor, even a thorough assessment can produce a map with no destination.

Every organisation is different, so a one-size-fits-all assessment won't work here, but in general an assessment should cover:

  • Process review: Mapping key operational processes, identifying automation candidates, and flagging areas where definition is needed before AI can add value.
  • Data landscape review: Assessing the quality, accessibility, and governance of the data that would underpin AI initiatives.
  • Integration audit: Understanding how your systems connect (or don't), and identifying integration dependencies for any proposed AI capability.
  • Capability and skills assessment: An honest evaluation of your organisation's current capacity to build, operate, and govern AI systems.
  • Opportunity mapping: A prioritised view of where AI and intelligent automation could add real value, including quick wins and longer-term investments.
  • Risk identification: Identifying where gaps in readiness could derail or significantly inflate the cost of an AI initiative.
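One lightweight way to turn these reviews into a readiness score is a weighted rubric. The sketch below is an assumption, not a standard methodology: the dimensions, weights, and 0-5 scores are illustrative, and you would calibrate all of them to your own assessment findings.

```python
# Sketch of a readiness scoring rubric (illustrative only).
# Dimensions and weights are hypothetical; score each area 0-5
# from the assessment's findings, then compute a weighted total.

WEIGHTS = {
    "processes": 0.25,
    "data": 0.25,
    "integration": 0.20,
    "skills": 0.15,
    "governance": 0.15,
}

def readiness_score(scores):
    """Weighted average on a 0-5 scale, plus the weakest dimension."""
    total = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    weakest = min(WEIGHTS, key=lambda dim: scores[dim])
    return round(total, 2), weakest

score, weakest = readiness_score(
    {"processes": 2, "data": 1, "integration": 3, "skills": 4, "governance": 2}
)
print(f"overall readiness: {score}/5, weakest area: {weakest}")
```

The single number matters less than the weakest dimension it exposes: that is usually where foundational work needs to happen before any delivery programme starts.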

The output is clarity. A concrete picture of where you stand, what's genuinely achievable, what needs to be addressed first, and what a realistic path forward looks like.

The Cost of Skipping It

The alternative to a readiness assessment is starting delivery without one. Organisations that take this route often find themselves revisiting the foundations halfway through an engagement — at significantly greater cost. The "move fast" approach to AI adoption has produced a consistent pattern: impressive announcements followed by quiet shelving of initiatives that didn't scale.

This creates a distorted picture of what AI and intelligent automation can genuinely deliver when approached with the right scope, preparation, and execution. If the cost of implementation is too high, perception of the technology sours, people get burned, and board members are left with more questions than answers. This isn't theoretical; it is already happening.

A readiness assessment is a relatively small investment. The cost of discovering foundational problems six months into a delivery programme is considerably larger.

Truth Is Important

A good readiness assessment isn't a sales exercise. Its value depends entirely on its honesty. If your data isn't in good shape, you need to know that. If your processes need work before automation makes sense, that's the finding that saves you money. If GenAI isn't the right tool for the problem you're trying to solve, that should be in the report.

The organisations that get the most value from AI are the ones that invest in understanding their own readiness before they invest in the technology. That discipline, foundations before features and clarity before commitment, is what separates the initiatives that deliver from the ones that don't.

An AI Readiness Assessment should not be a months-long, costly endeavour. It doesn't require teams of consultants on expensive day rates. What it needs is a focused, honest breakdown of what you have, what you want to achieve, and where you need to direct effort to close the gap.

If you're considering an AI initiative in the next 12 months, an AI Readiness Assessment is a worthwhile first conversation to have, whether that's internally or with an external partner.

Written by

Andy Gibson

Principal Consultant & Solution Architect