From AI pilots to enterprise impact: Why most AI strategies stall, and how leaders can fix it
Senior leaders gathered to confront a defining challenge in enterprise AI strategy: why, despite unprecedented investment and enthusiasm, does so much AI in the enterprise fail to deliver meaningful business impact?
What emerged from the HotTopics Food for Thought discussion was not a critique of technology, but a far more uncomfortable truth. The barriers to scaling AI are deeply human and rooted in strategy, culture, governance and leadership clarity. While the tools themselves have become radically more accessible, the ability of organisations to deploy them coherently has not kept pace.
Meet the speakers
With Doug Drinkwater moderating the HotTopics Food for Thought panel, the speakers included:
- Chris Kelly, UK Marketing Director, Hari
- Faith Wheller, VP Marketing Director, TeamViewer
- Kerry Phillips, Head of Data Governance, Penguin Random House UK
- Kevin Cassar, Chief Data and AI Officer, TalkTalk
- Raghava Krishna, Technology Director, The AA
- Laurence Heinrichs, Chief Digital Officer, Publicis Media
- Paul Watts, CISO, Keywords Studios
- Noemi Morelo Fabelo, Head of Data Strategy and Architecture, Cambridge University Press & Assessment
- Oleg Kravets, Chief Data & Analytics Officer, The Travel Corporation
- Cody Irwin, AI Adoption Director, Domo
- Iain Congdon, Director of Technical Consulting, Domo
- Jamie Morrison, EMEA Field CTO, Domo
The AI scaling problem: From experimentation to execution
There was near-universal agreement that organisations have become remarkably good at starting AI initiatives, but far less consistent at carrying them through to completion: a core AI adoption challenge.
“There’s a bit of a disconnect, or a gap between AI intention and AI outcomes… It’s very easy to start pilots… but then moving that from pilot into production is a bit of a sticking point,” explains Iain Congdon, Director of Technical Consulting at Domo.
This growing AI pilot-to-production gap is not accidental. The consumerisation of AI has lowered the barrier to entry to almost zero: anyone with access to a large language model (LLM) can generate prototypes, build lightweight tools or automate fragments of workflows. But the ease of this experimentation has created a false sense of progress.
Leaders are often surrounded by plenty of activity, including demos, proofs of concept and sanctioned (or otherwise) internal tools, yet struggle to point to sustained, enterprise-wide value or measurable AI ROI.
In some cases, the problem is more structural than tactical:
“People build AI pilots because it’s shiny, with no intention of scaling… you just waste money just to be able to claim onto AI.”
This speaker’s observation reflects a broader issue behind AI project failure: AI is increasingly being treated as a signalling mechanism rather than an operational capability. It demonstrates innovation to boards, investors and markets, but lacks the operational grounding required to deliver results.
Without a clear path from pilot to production, participants argued that AI risks becoming performative rather than transformative, undermining long-term AI transformation efforts.
Strategy before technology: The discipline most organisations skip
A consistent thread throughout the discussion was the absence of a clear enterprise AI strategy at the outset. Too often, organisations begin with the tool rather than the problem:
“Often it’s not a technology issue, it’s a strategy issue… governance, data foundation, culture… all of these issues contribute to getting out of that pilot phase,” says Congdon.
This misalignment is compounded by a lack of clarity around success metrics, a critical blocker to demonstrating AI ROI. The speakers argued that many AI projects are launched without agreement on what constitutes value, how it will be measured or how it connects to broader organisational objectives.
“If you don’t agree on what success is and how to measure it, you don’t know whether it’s working.”
The implication for the C-suite is significant. AI cannot be delegated as a purely technical initiative. It requires the same rigour applied to any strategic investment: clear objectives, defined outcomes and accountability mechanisms. Without these, the speakers argued, even technically successful projects will fail to gain traction or justify further investment.
There was also a recognition around the table that organisations often overreach too early, jumping into ambitious, high-risk initiatives without building out their foundational capabilities. The result is a mismatch between aspiration and readiness that inevitably stalls progress and takes leaders back to square one.
Where AI is delivering real value
Amid the challenges, the discussion also surfaced several areas where AI is already generating measurable impact. What distinguishes these examples is not the sophistication of the technology, but rather the clarity of the problem being solved.
Operational efficiency emerged as a recurring theme during the discussion. In one case, AI agents were deployed to process unstructured claims data, replacing manual triage and summarisation tasks: “It looks at all the emails, summarises it and redirects it… that previously was manual.”
The speakers argued that this type of application succeeds because it targets a well-understood bottleneck with clear metrics: time saved, errors reduced, throughput increased. The value here is tangible and defensible, as the illustrative sketch below suggests.
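To make the shape of such a workflow concrete, the sketch below shows how an email-triage pipeline of this kind might be structured. It is purely illustrative and not drawn from the discussion: the `llm_summarise` and `llm_route` helpers, the queue names and the `Email` type are all hypothetical stand-ins for whichever governed model and systems an organisation actually uses.

```python
# Illustrative sketch of an AI email-triage pipeline (hypothetical names
# throughout). A real deployment would replace the stubbed llm_* helpers
# with calls to the organisation's governed model or provider.

from dataclasses import dataclass

@dataclass
class Email:
    sender: str
    subject: str
    body: str

def llm_summarise(email: Email) -> str:
    # Stub standing in for an LLM summarisation call.
    return f"{email.subject}: {email.body[:80]}..."

def llm_route(email: Email) -> str:
    # Stub standing in for an LLM classification call; a keyword
    # heuristic is used here purely so the sketch runs end to end.
    text = (email.subject + " " + email.body).lower()
    if "claim" in text:
        return "claims-queue"
    if "invoice" in text:
        return "finance-queue"
    return "general-queue"

def triage(inbox: list[Email]) -> list[dict]:
    """Summarise and route each email, replacing the manual triage step."""
    return [
        {"summary": llm_summarise(e), "queue": llm_route(e), "sender": e.sender}
        for e in inbox
    ]

if __name__ == "__main__":
    inbox = [
        Email("a@example.com", "New claim", "My car was damaged on Tuesday..."),
        Email("b@example.com", "Invoice query", "Invoice 1042 appears twice..."),
    ]
    for item in triage(inbox):
        print(item["queue"], "<-", item["summary"])
```

The point of the pattern, as the speakers described it, is that every metric that matters (emails triaged, time saved, misroutes) falls naturally out of a single, well-bounded step that was previously manual.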
Similarly, customer service use cases demonstrated strong returns, particularly where AI could handle high-volume, low-complexity interactions:
“There is 35 percent deflection of chats that were answered by the bot… that is hard value,” explains Cody Irwin, AI Adoption Director at Domo.
It is not all good news, however: even here, leaders cautioned against oversimplification. Metrics can be framed in ways that overstate impact, and without robust testing and AI risk management, it is difficult to isolate the true drivers of performance.
“There are two ways you can spin the results… you need to test what actually drives value.”
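As a rough illustration of that caution (not something presented at the table), a headline deflection figure only holds up if it is tested against a baseline. The hypothetical sketch below computes a deflection rate and then compares resolution rates with a holdout group that never saw the bot; all figures and names are invented.

```python
# Hypothetical sketch: a "35 percent deflection" claim is only hard value
# if resolution is also compared against a holdout. All figures invented.

def deflection_rate(bot_resolved: int, total_chats: int) -> float:
    """Share of chats fully handled by the bot, never reaching an agent."""
    return bot_resolved / total_chats

def lift_vs_holdout(bot_group: dict, holdout: dict) -> float:
    """Difference in resolution rate between bot-assisted and holdout chats.

    A clearly positive lift suggests the bot drives value; near-zero lift
    suggests the deflected chats would have resolved cheaply anyway.
    """
    bot_rate = bot_group["resolved"] / bot_group["chats"]
    holdout_rate = holdout["resolved"] / holdout["chats"]
    return bot_rate - holdout_rate

if __name__ == "__main__":
    print(f"Deflection: {deflection_rate(350, 1000):.0%}")  # 35%
    lift = lift_vs_holdout(
        {"chats": 1000, "resolved": 820},  # invented figures
        {"chats": 1000, "resolved": 790},
    )
    print(f"Resolution lift vs holdout: {lift:+.1%}")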
Culture: The hidden constraint on AI adoption
While strategy and data are critical, many participants argued that culture remains one of the biggest AI adoption challenges. Organisations are simultaneously facing resistance and overdependence: two opposing forces that are equally problematic.
On one side there is fear: “There’s definitely a lot of pushback… it’s seen as a threat… fear of job displacement,” says Congdon.
This fear is not irrational. For many employees, the speakers argued, AI represents not just a new tool but a potential challenge to their role, expertise and identity. Without clear communication and support, this can lead to disengagement or even passive resistance.
On the other hand, there is excessive reliance: “Even emails between teams are just full of AI slop… I then have to use AI to digest it.”
That overdependence introduces a different kind of risk. The speakers argued that instead of augmenting human capability, AI begins to replace it in ways that degrade quality and accountability. Professionals who once applied judgement and expertise may default to machine-generated outputs without sufficient scrutiny.
“People are just running stuff through AI… without any human validation.”
AI governance, risk, and the myth of outsourced accountability
Risk and AI governance emerged as central themes throughout the discussion, particularly in relation to third-party tools and platforms. Many organisations are relying on external vendors to accelerate AI adoption, but this has created a dangerous misconception.
“You may outsource the technology… but you never outsource the risk, and you never outsource the outcome.”
Even when using SaaS solutions, organisations remain accountable for AI risk management. This includes legal liability, reputational damage, and regulatory compliance. Several examples highlighted how quickly things can go wrong, from chatbots providing incorrect information to sensitive data being exposed through poorly governed systems.
This is where AI data governance becomes critical: without transparency and explainability, organisations cannot defend AI-driven decisions.
“If we need to stand in front of the court… will they open up to explain how they make decisions?”
The speakers argued that in highly regulated environments this is an acute concern. If organisations cannot explain how AI systems reach conclusions, they cannot defend them. This places governance at the centre of the AI agenda, not as a compliance exercise but as a prerequisite for trust.
The leadership challenge: Ownership and accountability
One question that remained unresolved throughout the discussion was ownership in AI leadership. As AI cuts across technology, data, legal and business functions, traditional organisational structures struggle to contain it.
“There is an open question around ownership… I don’t think that’s resolved.”
This ambiguity can lead to fragmentation, with different teams pursuing disconnected initiatives. It can also create accountability gaps, where no single leader is responsible for outcomes.
At the same time, the democratisation of technology is shifting responsibility outward:
“We now live in a world where technology is enabled and usable by everybody… you can’t just go back to tech or security.”
This represents a fundamental change. In the past, technical complexity concentrated power, and responsibility, within specialist teams. Today, accessibility distributes both. Business users can build and deploy AI-enabled tools, but they must also own the consequences.
“You have to understand input, process, and output… otherwise how can you stand behind it?”
Balancing curiosity and control: The new operating model for AI
As the discussion drew to a close, a more nuanced view of AI began to emerge: one that balances experimentation with discipline. Organisations must create space for curiosity and innovation, while maintaining the controls necessary to scale safely and effectively.
This balance can be understood through a simple but powerful framework:
- Define the problem before applying AI – avoid “solution-first” thinking
- Establish clear success metrics upfront – ensure value can be measured
- Invest in data foundations and governance – enable reliable outcomes
- Design for adoption, not just deployment – support people through change
- Maintain human oversight and validation – prevent overreliance and error
- Embed accountability across the organisation – align ownership with action
The future of work: Creativity at risk or AI opportunity unlocked?
Perhaps the most thought-provoking part of the conversation focused on the long-term implications of AI for human creativity and decision-making, a key dimension of AI transformation. There was both optimism and concern.
On one hand, AI is seen as an amplifier of human potential:
“It’s like a Michelin star chef being given the most sharp knife… you can do incredible things with it.”
On the other, there is a fear that convenience may come at the cost of capability:
“I’m worried about this quest for convenience… that people will stop thinking.”
This tension reflects a broader societal shift. As AI becomes more embedded in daily work, the risk is not just automation, but dependency. If individuals defer too much to machines, critical thinking and creativity may atrophy.
For organisations, this raises questions about talent development, education, and the future of expertise. The goal cannot simply be efficiency. It must also be the preservation, and enhancement, of human judgment.
Join the AI to Impact community
Join a community of data, digital, technology and marketing leaders focused on turning AI ambition into measurable business impact. The group brings together executives who are strengthening data foundations while experimenting with AI in practical, outcome-driven ways.