
When will AI start to pay off?
This was the central tension explored during a recent Food for Thought roundtable hosted by HotTopics and Box, which brought together senior leaders from finance, technology, the third sector and beyond. Over a three-course lunch, the conversation moved past surface-level AI hype to uncover a series of more complex truths about AI ambitions, integrations, and the value created when AI and human workforces combine.
Horizontal versus vertical use cases
The first major insight to emerge was the difference between horizontal and vertical AI use cases.
“Chatbots, copilots—they’re interesting,” noted one executive. “But they’re not moving the needle. You can’t prove ROI by shaving five minutes off a PowerPoint.”
Generative AI has extended the reach of traditional AI in three breakthrough areas: information synthesis, content generation, and communication in human language. However, anecdotal evidence of time saved across those three areas does not and will not excite the CFO or the board. Generative AI chatbots and copilots, usually deployed enterprise-wide, have helped teams produce reports, analyses, slides and decks more quickly. These are the horizontal AI tools. They can be, and have been, quick to adopt, but their value tends to be diffuse: often difficult to track or attribute directly to revenue or margin.
Vertical AI, by contrast, is laser-focused. It is designed for specific functions: legal contract workflows, metadata extraction, agentic customer service, risk intelligence. According to McKinsey, however, over 90 percent of these solutions are still in pilot mode. Herein lies the generative AI paradox: if businesses remain imbalanced across horizontal and vertical use cases, value will be hard to extract.
Samantha Wessels, President EMEA, Box, confirmed this. “We’re seeing that where organisations double down on vertical use cases, that’s where 55 percent more investment translates into measurable returns.”
Data debt and digital landfills: Costs of inaction
Unstructured data, such as PDFs, PowerPoints, emails, notes and recordings, is the rich, soft underbelly of the enterprise: vulnerable but ultimately very valuable.
“About 95 percent of enterprise content is unstructured,” Samantha shared, later in the conversation. “And most organisations have treated it like a digital landfill.”
This landfill analogy hit home for many around the table. Banks, for example, face not just sprawling data architectures but also the regulatory and cultural legacies of multiple acquisitions. UBS and Credit Suisse, for instance, are navigating complex integrations—and clashing data philosophies.
“The Swiss are conservative about data,” one executive with history at both institutions noted. “The American side, less so. You can imagine the tension then when overlaying AI strategies and MVPs.”
This is important to note. Data is not just a technology problem: it is an organisational identity problem. If leaders do not understand what data they have, who owns it, and how it moves, any AI strategy is just expensive theatre. Around the table, the message was clear: before you do AI, do data governance.
Of course, that is easier said than done. The same is true of cybersecurity, another piece of this complex, ever-changing puzzle. For many around the table, especially those in finance and NGOs, security remains the single biggest blocker to AI adoption.
“It’s not just where the data is stored,” one leader pointed out. “It’s where the metadata is processed.”
Global banks are increasingly navigating sovereign cloud requirements, with regulators demanding local data processing by jurisdiction. One participant recounted how UBS had to build an entire Azure Stack instance inside its own data centre, an expensive workaround purely to comply with Swiss data laws. Conversations on sovereign AI abound, too, raising legal and ethical challenges at a time when AI's value is already being disputed.
“It’s ironic,” another added. “Cloud was supposed to save us money. But for compliance? It’s just another cost centre.”
Complexity is growing, not shrinking. Each AI initiative must now thread a needle: Is it secure? Is it compliant? Is it explainable? Add geopolitical tensions and cross-border regulation into the mix, and extracting genuine business value from AI starts to look like a zero-sum game, an outcome few parties want.
Agentic AI and the democratisation of automation
Despite the risks and hurdles, the roundtable also revealed glimmers of what AI can achieve when done well—especially with the promises of agentic AI.
One leader from a multinational shared how their researchers (not engineers) were building their own agents to crawl decades of technical papers, identify trends, and accelerate research and development. It is worth drawing a distinction between generative and agentic AI here: the former is responsive but reactive; the latter is far more proactive, able to act across myriad platforms and decisions much as a human would. That autonomy gives new developments a necessary edge.
“These weren’t top-down mandates,” they said. “They were homegrown agents solving real problems on the ground.”
The reported beauty of agentic AI is that it blends human intuition with machine scalability. Agents can be designed not by senior management but by the frontlines, folding into the work more seamlessly, and we heard that some are starting to deliver results faster than traditional projects ever did. But that same accessibility is a double-edged sword.
“If you build agents to support broken processes,” one executive warned, “you’re just making bad faster.”
The lesson? Empower your teams to build with clear guardrails.
Strategic vs. tactical AI
At this point in the conversation, a crucial distinction emerged, which can be tabulated as follows:

| Tactical AI | Strategic AI |
| --- | --- |
| Task-oriented | Business model-oriented |
| Automates existing processes | Reimagines or eliminates them |
| Low investment, quick win | Higher investment, deeper return |
| Example: co-writing PowerPoints | Example: full agent-led onboarding flows |
“It’s not about making a bad process faster,” one panellist quipped. “It’s about [using AI to] question whether the process should exist at all.”
A great example came from Aviva, which was clear from the outset that each new AI tool had to prove its worth. Using a "double helix" approach, its claims journey now switches tracks between digital and human interaction, always optimising for both business and customer outcomes. In cases involving personal injury, for example, the journey defaults to human interaction. This approach builds trust for both employee and customer, and saves time.
“We should use AI to create time,” someone said. “Time for humans to think, explore, imagine—the things only we can do.”
That idea—AI not as a replacement, but as a catalyst for better human work—became a recurring theme. Perhaps the most inspiring insight came from the NGO world, where constraints breed creativity.
Save the Children used AI not to optimise internal workflows but to win more funding, faster. By analysing past bid data, identifying patterns in successful proposals, and applying generative models, they increased their grant win rates. Value has a different flavour in this sector, which makes success all the sweeter.
“For us, ROI wasn’t revenue,” he said. “It was lives saved.”
When funding allowed, they planned to use predictive AI to combine earthquake and flood data with internal logistics, allowing them to act faster in emergencies: a literal life-saving application. The executive also dispelled the notion that third-sector institutions have less capital to play with than for-profits. Thanks to sector-wide maturity in hiring strategies and slicker bid processes, many charities have AI budgets comparable to those of enterprises; sometimes larger.
The point was clear: the strategic case for AI must always tie back to impact, whether that is margin, efficiency, human outcomes, or all three.
Closing thoughts
The theatre of transformation is now exposed under harsh lighting: executives are under immense pressure to derive business value from AI. Leaders want to benefit from change; at the same time, teams often feel the discomfort of letting go of tested practices, or of working alongside tools that could one day displace them.
“It’s like wanting to be fit without giving up cake,” someone joked. “You can’t have both.”
A redefinition of how your business creates value is in order: leaders who succeed with AI are treating it like a business model shift rather than another software update. Key questions emerged from this roundtable that are worth considering:
- Are you investing in, and giving momentum to, vertical use cases in balance with horizontal usage?
- Are your agentic AI solutions identifying and solving real problems, or simply being bolted onto broken workflows?
- Has your unstructured data been put to good use, so that you can innovate with clarity and confidence?
When it comes to the cycles of innovation, one quote, often attributed to Mark Twain, hits home: "History doesn't repeat, but it rhymes." AI may feel new, but when it comes to implementing change at both wide and deep scale, we have been here before: with the internet, with cloud, with CRM. The difference now? The scale, speed, and stakes.
Genuine business value will flow not to those who invest the most, but to those who invest most strategically.
Infinite Intelligence
To stay up to date with how tech leaders are navigating the opportunities and challenges of AI adoption, join the Infinite Intelligence community, a peer-to-peer network championing best practices and ethical usage.