Food for Thought: How autonomous AI is transforming decision making, execution, and enterprise value creation
An evening of future-defining ideas in a venue that defined musical history. Set at The Hit Factory, New York, the legendary studio where icons like Bruce Springsteen, Beyoncé, Mariah Carey, and Stevie Wonder recorded era-defining albums, this Food for Thought event explored how the next great creative revolution is unfolding not in music, but in business.
HotTopics and Tredence, in partnership with Google Cloud, brought together business leaders from financial services, telecommunications, healthcare, advertising, and technology to assess how enterprises are preparing their teams, systems, and workflows for agentic AI. Across the group, the mood was palpable: industries have moved past theoretical interest and are readying their teams for dramatic change. The question is no longer whether agentic AI will reshape operations, but how quickly organisations can adopt it safely and at scale. This was the central tension of this Food for Thought debate.
Most organisations are now shifting from experimentation to deliberate investment. Leaders described AI as a strategic capability embedded into operating models rather than a set of isolated pilots. Yet the shift is uneven as persistent constraints remain. Fragmented data estates, unclear governance models, regulatory pressure, and a workforce that is still learning how to work with autonomous systems represent the most commonly noted hurdles to AI ROI for 2026 and beyond.
That last hurdle is of particular note. Agentic AI is framed not as automation, but as a system of digital actors able to collaborate with humans, adapt to context, and perform multi-step tasks. This raises questions around oversight, accountability, process design, and cultural readiness. Despite the varied sectors represented, a common view emerged: the success of agentic AI will depend less on new model breakthroughs and more on data maturity, governance discipline, and human adoption.
Enterprise readiness and use cases
As the discussion deepened, leaders detailed how they are moving from proofs of concept to operational deployments. Several firms have built internal environments that allow teams to experiment while retaining control. JPMorgan Chase shared details of its internal platform, which supports both experimental and production workloads. BNY Mellon described its supervised “digital employees” performing defined tasks across operations, and credited its head start to a particularly forward-thinking CEO.
A consensus formed around a simple point: AI initiatives now must tie directly to business outcomes. Many early pilots were driven by enthusiasm rather than value. The current shift is towards platform-first thinking: common frameworks for agent onboarding, monitoring, access control, auditability, and retirement. These frameworks allow organisations to scale without reinventing governance for every new use case.
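The platform-first framing above can be made concrete with a small sketch. The code below is a hypothetical illustration of an agent lifecycle registry covering onboarding, access control, auditability, and retirement; the class and field names (`AgentRegistry`, `AgentRecord`, and the status values) are invented for this example and do not describe any participant's actual framework.

```python
# Hypothetical sketch of a platform-first agent lifecycle registry:
# onboarding, access control, auditability, and retirement in one place.
from dataclasses import dataclass, field
from datetime import datetime, timezone


@dataclass
class AgentRecord:
    name: str
    allowed_actions: set            # access control: an explicit allow-list
    status: str = "onboarded"       # lifecycle: onboarded -> retired
    audit_log: list = field(default_factory=list)


class AgentRegistry:
    def __init__(self):
        self._agents = {}

    def onboard(self, name: str, allowed_actions: set) -> AgentRecord:
        record = AgentRecord(name, allowed_actions)
        self._agents[name] = record
        return record

    def authorize(self, name: str, action: str) -> bool:
        """Auditability: every request is logged, whether allowed or denied."""
        record = self._agents[name]
        allowed = record.status != "retired" and action in record.allowed_actions
        stamp = datetime.now(timezone.utc).isoformat()
        record.audit_log.append(f"{stamp} {action} -> {'ALLOW' if allowed else 'DENY'}")
        return allowed

    def retire(self, name: str) -> None:
        """Retirement: the agent remains on record but can take no further action."""
        self._agents[name].status = "retired"
```

In this shape, governance is defined once at the platform layer, so each new use case only supplies its name and allow-list rather than reinventing its own controls.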
Use cases are emerging across sectors.
Financial institutions are exploring agentic systems for claims processing, surveillance of customer interactions, operational risk assessment, and disclosure management. Advertising and media firms are deploying agents to generate audiences, optimise campaigns, and create content variations. Telecoms and healthcare are using AI for network analytics, fraud detection, diagnostics support, and workflow optimisation.
Despite the range, the group agreed on a central principle: agents augment human decision-making. They operate best when embedded in transparent workflows with clear hand-offs and human review. No organisation is seeking end-to-end automation; instead, they want targeted autonomy with defined guardrails.
That segued into a necessary discussion on human adoption. Several leaders noted that technical capability often outpaces cultural readiness; when agents perform tasks reliably, teams counter-intuitively hesitate to rely on them. Training, trust-building, workflow clarity, and co-design were identified as essential ingredients, and the rediscovery of organisational change management was mentioned often.
The debate on model strategy also surfaced. While large models dominate headlines, several participants argued that the future lies in small, domain-specific models tuned to proprietary data. They are easier to govern, cheaper to operate, more aligned with internal semantics, and less likely to behave unpredictably. This approach resonated strongly with regulated institutions.
Data, knowledge, and process integration
The session repeatedly returned to the idea that successful ‘agentification’ begins with a strong data foundation. Many organisations have built data lakes, but lack the semantic coherence needed for agents to interpret and act confidently. Several leaders warned that pilot success is often misleading because pilots operate on curated data that does not reflect real operational complexity.
Tredence presented a model describing three layers: data (facts), knowledge (context), and process (action). Participants agreed that agentic AI requires all three to work in concert. Organisations are therefore investing in knowledge graphs and ontologies to provide structure and meaning to data. These frameworks allow agents to understand relationships, dependencies, and business rules, enabling more accountable decision-making.
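The three layers can be sketched in a few lines of code. The example below is a minimal, invented illustration, not Tredence's implementation: the facts, the toy knowledge graph, and the auto-pay rule are all hypothetical.

```python
# Hypothetical sketch of the three-layer model: data (facts),
# knowledge (context from a tiny graph), and process (a gated action).

# Data layer: raw facts with no inherent meaning.
facts = {"invoice_42": {"amount": 12_000, "supplier": "acme"}}

# Knowledge layer: a tiny knowledge graph encoding relationships
# and business rules as (subject, relation) -> object triples.
graph = {
    ("acme", "is_a"): "approved_supplier",
    ("approved_supplier", "auto_pay_limit"): 10_000,
}


def context_for(supplier: str) -> dict:
    """Resolve what the graph knows about a supplier."""
    category = graph.get((supplier, "is_a"))
    limit = graph.get((category, "auto_pay_limit"), 0)
    return {"category": category, "auto_pay_limit": limit}


# Process layer: the agent combines facts with knowledge to act.
def decide(invoice_id: str) -> str:
    fact = facts[invoice_id]
    ctx = context_for(fact["supplier"])
    if ctx["category"] and fact["amount"] <= ctx["auto_pay_limit"]:
        return "auto_pay"
    return "route_to_human"  # an accountable hand-off, not silent autonomy
```

The point of the layering is visible even at this scale: the fact alone (a £12,000 invoice) says nothing about what to do; only the knowledge layer turns it into a decision the agent can account for.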
As firms move from insight to action, the distance between analytics and operations is collapsing.
Historically, analytics teams produced reports. In the agentic model, insights trigger automated workflows. This shift heightens the need for safeguards; several participants warned that poorly validated decisions can embed systemic errors at scale.
And that collective warning precipitated a shared insight: AI is fundamentally a process problem. Most failures arise not from poor models, but from unclear workflows and inconsistent processes. Organisations must re-engineer the processes they want agents to support. Without this discipline, automation amplifies chaos.
Trust, security, and regulation
Given the number of participants from regulated sectors, security and compliance received significant attention. Leaders were clear: agentic AI introduces new risks that traditional security models do not cover. These include data leakage through prompts, context manipulation that bypasses safeguards, and agents that act outside intended scope.
Several firms described a shift from perimeter security to behavioural governance. Agents must not only be restricted in what they can access, but guided in how they reason, what they ask, and which actions they may take. Banks have developed human-in-the-loop models where business owners monitor agent performance and review high-impact actions. Random audits and exception reports act as early warning signals.
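A minimal sketch of that routing pattern follows. The action names, threshold, and sampling rate are assumptions for illustration, not any bank's actual policy.

```python
# Hypothetical sketch of human-in-the-loop routing: high-impact actions
# queue for a business owner's review, while a random sample of routine
# actions is flagged for audit as an early-warning signal.
import random

HIGH_IMPACT = {"move_funds", "close_account"}   # assumed high-impact actions
AUDIT_SAMPLE_RATE = 0.05                        # assumed random-audit rate


def route(action: str, rng: random.Random) -> str:
    if action in HIGH_IMPACT:
        return "human_review"       # a business owner signs off before execution
    if rng.random() < AUDIT_SAMPLE_RATE:
        return "execute_and_audit"  # sampled into the exception report
    return "execute"
```

Passing in the random generator explicitly keeps the audit sampling reproducible for testing, which matters in regulated settings where the control itself must be demonstrable.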
Transparency was then highlighted as a strategic asset: an accurate model that cannot explain its reasoning undermines trust, meaning interpretability and traceability are becoming table stakes.
Ethical and societal considerations were also raised. Leaders acknowledged that expectations of AI are shaped by generational behaviour. Young people already engage with digital systems as natural interfaces. Future users may expect constant agentic support. Organisations must therefore invest in education, responsible use guidelines, and cultural norms.
From all this a consistent view emerged: trust is becoming a competitive differentiator. Firms that adopt transparent and ethical AI practices early will earn credibility with regulators, employees, and customers.
A strategic outlook for the C-suite
To consider industry trajectory in the face of AI is to court obsolescence. The market is moving at such a pace that many industry analysts are shortening their windows of impact and widening their ranges of potential outcomes. That said, it is important to consider the near future, particularly when so much is at stake.
For Tredence’s part, it frames agentic AI as the next step in the evolution from descriptive analytics to autonomous systems, while cautioning that no sector will advance at the same pace. Participants agreed. Banking and healthcare will move carefully due to regulatory demands; advertising and telecoms will iterate faster. Yet the long-term direction is shared: data becomes the operating substrate, and agents become integral to day-to-day operations.
Several leaders predicted a shift from monolithic models to federated ecosystems made up of specialised agents and small models connected through secure APIs. This architecture will require strong orchestration, monitoring, and lifecycle management. Cloud providers will play a central role by offering scalable platforms, security frameworks, and model governance tools.
For Google Cloud specifically, participants highlighted advantages in data infrastructure, model tooling, and openness to co-innovation. They also emphasised that enterprises need partners who understand regulatory constraints and can help operationalise safe AI adoption. The demand is for secure, compliant, outcome-focused platforms, not abstract model capability.
Participants closed with a pragmatic view: AI hype will moderate, but the transformation will continue. Organisations that invest early in governance, process clarity, data maturity, and human adoption will scale agentic AI faster and more safely. As one executive noted, “The question isn’t your AI strategy. The question is which business process you’re transforming, and what data gives you the confidence to do it.”