Rewiring enterprises for agentic AI innovation

Preparing for agentic AI innovation

 

As organisations move closer to agentic AI systems that proactively act, support, build, and learn, technology and data leaders are asking a thorny question: How do we prepare our people and structures for intelligent autonomy?

 

That was the focus of a recent Food for Thought roundtable hosted by Genpact and HotTopics, where senior technology leaders from across industries shared how agentic AI is poised to reshape decision-making, operations, and leadership. What emerged was a picture of ambition colliding with real-world nuance and complexity. Palpable in the room was an urgency to get ahead of future risks and an excitement at the potential to transform and grow. The tension between those feelings is best resolved by the steps leaders take to prioritise trust, governance, and the role of humans in the agentic loop.

 

The roundtable set out to critique those steps. Below is a summary of that debate.

 

1. Trust is the entry point, not a given

 

One executive offered a framing that resonated widely:

“I treat AI like a new graduate joining the team. Would I let them make company-wide decisions? Not yet. They have to earn that right.”

 

Leaders likened current AI models to eager interns: fast, creative, and capable of learning, but still prone to mistakes and occasional "hallucinations." Trust, they agreed, is the critical hurdle, and it is not just technical: most leaders still cannot trust their data sources to the extent that they would trust decisions made from them, especially by an augmented or agentic intelligence.

 

The challenge is part infrastructure, part psychology. Unlike humans, AI does not yet pause for circumstantial or contextual developments. That absence of hesitation, so often overlooked as a critical reasoning skill in humans, must be accounted for in future agentic AI solutions. It also demands far closer alignment between the CIO, Chief Data Officer, HR, and whatever new roles emerge as organisations reorganise around agentic AI.

 

2. Architecture must be rebuilt rather than retrofitted

 

If trust is the human challenge, architecture is the operational one. The group discussed what it really takes to re-architect an enterprise for intelligent agents.

 

“You can’t bolt AI on top of bad plumbing,” warned one CIO. “If you do, you just make your inefficiencies faster.”

 

Standardisation, interoperability, and data readiness emerged as foundational. Some likened the future to a "factory model" for AI: modular, scalable, and outcome-driven. But others warned against chasing efficiency at the expense of value. Optimising for speed risks automating the wrong things; leaders need to be mindful that the north star is not faster, but better. For themselves and their peers, they had a message: intelligent architecture must prioritise outcomes, not just throughput.

 

3. Governance cannot be an afterthought

 

Innovation is outpacing oversight. Leaders voiced concern that governance is lagging behind the rapid deployment of AI agents.

“Someone plugs an AI into a process, [an] audit walks in and says, ‘Show me the policy,’ and we can’t,” one participant admitted.

 

To close the gap, some organisations have set up cross-functional AI committees or created a Chief AI Officer role to coordinate across ethics, risk, and transformation. Others, however, insisted that accountability must be collective, not siloed. Organisations have seen this pattern before, most notably with data and security. Still, sector, region, and organisational size are three factors that will determine which governance model leaders adopt. A fourth is risk appetite: ensuring the C-suite is aligned on what risks the organisation can legally and morally explore will smooth the journey, the room heard.

 

For Genpact, governance is not a reactive safety net but a proactive enabler. Its frameworks are built from the ground up to provide clarity, accountability, and momentum. This is how organisations move with confidence rather than caution, unlocking AI's potential without compromising on responsibility.

 

In all, governing AI means building clarity into experimentation and setting real guardrails for autonomy.

 

4. Partner with AI, do not compete with it

 

The people factor is often the most overlooked. Training, reskilling, and culture change are essential to making agentic AI work, and that work is an executive responsibility.

 

Some organisations are running internal AI hackathons or mandating AI fluency training. Others are still catching up. One concern, voiced by an executive, is that the industry is "...treating AI like a hobby. Meanwhile, our people are using tools like ChatGPT or Gemini under the radar. That's a risk in itself."

 

Culture, they noted, starts at the top. When the CEO experiments with AI publicly, it signals that everyone has permission to learn, to test, and to retrain themselves. Put bluntly, a workforce that understands AI is less likely to fear it, and such a workforce is essential for scaling AI safely and effectively.

 

5. Responsible autonomy is not optional

 

As AI systems begin to act independently, the conversation turned to what responsible autonomy really means. Some see it as an evolution of software engineering principles:

 

“It’s still software,” said one CTO. “We just need to embed our frameworks—DevOps, assurance, ethics—into this new context.”

 

Others argued that we are entering uncharted territory with no gold standard for governance: everyone is experimenting, yet there are still no frameworks for measuring success, or for sharing it. One consensus did emerge, though: autonomy without alignment is dangerous, and AI must be anchored to clear business missions. As one participant put it:

 

“If we’re doing AI for AI’s sake, it’s just vanity. But if we tie it to strategy, it becomes transformative.”

 

Closing thoughts: Leadership at the edge of autonomy

 

Agentic AI represents yet another new era of digital transformation. The difference is the autonomy with which this transformation will take shape. Never before has society, let alone business, had to grapple with a future shaped directly by an entity not of our species. The implications reach across assumptions about trust, architecture, governance, talent, and, critically, leadership. In preparing their organisations for agentic AI, C-suite leaders are in effect preparing the whole world for a new class of intelligence.

 

Genpact's philosophy pairs bold innovation with responsible risk management. Smart, forward-thinking experimentation is vital, but so is an active approach to managing risk, ensuring that breakthroughs are sustainable and aligned with real business needs.

 

“Agentic AI will be like that new graduate,” one data leader concluded. “It will learn, it will err, and eventually, it will excel. Our job is to guide it wisely.”

 

This Food for Thought was made in partnership with Genpact.

 

 
