How to secure your new agentic AI workforce

Securing the agentic AI workforce

 

Insights from a recent Food for Thought discussion, in partnership with Ping Identity, reveal both the promise and perils of AI adoption.

 

Artificial intelligence has the potential to reshape industries, economies and even social contracts. Yet beneath that enthusiasm lies unease: how do organisations distinguish between LLMs, agents, and the pre-existing automation tools already powering their business? Who is accountable when things go wrong? And what governance models are needed to ensure AI enhances, rather than undermines, trust with employees and customers alike?

 

These questions were at the centre of a recent HotTopics Food for Thought roundtable discussion, in partnership with Ping Identity. Held under the Chatham House rule, senior technology leaders from multiple sectors gathered to explore the practical realities of AI adoption, with a particular emphasis on how they enable, and protect, agents acting on their behalf.

 

Thank you to the following attendees for joining us and sharing their thought leadership:

 

  • Wendy Redshaw, CDIO, NatWest Group
  • Ashay Koparde, Vice President - Solutions Architect, Barclays
  • Srimanth Rudraraju, Group Director of Engineering, LSEG
  • Natasha Davydova, CIO, AXA
  • Maria Stefanova, Head of PMO Artificial Intelligence, IAG
  • Sharadha Khanna, Global Retail Originations and Credit Bureau Strategy, HSBC
  • Fabiana Fernandes, Senior Director, Technology Optimisation, Elsevier
  • Stuart White, Director of Software Engineering, Elsevier
  • Ian Mulvany, CTO, BMJ
  • Chris Lord, Group CDO, Babcock International

 

Securing the agentic AI workforce: Overview

 

  1. Defining agents, LLMs, and automation
  2. Governance and responsible AI
  3. Accountability and liability
  4. Identity, access, and security
  5. Data: The foundation and the fault line
  6. Positive AI use cases
  7. Risks: Malice vs. mistakes
  8. The road ahead

 

Agentic AI workforce - Food for Thought

 

Defining agents, LLMs, and automation

 

To start with, participants wrestled with terminology. Generative AI, they agreed, is the broad umbrella under which models like OpenAI’s GPT or Anthropic’s Claude sit. But what differentiates an LLM from an “agent”?

 

One participant proposed a useful simplification:

 

  • LLMs respond to human prompts, remaining in the loop of user control.
  • Agents embed LLMs within workflows, allowing them to act autonomously, receive instructions from systems (rather than humans), and make ongoing decisions.

 

A comparison with robotic process automation (RPA) sharpened the distinction. RPA is deterministic, executing rules exactly as written by human programmers. Agents, by contrast, are non-deterministic, capable of improvising pathways to achieve their goals.
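
To make that distinction concrete, here is a minimal sketch. The `call_llm` helper and the invoice-handling functions below are hypothetical stand-ins, not any vendor’s API: the RPA-style function always follows the same hand-written path, while the agent loop lets the model choose its next tool call, so its route to the goal is not predetermined.

```python
# Illustrative sketch only: contrasts a deterministic RPA-style rule with an
# agentic loop. `call_llm` and the tool functions are hypothetical stand-ins.

def rpa_process_invoice(invoice: dict) -> str:
    # Deterministic: the same input always follows the same hand-written path.
    if invoice["amount"] > 10_000:
        return "route_to_manager"
    return "auto_approve"

def call_llm(prompt: str) -> str:
    """Placeholder for a call to a hosted model (OpenAI, Anthropic, etc.)."""
    raise NotImplementedError

def agent_process_invoice(invoice: dict, tools: dict, max_steps: int = 5) -> str:
    # Non-deterministic: the model chooses which tool to call next, so two runs
    # on the same invoice may take different paths to the goal.
    history = [f"Goal: settle this invoice correctly. Invoice: {invoice}"]
    for _ in range(max_steps):
        decision = call_llm("\n".join(history) + f"\nAvailable tools: {list(tools)}")
        if decision.startswith("FINISH"):
            return decision
        tool_name, _, arg = decision.partition(":")
        result = tools[tool_name](arg)   # the agent acts without a human in the loop
        history.append(f"{decision} -> {result}")
    return "escalate_to_human"           # guardrail: bounded autonomy
```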

 

This raises identity and access management concerns, not least because agents, like humans, often seek the fastest and most efficient way to complete a task. Several leaders noted reports of agents “bribing” one another or inventing novel strategies to reach their goals.

 

“Think of it as very smart RPA,” said one attendee, “but like a teenage child: capable, unpredictable, and not always obedient.”

 

Governance and responsible AI

 

The risks of autonomy quickly brought governance into focus. Without strict controls, agents can be manipulated through “prompt injection” attacks, where malicious instructions are hidden inside otherwise benign documents. British programmer Simon Willison’s notion of the “lethal trifecta” (untrusted inputs, the ability to take actions, and access to sensitive data) was cited as a live concern.
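
One way to operationalise that trifecta is as a simple policy gate that refuses to let any single agent step combine all three properties at once. The sketch below is an illustrative assumption of how such a check might look, not a description of any specific guardrail product; the class and field names are invented for the example.

```python
# Minimal sketch of a "lethal trifecta" policy gate, after Simon Willison's framing:
# an agent step that combines untrusted input, sensitive-data access, and the
# ability to take actions is refused. Class and field names are illustrative.

from dataclasses import dataclass

@dataclass
class AgentStep:
    reads_untrusted_input: bool    # e.g. an inbound email or a scraped web page
    touches_sensitive_data: bool   # e.g. customer records, credentials
    can_take_actions: bool         # e.g. send email, call an API, move money

def is_step_allowed(step: AgentStep) -> bool:
    """Block any single step that holds all three trifecta properties at once."""
    trifecta = (step.reads_untrusted_input
                and step.touches_sensitive_data
                and step.can_take_actions)
    return not trifecta

# Summarising an external document with read-only access is fine...
assert is_step_allowed(AgentStep(True, False, False))
# ...but letting that same document drive an action against sensitive data is not.
assert not is_step_allowed(AgentStep(True, True, True))
```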

 

Participants agreed that human oversight remains essential. “You wouldn’t let even the brightest graduate run without checks,” one leader said. “The same principle applies here.”

 

But oversight cannot be ad hoc. In heavily regulated industries like finance, new use cases already pass through layers of review, with explicit bias checks and multiple sign-offs. Leaders admitted that governance frameworks often lag behind the technology’s speed and accessibility. Anyone in a business unit can now deploy an LLM tool, bypassing traditional approval routes.

 

One participant captured the tension: “This isn’t unique to AI. But what’s different this time is the accessibility. It looks magical, which makes governance, and true business ownership of data, more important than ever.”

 

Accountability and liability

 

The focus shifted to accountability. Who is responsible when AI systems fail, or cause harm?

 

Participants drew parallels to the early days of cloud computing. Companies may outsource services, but customers hold their contract counterparties accountable. The same principle, they argued, should apply to AI: responsibility cannot be outsourced, even if models or infrastructure are.

 

In practice, accountability is murky. Many organisations designate “data owners” or “data stewards”, but those titles often mask diffuse responsibility.

 

“I see them more as custodians,” one technology leader said. “Ultimately, if something goes wrong, it’s still my fault.”

 

That ambiguity is magnified by synthetic data, now widely used for testing AI models. Who owns data that is artificially generated? And who is liable if synthetic datasets introduce bias or flaws that cascade into business decisions?

 

The conversation returned repeatedly to the core question: when a model fails (whether through error, bias or breach) who gets fired, or even prosecuted? Until organisations resolve this question, accountability will remain a gap in AI governance.

 

Identity, access, and security

 

Agents as non-human identities

 

Senior cybersecurity leaders in the group emphasised the novel identity challenges posed by agents. Unlike the rule-based bots of the past, today’s agents can spawn sub-agents, escalate privileges, and persist across networks. This demands new approaches to identity and access management (IAM).

 

Emerging practices include treating agents as non-human identities, registered in directories and tied back to individual owners. Managed agents (those deployed by the organisation) can be tracked this way. But unmanaged agents, deployed by consumers or employees outside IT oversight, present a new category of shadow AI, echoing the “shadow IT” of a decade ago.
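
What “treating agents as non-human identities” might look like in practice is sketched below. The directory structure, field names and scope strings are assumptions for illustration, not a description of any particular IAM product: the point is that every managed agent is tied to an accountable human owner, carries least-privilege scopes, and holds short-lived credentials, while anything absent from the directory is treated as shadow AI and denied.

```python
# Illustrative sketch of registering agents as non-human identities: each agent
# gets a directory entry tied to a human owner, a scope list, and an expiry.
# Field names are assumptions only, not a specific vendor's schema.

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class AgentIdentity:
    agent_id: str
    owner: str              # the accountable human, never blank
    scopes: list[str]       # least-privilege permissions, e.g. ["read:rfp_archive"]
    expires_at: datetime    # credentials are short-lived and re-issued

class AgentDirectory:
    def __init__(self) -> None:
        self._entries: dict[str, AgentIdentity] = {}

    def register(self, agent_id: str, owner: str, scopes: list[str],
                 ttl_hours: int = 24) -> AgentIdentity:
        identity = AgentIdentity(agent_id, owner, scopes,
                                 datetime.utcnow() + timedelta(hours=ttl_hours))
        self._entries[agent_id] = identity
        return identity

    def is_authorised(self, agent_id: str, scope: str) -> bool:
        entry = self._entries.get(agent_id)
        if entry is None:   # unknown agent -> shadow AI, deny by default
            return False
        return scope in entry.scopes and datetime.utcnow() < entry.expires_at
```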

 

Good bots, bad bots

 

Years of cybersecurity investment have focused on blocking bots, or “bad traffic”. Now, businesses need to permit “good agents” while defending against malicious ones. This “good bot/bad bot” problem is particularly serious for sectors like publishing, where open-access scholarly content is being scraped at scale to train foundation models, often by actors outside regulatory reach.

 

One participant described ongoing distributed denial-of-service (DDoS)–like traffic aimed at research platforms, with much of the scraping originating from abroad.

 

“It’s become an acute issue,” they said. “The infrastructure is creaking, and we don’t yet have a way through.”

 

Data: The foundation and the fault line

 

If one theme dominated the discussion, it was data. Without clean, harmonised, and well-owned data, AI simply fails.

 

A cautionary tale came from pharma: one participant recalled that Sanofi reportedly lost $22 million building an AI model on poor-quality, fragmented post-merger data. “Garbage in, garbage out,” another participant summarised.

 

Yet responsibility for data often remains split: business teams control data sources, while technology teams are accountable for outcomes. The group agreed that true progress requires pairing business and technology leaders in co-ownership, rather than leaving responsibility siloed.

 

Positive AI use cases

 

Although much of the discussion focused on risks, participants also shared concrete examples where AI is already delivering significant value.

 

  • RFP automation: One company used agents to process security questionnaires containing thousands of questions. By training agents on past responses, they achieved first-draft outputs that cut the equivalent of a full-time week of labour per RFP. Confidence scores and human review loops ensured reliability, with response accuracy already improving through feedback.

  • Banking strategy: A financial services team built an AI model to analyse assets and recommend optimal savings rates following a Bank of England interest rate hike. The AI solution produced recommendations in two weeks that traditionally required months of manual analysis, leading to billions in cost savings and customer gains. Subsequent human analysis confirmed the AI’s recommendation to within 0.1 percentage points, demonstrating near-identical accuracy.

 

  • Identity and access management: The Ping Identity Platform is built to handle non-human identities such as agents, assigning each a digital identity so it can be authenticated, authorised, and governed as rigorously as a human user. By securing agent-to-human and agent-to-agent interactions, the platform enables organisations to harness AI productivity gains while maintaining trust, compliance and data protection.

 

These examples highlight the sweet spot: AI applied to well-defined, proprietary data where accuracy can be validated and business impact is measurable.
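
The confidence-score-plus-human-review pattern behind the RFP example generalises well. A minimal sketch follows; `draft_answer`, the threshold value and the return structure are illustrative assumptions rather than the team’s actual implementation.

```python
# Sketch of the confidence-score-plus-human-review pattern described in the RFP
# example. `draft_answer` stands in for a call to an agent trained on past
# responses; the threshold and function names are illustrative assumptions.

def draft_answer(question: str) -> tuple[str, float]:
    """Return (draft_text, confidence in [0, 1]) from the drafting agent."""
    raise NotImplementedError  # placeholder for the real model call

def answer_rfp(questions: list[str], confidence_threshold: float = 0.85) -> dict:
    auto_filled, needs_review = {}, {}
    for q in questions:
        draft, confidence = draft_answer(q)
        if confidence >= confidence_threshold:
            auto_filled[q] = draft      # accepted as a first draft
        else:
            needs_review[q] = draft     # routed to a human reviewer
    # Humans review the low-confidence answers; their corrections feed back
    # into the agent's examples, improving accuracy over time.
    return {"auto_filled": auto_filled, "needs_review": needs_review}
```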

 

Risks: Malice vs. mistakes

 

When asked whether the greatest risks stem from malicious actors or unintended behaviour, most participants leaned toward the latter.

 

“It’s mistakes,” one leader said. “Humans make mistakes, AI makes mistakes. But at scale, a single AI error can be catastrophic.”

 

Examples abounded: vending machines tricked into buying expensive tungsten, or fast-food AI systems mishandling bulk orders. Such failures may sound trivial, but at enterprise scale they translate into financial losses, reputational damage, or regulatory penalties.

 

Still, malicious use cannot be dismissed. AI lowers the barrier to entry for hacking, enabling less-skilled actors to attempt sophisticated exploits. As one participant warned, “It’s democratising hacking in the same way it democratised coding.”

 

The road ahead

 

Looking forward, the roundtable participants emphasised three imperatives:

 

  1. Security by design: AI systems must embed security and governance from inception, not as an afterthought.
  2. Accountability frameworks: Clear lines of liability are needed, modelled on existing regulatory precedents in finance and healthcare, and on the lessons learnt from cloud adoption.
  3. Patience and pragmatism: Organisations should begin with use cases grounded in proprietary data, where outcomes are deterministic and ROI is clear, before extending into ambiguous or creative applications.

 

Despite the cautionary tone, the group remained broadly optimistic. AI, they agreed, has the potential to drive productivity, accelerate research, and unlock innovation, if harnessed responsibly. As one participant concluded:

 

“We need to be more patient and deliberate. Align use cases with strategy. Design for security. And focus on the specific, proprietary data we can trust. That’s where AI can already deliver real ROI.”

 

Conclusion

 

Like cloud, AI raises questions of outsourcing, accountability and governance. Like RPA before it, AI promises efficiency but carries risks of brittle execution. But unlike past technologies, AI’s accessibility and autonomy make it both powerful and unpredictable.

 

The challenge for organisations is not simply deploying tools, but building the frameworks (technical, legal, and cultural) that ensure AI systems operate safely, accountably, and in line with business goals.

 

As this group of leaders demonstrated, the road ahead will be as much about governance as innovation. And in the age of agents, accountability may prove the defining question.

 

Arrange an Identity for AI workshop with Ping to explore how your organisation can govern, secure, and scale its agentic workforce.
