Proactive agents offer digital power as well as threats
What's covered in this article:
- How digital leaders select the best security approach to agentic AI
- Agentic AI security implementations
- Cost concerns as agentic AI usage increases
- Dealing with AI inaccuracy
To secure the use of agentic artificial intelligence (AI) in the enterprise, digital leaders are applying a business lens to define the security approach for the AI tools and agents their organisations use, as well as to develop policy and manage AI inaccuracies.
Digital leaders from national and international banks, insurance providers, and financial regulators, along with zero-trust security technology provider Zscaler, discussed securing AI agents during a workshop at The Studio for Technology Leaders 2025, an invite-only C-suite event.
AI agents are not the only security concern facing the financial services sector. Technologies like AI agents require authentication methods, just as an organisation's people and business processes do. Attendees noted that this need was underscored by major cybersecurity incidents in 2025 at retailers Marks & Spencer and The Co-operative Group, as well as luxury car maker Jaguar Land Rover. They shared how tabletop exercises demonstrated the sophistication of deepfake methods combining AI and social engineering, as well as potential countermeasures such as enhanced awareness training.
Although there is a high level of attention on the risks AI poses to financial institutions, traditional advertising and QR codes, possibly generated by AI, can also be used to devastating effect. In one recent scam in Portugal, fraudsters impersonated a Norwegian bank, using physical advertising and a QR code to trick people into handing over personal information. The campaign even fooled an advertising agency.
The power and usability of AI both empower criminals and open the door to mistakes by staff, according to the CIO of an insurance firm, who added: “AI has been democratised so quickly and at relatively low cost.”
Internally, this creates a major risk for digital leaders, as the insurance CIO highlighted. His organisation has locked down AI usage, but it would not surprise him to discover that staff had copied data and run it through a publicly available AI model. A fellow insurance CIO said the Head of Risk at her organisation shared a presentation, itself generated using AI, on the risks AI poses to the business. Let's hope for all concerned that the Head of Risk didn’t use any sensitive data in creating the presentation.
“The challenge we are facing is the same as it was with application security,” Christoph Schuhwerk, CISO in residence at Zscaler, said, adding, “We have a large and growing environment of AI, and it is hard to control.” Attending digital leaders described a range of approaches for securely embracing AI agents.
“We have rolled out a secure version of ChatGPT to all of our global employees that is hosted on our own tenant,” an international insurance CIO said. She has also focused on understanding the needs of different parts of the organisation and ensuring they receive the AI that best suits their needs. “Our policy doesn’t allow the use of external AI, but provides the benefits of GPT and Anthropic, as well as Microsoft Copilot,” she said.
Agentic AI has garnered significant attention throughout 2025 in enterprise technology circles.
Financial services is one sector where this technology shows significant potential. The sector’s high volumes of processes and data, as well as the prevalence of repetitive tasks, play to the strengths of both generative and agentic AI. To ensure the technology is secure, our community of CIOs said they have been taking an architectural view of the organisation, and therefore of the role of AI agents within it.
“Agentic AI is not a machine that is static, or dynamic like a human, but it is similar to a person in that it is working autonomously,” one CIO explained, referencing the need for architecture to evolve to meet modern requirements.
Whether the actor is a person or an AI agent, Zscaler stressed, organisations must always remember that both are accessing organisational data and, through their actions, may be placing that information elsewhere. One CIO added: “The scalability of the damage is different with a machine.” Zscaler agreed, adding that fact-checking the actions of AI agents will need to become part of the technology infrastructure of financial services firms, perhaps using data loss prevention (DLP) technology.
Meanwhile, one major insurance firm is using the European AI platform Mistral, which, it says, has a better understanding of the European context.
Financial services CIOs are assessing the potential of small language models (SLMs) to increase the security of AI agents. One distinction, according to the digital leader of a financial services regulator, is that large language models (LLMs) provide the organisation with foundational knowledge developed by the LLM provider, whereas an SLM is locally developed and attuned to an organisation's specific needs. He said organisations will likely need both LLMs and SLMs, though he wonders whether, as the reasoning abilities of LLMs improve, SLMs will lose their importance.
Alongside security concerns, CIOs are also apprehensive about the costs associated with agentic AI. One CIO compared it to what has happened with virtualisation and the recent acquisition of VMware by Broadcom: “I’m sure we will get to the point where the use of AI agents is so much part of your processes that like Broadcom have done, you’ll get a big bill, but you cannot go backwards.” Another CIO wondered whether FinOps, the financial operations discipline for managing cloud costs, will evolve to help CIOs better understand and control AI-related expenses.
Customers increasingly demand accuracy from financial service providers, making it a key differentiator that digital leaders are keenly focused on, especially as they identify gaps in the market. For example, the financial services regulator has embedded an AI bot in its book of regulations, but has to warn users that the bot may not always be accurate: “We own the bot, and even though it comes from a supplier, we are accountable, so we have to have a caveat,” he said.
Concerns about AI inaccuracy have led CIOs to worry about the misinterpretation of information and bias. As technology buyers, the CIOs are already noticing that suppliers use AI to generate tender responses, which are often so positive that it becomes challenging for them to determine whether a supplier can genuinely meet their needs.
It is clear that agentic AI has real potential for the financial services sector, but its implementation has to be done at scale and with meticulous attention to detail to protect the business and its customers. At present, it remains unclear whether the industry has the patience to prioritise this level of diligence over rapid expansion.