Deepfakes and deception: Is AI breaking digital trust?
AI deepfakes are redefining the cybersecurity battleground, eroding trust and exposing organisations to new levels of risk.
AI is reshaping the cybersecurity landscape at breakneck speed. Among the most concerning developments are AI-generated deepfakes: audio, video, images and other synthetic media so convincing they can be used to bypass traditional controls and exploit human trust.
For technology leaders, the stakes are high. Deepfakes blur the boundary between truth and manipulation, raising urgent questions about accountability, regulation, and defence strategies.
At this recent Food for Thought debate, hosted in partnership with Tata Communications, CIOs, CISOs and cybersecurity leaders came together to discuss the rise of AI-powered attacks. Their reflections highlight not only the scale of the threat, but also the cultural and governance shifts required to build resilience. The session was held under the Chatham House Rule and attended by the following executives:
Meet the panellists
With HotTopics’ Editorial and Strategy Director Doug Drinkwater moderating the discussion, the speakers included:
- Peter Krishnan, Executive Director, JP Morgan
- Amitabh Seli, Director Data UK CDAO, Danone
- German Faraoni Heidenreich, Global IT Director, Reckitt
- Yiting Shen, Global Head of External Network, TTS, Citi
- Richard Corbridge, CIO, Segro
- John Presland, CIO, Horwich Farrelly Ltd
- Mariana Montalvao Reis, Enterprise Head of Data Governance, WPP
- Bob Compton, CIO, North, MFS UK
- Ngozi Nwosu, VP - Portfolio Governance & Reporting, Citigroup
- Raveendra Bharadwaj, Director - Head of Data Transformation, HSBC
- Amit Mehrotra, VP & Head of UK & Ireland, Tata Communications
- Dean Thomson, Cybersecurity Specialist, Tata Communications
AI deepfakes, Zero Trust and corporate risk: Overview
- The rise of AI-powered threats
- Deepfakes and the blurring of trust
- Cybersecurity accountability and risk in the C-suite
- Global regulation and compliance challenges
- From awareness to AI-driven response

The rise of AI-powered threats
For years, phishing and ransomware have dominated the cybercrime playbook. Attackers relied on crude but effective emails, sometimes so obviously fake they became jokes within the industry. As one participant recalled, stories circulated about China’s PLA cyber groups competing to write the worst phishing emails possible, “Nigerian prince” style, just to see who could still trick a victim.
Today, the game has changed. Generative AI has supercharged the average attacker’s ability to mimic language, tone and even entire personas. Ransomware-as-a-service has lowered the barrier to entry, allowing criminals to buy more sophisticated tools off the shelf. Meanwhile, deepfakes have added a new dimension.
Instead of simply sending a suspicious email, attackers can now place a convincing video call or voicemail that looks and sounds like a trusted colleague or executive, such as the CFO or CEO. Last year, an Arup employee was duped into transferring £20 million to a cyber criminal posing as a senior official. That same year, WPP’s then-CEO Mark Read was the target of another AI deepfake scam, with attackers impersonating him to extract money and personal details from another senior official.
The impact is profound. What once required technical expertise now demands only creativity and a credit card. Cybercrime has become faster, cheaper and more scalable, turning every employee, from the boardroom to the front desk, into a potential entry point.
Amit Mehrotra, Tata Communications, stated:
“Cyber incidents today don’t stop at the IT team. Once they happen, the National Crime Agency and external experts are involved, and the fallout ripples across the whole organisation, not just the CIO or CISO.”
Deepfakes and the blurring of trust
The danger of deepfakes lies not in their technical sophistication alone, but in their ability to weaponise trust. An email scam might trigger suspicion, but what if a video call appears to come directly from the CEO, instructing finance to authorise a transfer?
Senior technology leaders at the session noted how deepfakes are breaking down one of the last defences in cybersecurity: human intuition. Employees have been trained for years to “think before you click,” but few are prepared for manipulated media that can mimic colleagues with eerie precision.
One executive suggested that organisations should extend phishing simulations to include “deepfake phishing”, controlled tests in which employees must identify AI-generated video or audio. While controversial, the idea reflects a growing consensus that awareness training needs to evolve alongside the threat.
At the same time, not all deepfake incidents are financially motivated. For some, brand damage and notoriety are the goals. Just as hackers once defaced websites for bragging rights, today’s attackers may target organisations simply to make headlines.
Cybersecurity accountability and risk in the C-suite
Perhaps the most contentious issue is accountability. When a deepfake-led attack succeeds, who takes the fall? Historically, blame has fallen on CIOs and CISOs. But as one participant put it, “It’s too easy to replace the CIO, buy another one, and move on.”
The reality is more complex. Cyber risk is enterprise risk: a breach doesn’t just impact IT systems, it affects reputation, regulation and shareholder value. Yet CISOs still face immense pressure, and their average tenure is around 16 to 18 months, a symptom of a blame culture that equates compromise with failure.
Leaders argued for separating accountability from scapegoating. Yes, someone must be responsible, but firing security leaders after every incident is unsustainable. Instead, accountability should be shared across the executive team and board. Cybersecurity literacy at board level was identified as critical: not just engagement, but a deeper understanding of threats, controls, and the inevitability of cyber attacks.
Global regulation and compliance challenges
As AI accelerates these threats, governments are scrambling to regulate. The challenge for multinational firms is complexity: different jurisdictions are introducing overlapping, sometimes conflicting, rules on AI and cybersecurity.
The EU has taken a leading role here, setting out two landmark regulations: the AI Act and DORA (the Digital Operational Resilience Act). The AI Act introduced a risk-based framework, banning the use of AI in activities like social scoring and manipulative deepfakes, while imposing stricter controls on high-risk systems in areas like education and employment. DORA, by contrast, focuses on the financial sector, mandating ICT risk management and resilience testing.
One CIO described managing operations across 119 countries and four war zones. The solution was to apply UK legislation as a baseline, unless local laws demanded stricter measures. This ensured a consistent compliance footprint, though it sometimes made the company less competitive in markets with weaker standards.
Case studies highlight the stakes. In one incident, a leader recalled, 600,000 customer records appeared for sale on the dark web after a business unit purchased an unsanctioned SaaS app on its corporate credit card. Regulators spared the company a fine, citing its strong security culture and the improvements made in prior years. The lesson: compliance frameworks matter, but so does a demonstrable commitment to security.
Participants agreed that adopting the “strictest regulation first” approach is often safest. It minimises legal exposure while signalling seriousness to regulators. But the speed of AI regulation (particularly around deepfakes) makes staying ahead of requirements a constant challenge.
From awareness to AI-driven response
Traditional “prevent, detect, respond” frameworks remain relevant but must evolve. Attacks are no longer purely human-driven; they unfold at machine speed, and locking down networks for days while investigations run is no longer feasible. As one participant put it: “Some of these threats almost go beyond typical, traditional security mechanisms.”
Tata Communications’ Dean Thomson reflected:
“What used to take a single hacker in a room, maybe a day or a week, to come up with some sort of sophisticated attack, he's now able to create 100 of those in half an hour.”
Organisations are experimenting with AI-driven security operations centres (SOCs) capable of detecting and responding in real time. Cultural defences, meanwhile, remain essential. Phishing simulations, micro-learning modules and mandatory retraining after failures are common tactics. Some firms even remove employees who repeatedly ignore training, a sign of how seriously accountability is taken at every level.
One emerging idea is to simulate deepfake attacks internally, not to scare employees but to normalise suspicion and build resilience. Others highlighted “reverse mentoring” programmes, where younger employees coach senior leaders on emerging digital risks, helping executives stay informed.
Conclusion
AI deepfakes represent more than a new cyber tactic; they symbolise a shift in the trust economy. For organisations, the question is no longer if they will be targeted, but when, and how they will respond.
Mehrotra stated:
“There was a very interesting use case. If there's an oil rig that catches fire in the North Sea, I would want to have information in real time…that was a challenge 10 years back. And today, everything is real-time.”
Blame and fear are poor strategies, yet CISOs are too often the ones who take the hit. Resilience requires collaboration: CIOs, CISOs, boards, regulators, and employees all share this responsibility. Regulations will evolve, but culture and accountability must evolve faster.
As one leader reflected, cyber spending continues to soar, yet incidents keep increasing. The task ahead is not simply to spend more, but to spend smarter: investing in the people, processes and AI-driven defences that can match the pace of cyber threats.
Thomson argued:
“We must be able to prevent and detect in real-time. And there's an evolution that needs to happen in the business and in the SOC to help do that, and it's going to lean on using AI tools and technology for good, for bad.”
The message is clear. Deepfakes may blur reality, but accountability cannot. Building resilience in the age of AI deepfakes and corporate cybersecurity risk demands vigilance, education and, above all, shared responsibility.
The Hyperconnected Business:
HotTopics and Tata Communications are proud to partner on the Hyperconnected Business initiative, a tailored community for CIOs, CTOs, and senior executives to explore modern digital infrastructure. Discover more about how we are redefining business with modern digital solutions.