AI in CX: Tracking the next wave of hyper-personalisation

Personalisation once evoked the warm familiarity of a shopkeeper knowing your name. Contemporary digital personalisation can delight, too: intuitive customer journeys, online interactions that remember you, even well-targeted advertisements all speak of a more convenient age. But then algorithms became remarkably sharp, reading your shopping purchases, your sleep cycle and your online micro-habits all at once. This raft of quantitative, contextual and signal data catalysed so-called hyper-personalisation.

 

Today, generative AI in CX is set to evolve once again.

 

From retail to finance, telecoms to pharmaceuticals, executives are racing to embed generative AI in CX—across many touchpoints of the customer journey. The goal: hyper-personalisation at scale. The journey, however, is far from simple.

 

Hosted under the Chatham House Rule, moderated by me and HotTopics’ Editorial and Strategy Director, Doug Drinkwater, and held in partnership with Tata Communications, this conversation brought together technology and data executives from global banks, advertising conglomerates, pharmaceutical giants, digital retailers and telecoms firms. Their task: to take stock of where AI and personalisation intersect today, and what the path forward might require of organisations and their C-suite.

 

“We used to run £1 million ad campaigns targeting a million people,” one advertising executive said. “Now we build a bespoke advert for one person, show it once, and it vanishes. But when personalisation is invisible, it’s also unaccountable.”

 

The new, hyper shape of personalisation

If one theme dominated the discussion, it was the sheer pace and direction of change. Several executives noted that the traditional model of customer segmentation—by age, income, or even persona—is rapidly collapsing in favour of more dynamic, context-aware models that blend behavioural signals with real-time data.

 

“You’re not just targeting a customer anymore,” one attendee explained. “You’re targeting a moment. Where are they? What’s the device? What are they likely to respond to emotionally, cognitively, contextually, right now?”

 

WPP’s strategy, according to one executive, increasingly hinges on fast-turnaround, hyper-local advertising: automated content tailored not just to the user profile but also to the user’s location, device, recent activity and time of day. “It’s not about massive campaigns anymore,” they said. “It’s about not missing that one moment that leads to a sale.”
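To make that moment-level selection concrete, here is a minimal, purely illustrative sketch: a handful of real-time signals (device, hour of day, recent activity) are scored against hypothetical creative variants, and the best match for this one impression wins. All names, weights and variants below are invented for illustration and are not drawn from any participant’s system.

```python
from dataclasses import dataclass

# Hypothetical real-time signals for a single impression (all names invented).
@dataclass
class Moment:
    device: str           # e.g. "mobile" or "desktop"
    hour: int             # local hour of day, 0-23
    recent_activity: str  # e.g. "viewed running shoes"

# Hypothetical creative variants, each tagged with the moments it suits.
CREATIVES = [
    {"id": "commute_teaser", "device": "mobile", "hours": range(6, 10), "topic": "running"},
    {"id": "evening_offer", "device": "desktop", "hours": range(18, 23), "topic": "sale"},
]

def pick_creative(moment: Moment) -> str:
    """Score each variant against the current moment and return the best match."""
    def score(creative: dict) -> int:
        s = 0
        if creative["device"] == moment.device:
            s += 2                                  # device match weighted highest
        if moment.hour in creative["hours"]:
            s += 1
        if creative["topic"] in moment.recent_activity:
            s += 1
        return s
    return max(CREATIVES, key=score)["id"]

print(pick_creative(Moment(device="mobile", hour=8, recent_activity="viewed running shoes")))
```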

 

Yet as another panellist put it: “If the customer realises how much you know about them, they’ll feel trapped. So it has to be disguised. If it feels too accurate, it feels manipulative.”

 

Spotify’s now-infamous billboard campaign was cited more than once, notably the billboard that asked a user, by name, whether they were “okay” after listening to a heartbreak anthem 50 times on Valentine’s Day. The line between humour and horror, it seems, is context.

 

Netflix, too, drew attention for its AI in CX positioning. It has been experimenting with dynamic thumbnails based on user preferences, which executives saw as a harbinger of “adaptive media”. One joked, “Romeo and Juliet for me might feature sword fighting. For you? Two lovers kissing. How do we even talk about the same story anymore?”

 

Hyper-personalisation or hyper-fragmentation?

This existential tension—between tailoring and fragmentation—surfaced repeatedly. “If every brand experience is uniquely tailored,” one panellist asked, “what happens to the common culture? What happens to the shared customer journey?”

 

A telecoms executive echoed this concern, warning that hyper-personalisation risks eroding brand consistency. “If AI is dynamically generating every interaction, can we even claim a brand voice anymore? Or is the voice just the voice of the algorithm?”

 

That concern was not limited to customer experience. Some leaders feared an erosion of shared institutional knowledge and governance. “We now have three AIs giving three different answers to the same question,” said one banking CTO. “They’re all pulling from the same data lake. And yet, we have no idea which is correct.”

 

Many agreed that “data orchestration”—the challenge of harmonising disparate sources, contexts, and data rights—remains the Achilles' heel of every AI in CX strategy. “Legacy tech[nology] can’t keep up,” said another. “We've got mortgage data in one system, card data in another. The customer thinks we’re one brand, but behind the scenes we’re a Frankenstein.”
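As a sketch of why that orchestration matters, consider two product systems that each hold a fragment of the same customer. Before any AI layer can personalise responsibly, those fragments, along with the consent flags attached to them, have to be stitched into one view. The field names and records below are hypothetical.

```python
# A minimal sketch of the "data orchestration" problem described above: two
# product systems hold fragments of the same customer, and a unified view has
# to be assembled before any AI layer can reason over it. All identifiers,
# fields and figures here are invented for illustration.
mortgage_system = {
    "cust-123": {"name": "A. Customer", "mortgage_balance": 180_000},
}
card_system = {
    "cust-123": {"card_spend_30d": 2_450, "marketing_consent": False},
}

def unified_view(customer_id: str) -> dict:
    """Merge per-system records into one profile, preserving consent flags."""
    profile: dict = {"customer_id": customer_id}
    for source in (mortgage_system, card_system):
        profile.update(source.get(customer_id, {}))
    return profile

print(unified_view("cust-123"))
```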

 

Consent, compliance and cultural fault lines

Trust also featured heavily in the debate. Not technical trust; human trust. At the core of that trust lies consent.

 

“The line between insight and intrusion is thin. Customers might not know how they're being targeted—but the moment they find out, it breaks the trust,” advised Tata Communications’ Amit Mehrotra.

 

The roundtable agreed. “Consent is not universal,” another panellist added. “In the US, you can practically do anything unless someone says no. In Germany, you can do nothing unless someone says yes. So the way you build AI systems has to reflect culture, not just regulation.”
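In practice, reflecting culture as well as regulation often means encoding jurisdiction-specific consent defaults rather than one global rule. The sketch below, with invented jurisdiction codes and flags, shows the shape of such a check: opt-out by default in one market, opt-in by default in another. It is illustrative only, not legal guidance.

```python
# Hypothetical consent defaults per jurisdiction: "opt_out" means processing is
# allowed unless the customer objects; "opt_in" means it is blocked until the
# customer explicitly agrees. Codes and rules are illustrative only.
CONSENT_DEFAULTS = {"US": "opt_out", "DE": "opt_in"}

def may_personalise(jurisdiction: str, explicit_choice: bool | None) -> bool:
    """Return True if personalisation is permitted for this customer."""
    if explicit_choice is not None:        # an explicit yes/no always wins
        return explicit_choice
    return CONSENT_DEFAULTS.get(jurisdiction, "opt_in") == "opt_out"

print(may_personalise("US", None))   # True: no objection recorded
print(may_personalise("DE", None))   # False: no explicit consent recorded
print(may_personalise("DE", True))   # True: customer has opted in
```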

 

Several firms reported the need to rewrite their terms and conditions—and in some cases, reacquire explicit consent from customers—before deploying generative AI on historical data. One executive said their AI implementation took three months. The legal approvals? Nine.

 

Others flagged the complications around delegated authority and proxy users—particularly in private banking, where assistants and wealth managers often act on behalf of ultra-high-net-worth individuals. “Who’s the customer?” one participant asked. “The person whose name is on the account? Or the person actually interacting with your systems?”

 

The conversation also delved into the murky world of third-party data. One banking executive described how anonymised web behaviour, sold through data brokers, is used to flag risky behaviour (such as gambling or binge spending) before approving credit on Friday nights. “It’s legal,” they said. “But is it ethical?”

 

The consensus was that ethics cannot and should not be outsourced to compliance; transparency does not look like a clause buried on page 77 of a privacy policy.

 

AI in CX: “Don’t make it creepy”

Several participants stressed that customer tolerance for personalisation depends heavily on tone and context. “I want Tesco to know which side of the bed I sleep on—if I’ve told them,” said one panellist. “But I don’t want them to guess it based on biometric tracking.”

 

Creepy marketing was a frequent target of frustration. One executive quipped that certain platforms “deliberately show you bad ads, so the good ones feel better”. Another described a surreal experience of receiving targeted baby adverts—not because they were expecting, but because their neighbour downstairs was.

 

The conversation eventually turned to AI governance. Specifically, how internal systems can provide trusted insights to business leaders. One use case stood out: the “virtual CFO”. At a large European bank, executives developed an internal AI that allows senior leaders to query key financial metrics—net new money, regional performance, staff allocations—in real-time, without waiting for a BI team to generate a report.

 

“What used to take three weeks now takes three seconds,” said the CTO. “And it’s auditable. It’s not hallucinating.”
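A rough sketch of the pattern the CTO describes, auditable answers rather than free-form generation, is to let the model choose only from a whitelist of governed metric queries and to log every resolution. The metric names and figures below are hypothetical, not the bank’s actual system.

```python
from datetime import datetime, timezone

# Whitelisted, governed metric queries: every answer is traceable to a known
# calculation rather than free-form generation. Names and figures are invented.
GOVERNED_METRICS = {
    "net_new_money": lambda region: {"EMEA": 1.2e9, "APAC": 0.8e9}.get(region, 0.0),
    "headcount": lambda region: {"EMEA": 5400, "APAC": 3100}.get(region, 0),
}

audit_log: list[dict] = []

def answer(metric: str, region: str) -> float:
    """Resolve a whitelisted metric and record what was asked, and when."""
    if metric not in GOVERNED_METRICS:
        raise ValueError(f"Metric '{metric}' is not governed; refusing to answer.")
    value = GOVERNED_METRICS[metric](region)
    audit_log.append({
        "metric": metric,
        "region": region,
        "value": value,
        "asked_at": datetime.now(timezone.utc).isoformat(),
    })
    return value

print(answer("net_new_money", "EMEA"))
```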

 

Several others noted the value of small, domain-specific language models (SLMs) over general-purpose AI. A pharmaceutical leader shared how they are training vertical LLMs on clinical trial data to accelerate site selection and protocol design. “The patient consent model already exists in our world,” they said. “We just need to extend it for AI.”

 

Still, challenges remain. “The minute you scale these systems, you run into identity and security issues,” one cyber executive said. “How do you authenticate an AI agent acting on your behalf? Do they get full access? Limited privileges? For how long?”
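Those questions are essentially about scoped, expiring credentials. The sketch below, a toy signed token rather than any production authentication scheme, shows the shape of it: an agent is issued named privileges and a time limit, and every action is checked against both.

```python
import base64
import hashlib
import hmac
import json
import secrets
import time

# Toy illustration of a scoped, time-limited agent credential. A shared secret
# signs the claims; real deployments would use an established identity system.
SECRET = secrets.token_bytes(32)

def issue_agent_token(agent_id: str, scopes: list[str], ttl_seconds: int) -> str:
    claims = {"agent": agent_id, "scopes": scopes, "exp": time.time() + ttl_seconds}
    body = base64.urlsafe_b64encode(json.dumps(claims).encode())
    sig = hmac.new(SECRET, body, hashlib.sha256).hexdigest()
    return body.decode() + "." + sig

def check(token: str, required_scope: str) -> bool:
    body, sig = token.rsplit(".", 1)
    expected = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        return False                      # tampered with, or signed by someone else
    claims = json.loads(base64.urlsafe_b64decode(body))
    return time.time() < claims["exp"] and required_scope in claims["scopes"]

token = issue_agent_token("support-agent-7", scopes=["read:orders"], ttl_seconds=900)
print(check(token, "read:orders"))    # True: scoped and unexpired
print(check(token, "issue:refunds"))  # False: outside delegated privileges
```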

 

The idea of agentic AI—semi-autonomous agents that make decisions—prompted vigorous debate. Some viewed it as a breakthrough. Others saw it as a regulatory minefield.

 

Closing thoughts

As the conversation on AI in CX drew to a close, one participant offered a challenge. “Everyone’s trying to guess what the customer wants. Why don’t we ask them?” The room nodded. In the end, the paradox of personalisation is not about technology: it is about choice. AI can help organisations serve customers more intelligently, empathetically, and efficiently. But it can just as easily alienate, confuse, or offend.

 

Whether in B2C or B2B contexts, hyper-personalisation will indeed be accelerated by generative AI. Whether that delights warmly, or feels like a splash of cold water, depends on the agency that the customer or client feels they retain. Assumptive hyper-personalisation will likely drive away the community your brand has spent millions cultivating; pragmatic, iterative hyper-personalisation may be the sweet spot that the C-suite are looking for in this busy, fragmented market.

 

Discover more insights from HotTopics' C-suite communities.
