As organisations race to embed AI across their operations, one question looms large: is our data strategy strong enough to sustain AI at scale?
At a recent Food for Thought breakfast hosted by Editor Peter Stojanovic, this question sparked frank discussion and a clear consensus: the enterprise world is still in its "walk before we crawl" stage. The debate, held under the Chatham House Rule and in partnership with Hitachi Vantara, revealed the profound complexity of aligning AI, data, and business strategy, and the opportunity for those who get it right.
Data strategy and AI: Overview
- Accountability and the “black box” problem
- Guardrails
- The age of agentic
- Vertical versus horizontal
- Is data really the new oil?
- 2026 priorities
- The AI ski lift
Accountability and the “black box” problem
“We started working with a startup to do 24/7 monitoring of data in our sites. What’s coming in, what’s going out, across our recovery facility,” one participant explained. “The first use case was waste sampling, and it saved us loads of time. But when we went to the Environment Agency, they said, ‘We haven’t thought about this yet, this is a bit of a black box.’”
That black box—the question of who takes responsibility for AI-generated data—has become central to the modern enterprise data conversation. As another executive put it, “We got caught in this web of who takes responsibility for the data that’s being generated. Is it us, because we’re automating? Is it the supplier, because they’re providing the algorithm? Or is it the regulator, because they’re supposed to audit us?”
For many organisations, that tension lies between innovation and accountability. The speed at which AI tools can now ingest, analyse, and act upon data far outpaces most governance frameworks. “Who’s taking responsibility for the auditing process?” the same leader asked. “It feels like a bit of a black box for a lot of companies.”
Guardrails
From Hitachi’s side, Veebs Devendran, AI and High Performance Data Platforms Specialist, urged the group to look beyond the models themselves to the data that trains them. “See, if AI has to understand English, all the vocabulary and wordings are data as well, but it’s learned from the common, outside world. When AI tries to understand your customer data and then generates something from that, that’s the data you need to be worried about.”
He went on to explain that in the financial services sector, this distinction between public and private data is critical. “When the wealth manager uses AI to assist with client forms, if they find a problem in the way AI completed the form, it won’t automatically send that back to training. The engineer has to manually prepare and send it externally, and once the conversation is over, the data is wiped out.”
That deliberate separation, Devendran argued, is a safeguard against uncontrolled learning loops that could expose potentially sensitive information. “At least for the next two years, we are not going to allow AI to automatically make decisions. The relationship manager has to sign off, they are the responsible person for the ultimate decision.”
The age of agentic
This principle (assistive AI, not autonomous AI) emerged as a recurring theme.
The group reflected on the shift towards “agentic” technologies: AI systems capable of making independent decisions. While promising, the consensus around the table was clear that most enterprises are not yet ready to give up control.
“We’re focusing on assistance, making life easier,” said one executive. “But we are rapidly moving into that agentic space. There’s risk, and there’s cost. Maybe we let an agent approve expenses under £500, but not decide whether to invest several hundred million in a new site.”
As Iain Winfield, Solutions Consultant and Technical Expert at Hitachi, noted, restraint often defines resilience. “In financial services, you’re talking millions in fines every year, sometimes billions. So we wanted to be slow and careful in terms of using AI. But there’s a balance. Other companies can be faster and cheaper because they’ve embraced generative AI without safeguards. It’s that tension between compliance and competitiveness.”
Vertical versus horizontal
If compliance is the organisation’s brake, then data alignment is the steering. “You need to start both sides,” said Devendran. “Understand what the business wants to do, then create a data strategy around that. On the other side, you need the foundational data components, quality tools, reference data, all that. Ours is structured like this: horizontal for the foundation, vertical for the business. If you don’t align them, you create a master data set that doesn’t generate value.”
That vertical-horizontal integration (data as both infrastructure and enabler) reflects a growing maturity in how enterprises view data strategy. Yet many admitted their organisations still treat AI and data separately. “We’re now trying to merge the business strategy with the product strategy, with the data and AI strategies, so they’re all complementary,” one executive said. “In the past they were separate, and that’s where we got duplication and misalignment.”
Another leader described the structural divide: “The CDO sat inside IT, the CTO set up an AI office, and then they started doing their own thing. Each side was pulling for staff and budget. It became a competition within the same organisation.”
The irony here is that AI and data are interdependent. “When we engage on projects, AI teams often take data for granted, they think it’s all there, all clean, all fine. Then they hit problems,” one speaker admitted. “The best results come when data stewards and AI developers actually talk to each other.”
Is data really the new oil?
For global enterprises, data scale itself has become both an opportunity and a burden. “We have zettabytes of data from all our centres around the world: nuclear plants, train systems, everything,” said a technology leader. “How do we collect all this and make use of it? We need a data platform. It’s not here yet, but it’s coming, because data is the new oil.”
That metaphor of data as oil, while not universally liked, has evolved. Where oil was extracted and consumed, data must now be refined and reused responsibly. Some companies are even treating it as a new commodity. "There are companies creating data banks where you can store your company data and charge people to access it," one executive shared. "It's already happening—the North Sea example, where seismic mapping data from supermajors is being sold to new companies to help them find more efficient ways to extract oil and gas."
On the other hand, this data-as-a-service economy brings its own ethical and environmental questions. “Data centres aren’t being priced per square foot anymore, they’re being priced on the power supply going into that area,” a participant observed. “We’re touching 120 kilowatts per rack now, liquid-cooled. It’s another huge concern, because data centres were once ‘build and forget,’ and now they need to evolve.”
Winfield added that the shift extends far beyond IT operations: “It’s not just your data strategy anymore, it’s your data centre strategy as well.” Another executive agreed: “Those empty shopping centres around the world; they’ve got the power, the space. In Edinburgh, they’re already planning to refit one into a data centre.”
The environmental dimension is tightening fast. As one participant put it, “Companies now have to report on ESG, and by 2027 they’ll need to prove how they’re mitigating energy consumption. That’s not just a CFO problem, it’s a technology one.”
2026 priorities
When asked where their next priorities lay, leaders cited everything from data governance and literacy to project selection. “We’ve given people tools, and there’s a huge wave of projects,” one said. “Now we’re trying to quantify value, to decide between projects using a single framework. We can’t do everything.”
Others focused on visibility: “We’re starting with transparency: what is everybody up to? That tells us what guardrails we need and where cross-pollination can happen.”
Data literacy emerged as a recurring imperative. “A lot of this comes down to people’s understanding and awareness,” one executive said. “Data literacy is always the first thing to drop off the priority list, but it’s probably the most important thing.”
That sentiment resonated across the room. “People don’t see the connection between AI and data because they’ve got such a poor understanding of data,” another added. “So yes, we’re focusing on that too.”
Still, the hardest challenge may be balancing business appetite with operational discipline. “Businesses want it now, they want to do it operationally,” said one executive. “But how do you mix that with putting something robust in place that doesn’t fall apart?”
Budget and data quality were top of mind. "You can't just assume you've got the data or that it's clean enough to use," one leader warned. Another chimed in: "You've got to ask: what data sets are we working with? Do we have data stewards who actually understand AI?"
The AI ski lift
The analogy that stayed with everyone came from a doctor Winfield had met: “She said AI right now is like skiing before ski lifts had safety rails, you could go up the mountain, but you might fall off on the way. Eventually, we built guardrails. That’s what’s happening with AI, we’re building the safety mechanisms while already using the system.”
It is a fitting image for this era of acceleration and caution around AI. The companies that will thrive are not those that move fastest, but those that learn to balance experimentation with structure, aligning their AI and data strategies with both business ambition and societal responsibility.
This Food for Thought was created in partnership with Hitachi Vantara.