
Beware AI’s hidden costs in the search for scalability and ROI
As organisations rush to deploy generative AI, hidden costs—financial, environmental and operational—are exposing barriers to scale.
The rise of artificial intelligence has captivated everyone from Big Tech and policymakers to boardrooms and everyday consumers.
And in enterprise, AI’s impact is already being felt. Approximately eight in 10 organisations are using AI in at least one business function, according to McKinsey, with the firm also estimating that organisations that integrate AI across their operations are 1.5 times more likely to see double-digit revenue growth.
Meanwhile, Box’s State of AI 2025 report found that while most tech leaders are spending 10% of their IT budget on AI, 90% expect this figure to increase in the coming year.
However, as AI moves beyond early experimentation and into the enterprise mainstream, CIOs are confronting a harder reality: hidden costs, questionable ROI and the challenge of scaling beyond individual productivity gains.
That was the recurring theme at a recent HotTopics Food for Thought event at Kensington Palace in partnership with Box, where technology leaders convened under the Chatham House Rule as part of the Infinite Intelligence community.
Rising AI costs
Since the introduction of OpenAI’s ChatGPT, organisations have spun up pilot projects in the hope of taking AI workloads to production.
Appetite for these emerging technologies has been significant, led not only by the C-Suite but by functions such as information technology, information security, operations, marketing and customer service. The advantages have been relatively clear: GenAI and agentic systems offer the opportunity to improve productivity, save time, reduce costs and enhance employee and customer experience.
Infinite Intelligence community members have previously shared how their organisations are leveraging these technologies for a number of use cases, such as:
- A media and entertainment organisation leveraged GenAI to improve video production workflows, from scriptwriting through production to commissioning new programmes
- At higher education and public sector organisations, AI is automating the service desk, reducing task completion times from hours to minutes
- In insurance, underwriters are exploring AI applications to expedite claims
- AI is helping to extract metadata from unstructured data to expedite loan applications and contract analysis, as well as assisting research in life sciences
- In financial services, organisations are using these technologies to improve fraud detection and risk compliance.
Despite this, as noted during the Food for Thought conversation, GenAI and agentic technologies come at significant cost, both economic and environmental.
Software licence costs and data storage fees are rising to the point that Gartner research finds 90% of CIOs cite out-of-control costs as a major barrier to achieving AI success. Separately, nearly three-quarters of technology leaders said the AI boom has made their cloud bills “unmanageable,” according to a Tangoe report.
“AI is an amazing thing, but it brings costs, which people don’t think about,” said one executive in attendance.
The cost issue is exacerbated by the re-emergence of the ‘build vs buy’ debate. Tech executives recognise the benefits of working with pre-built solutions from the major LLM providers, yet are equally wary of being ‘locked in’ to one vendor, not least given an ongoing trade war and supply chain crisis that have increased manufacturing costs for semiconductors (and, subsequently, for their customers).
The alternative is for organisations to build and maintain their own models, shaped around their unique processes and data, but with that come high costs, lengthy development times, complex maintenance and skills demands for already-stretched technology teams.
The consensus here was that partnering within the AI ecosystem enables organisations to ride on the coattails of innovation, particularly if they can learn from cloud computing’s similar struggle with spiralling costs.
As one example, an executive shared how FinOps, the practice of sharing responsibility for cloud spend across teams to optimise costs and drive business value, has the potential to save money by optimising cloud resources.
Centralised or decentralised AI: IT’s fragile positioning
During this Food for Thought, executives discussed the continuing shift of IT from a back-office function to a business partner, emphasising the need for decentralised delivery within business units to improve engagement and accountability.
There was a recognition that information technology departments need to stay close to the wider business goals and objectives, and to change how they’re internally perceived if they are to empower and govern AI adoption in equal measure.
“We have a stigma in the IT organisation. We are the guys of helpdesk, we help fix your laptop,” said an executive from media and entertainment, with another referencing the need for tech functions to set the necessary governance – frameworks and policies – to enable safe and responsible usage.
“As a globally positioned IT department, we can scale quickly and make exciting use cases available to everyone in a maintainable and affordable way,” said the speaker.
This raises the question of strategy versus delivery: does IT delivery need to sit in the C-Suite, or can it simply be sponsored at the top table? One leader described using his legal team to speak to the business and act as product owner before interfacing back into IT.
AI’s sustainability impact
On the flip side of AI’s rise is the amount of compute power and data storage required to power LLMs.
For example, a single Google search consumes 0.3 watt-hours of electricity, while a ChatGPT query consumes 2.9 watt-hours, nearly ten times as much, according to the IEA.
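The “nearly ten times” figure follows directly from the two IEA numbers; as a quick back-of-the-envelope sketch (the 1,000-queries-per-day scenario is an illustrative assumption, not from the source):

```python
# Back-of-the-envelope check of the per-query energy figures cited by the IEA.
google_search_wh = 0.3   # watt-hours per Google search
chatgpt_query_wh = 2.9   # watt-hours per ChatGPT query

ratio = chatgpt_query_wh / google_search_wh
print(f"A ChatGPT query uses roughly {ratio:.1f}x the electricity of a search")

# Hypothetical scenario: an organisation making 1,000 queries a day.
queries_per_day = 1_000
extra_kwh_per_day = (chatgpt_query_wh - google_search_wh) * queries_per_day / 1_000
print(f"Extra energy at {queries_per_day} queries/day: {extra_kwh_per_day:.1f} kWh")
```

Small per-query differences compound quickly at enterprise scale, which is why these figures matter to CIOs weighing AI workloads against sustainability targets.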
This places organisations at an interesting juncture, caught between present-day obligations to shareholders and customers and the needs of future generations. It is for this reason that many organisations have postponed their Net Zero targets.
During the conversation here, executives raised several environmental concerns, not least that AI requires significant energy for cooling data centres. This is backed up by Goldman Sachs research, which has found that data centre power demand will grow by 160% by 2030.
Given the energy intensity of these large-scale AI implementations, technology executives are responding in turn: increasingly factoring sustainability into technology investments and supplier choices, while keeping an eye on potential future regulation of AI’s environmental footprint.
There is hope, however: hyperscalers are moving to renewable energy sources, and AI is not only becoming more energy efficient but also helping organisations become greener.
One example shared here was an organisation leveraging artificial intelligence to track automotive emissions, a more efficient and streamlined way of collecting and analysing data than manual, paper-based surveys.
Data quality and readiness: ‘garbage in, garbage out’
As the old computer science adage goes, ‘garbage in, garbage out’. In other words, the output of AI models is only as good as the data they have been trained on.
During the conversation, technology executives raised a number of challenges with the underlying data: 80% of all enterprise data is unstructured; data is often siloed and hidden across departments; and there are concerns over data quality, reliability and the lack of clear standards for data management. Regulatory complexities and the difficulties of data sharing were also cited as barriers to deriving true value from AI.
"AI is different from any other technology we've seen before - it's so dependent on production-level data,” said one executive in attendance.
There were high-profile examples of this shared during the Food for Thought, from Microsoft’s Tay chatbot, which learned offensive language from user interactions, to DPD’s customer service chatbot, which was prompted into swearing and criticising its own company. It is for this reason that organisations are struggling to scale AI into production, according to analysts at IDC.
“Half of the organisations have adopted AI, but most are still in the early stages of implementation or experimentation, testing the technologies on a small scale or in specific use cases, as they work to overcome challenges of unclear ROI, insufficient AI-ready data and a lack of in-house AI expertise,” IDC’s authors said.
Elusive AI ROI and the search for scalability
The industry clamour for GenAI and agentic systems must meet a degree of reality. Despite the early promise, research shows that organisations are still in the earliest throes of AI adoption, and some distance from establishing a clear return on investment (ROI).
Approximately 80% of AI projects in organisations fail, according to RAND. Gartner has found that only 30% of AI projects move past the pilot stage. Meanwhile, according to an Everest Group survey, even when pilots do go well, CIOs find solutions hard to scale, citing a lack of clarity on success metrics (73%), cost concerns (68%) and the fast-evolving technology landscape (64%).
The solution is ultimately to start small with iterative use cases, building internal champions, working councils and design authorities that can drive change, alongside strong governance frameworks.
To assess and scale use cases, speakers here described how they were using internal ‘radar’ systems to plot project value, prioritising internal productivity over customer-facing projects which are perceived as higher risk.
As the AI hype cycle matures, scalability is more than a technical challenge—it’s an organisational one. Leaders must weigh cost, clarity and control in equal measure if they are to turn early promise into enterprise value.
“Scale is dependent on a number of different things; it’s how you articulate the value to the enterprise, which is always important to get people to buy-in. And then there's the technology aspect; understanding what platform you're using, can it scale? How does it scale? How is it architected? What's the security associated with it?” previously remarked Ravi Malick, CIO at Box.
To stay up to date with how tech leaders are navigating the opportunities and challenges of AI adoption, join the Infinite Intelligence community, a peer-to-peer community championing best practices and ethical usage.