CIO, CTO or CAIO: Who is Responsible for AI?

Who is responsible for AI in today's rapidly evolving landscape? Expert panellists discussed accountability and responsibility for scaling AI initiatives in the C-suite.

 

The integration of AI into business operations is revolutionising industries globally. However, this rapid advancement has brought about significant ethical, safety and accountability challenges. In this roundtable, a panel of experts dissected the roles of CIOs, CTOs, CDOs and CAIOs in guiding ethical AI deployment.

 

With Brigid Nzekwu moderating, the panellists included:

 

  • Rachel Laycock, CTO, Thoughtworks
  • Amitabh Apte, Chief Digital and Business Technology Officer, Barilla
  • Allan Cockriel, CIO / Group CISO, Shell
  • Sara Mubasshir, Head of Change (Digital, Experience, Business Analysis), London Business School

 

Who is responsible for AI? Key takeaways from the panel

 

  • “It’s a very exciting time… I think it's about getting comfortable and sharing vulnerabilities to say, yes, I understand this part, [I] understand that part, [and] we are all [in] this together.” — Sara Mubasshir, Head of Change (Digital, Experience, Business Analysis), London Business School

 

  • “We need to have a conversation more about if you take a technology that [could] potentially be fundamentally disruptive to a lot of people's jobs… how do we help them bridge from what they're doing now to embrace the tools to live in a world that's AI co-piloted?” — Allan Cockriel, CIO / Group CISO, Shell

 

  • “I think it's the position of humility. We have been trusted very often to guard these kind[s of] conversations. So how do we take our people along the journey? How do we accept that we don't know anything at the moment?” — Amitabh Apte, Chief Digital and Business Technology Officer, Barilla

 

  • “Regardless of your role in this business, this should be impacting you; you should be using tools, and we've come out with a whole AI-first kind of cultural shift… It is a shared accountability across the executive team, [which] does impact everyone.” — Rachel Laycock, CTO, Thoughtworks

 

Bridging the AI gap: defining responsibilities

 

“When [generative AI] came along… we were wondering, ‘where does that fit in our structures?’”

 

Thoughtworks CTO Rachel Laycock highlighted that AI’s evolution necessitates a new leadership role: the Chief AI Officer. Traditionally, roles like the CIO, CDO and CTO handled distinct areas: CIOs managed legacy systems, CDOs oversaw data governance, and CTOs focused on new and emerging technologies.

 

AI has blurred the lines between their responsibilities, requiring a cohesive strategy across the organisation. The CAIO role was designed to bridge these functions, ensuring that AI is effectively integrated into services, systems and software development.

 

At the London Business School, Head of Change Sara Mubasshir emphasised a distributed model of AI responsibility, focusing on collective accountability. “There is not a single person, but everybody needs to play their part.”

 

Recalling the early days of AI, she noted the rapid advancements and their implications today. According to Sara, LBS aims to ensure ethical AI use by emphasising data integrity and bias prevention. 

 

Who is responsible for AI integration across the organisation?

 

“For us it's actually been elevated to an executive committee conversation.” 

 

CIO and Group CISO Allan Cockriel explained that Shell’s AI oversight is embedded at the executive level, with the CEO driving the AI agenda.

 

“From a technology perspective, we want to make sure we have the right estate set up so we're able to set up LLMs at scale to allow us to have our capabilities to be able to innovate.”

 

This integration ensures that AI considerations influence high-level decision-making, reflecting its potential organisational impact. While the IT function handles technical aspects like lifecycle controls and compliance, the executive team governs strategy and investments to maintain “discipline” and “momentum”.

Amitabh Apte from Barilla discussed the company's extensive history with AI in manufacturing, logistics, and other areas. Despite being a 147-year-old family business, he noted that Barilla has been utilising AI technologies for decades.

“There’s no one group which does AI,” he noted. On the other hand, “you need to have some sort of accountability, some sort of guardrail, some sort of management, some framework.”

On that note, the company employs a cross-functional AI board to oversee initiatives, ensuring a balanced approach to AI deployment. Amitabh emphasised that AI is for everyone in the organisation, mirroring a broad integration strategy. “I think the more and more I think about it, this truly has been democratised… that's the philosophy.”

 

Challenges and opportunities in AI

 

“There's a laundry list of challenges and especially for a large enterprise that has lots of customers, those things have to be addressed.” Rachel pointed out issues like data bias, privacy concerns and the difficulty of evaluating AI outputs. 

 

Many AI use cases are still exploratory, with few reaching production due to these obstacles. “Although it's democratised in a way that everyone can access ChatGPT, I think we're a little further away from an organisation-specific large language model.”

 

Over at Shell, the challenge for Allan is “managing the balance between discipline and speed”, highlighting the need to balance innovation with governance and compliance, especially under regulatory scrutiny.

 

Shell's focus on strategic alignment ensures AI investments are “disciplined and purposeful”, avoiding hype-driven pitfalls while maintaining that momentum and bringing these concepts to scale.

 

“As much as we are talking about data apprenticeship, data literacy, we need to start talking about AI literacy and we need to bring people with us.” Sara stressed the importance of balancing innovation with ethical considerations—particularly when it comes to data bias. 

 

Drawing parallels to biased oximeter readings during the COVID-19 pandemic, she underscored the necessity for rigorous checks on AI systems. “It was realised later on that they couldn't even read on people of colour. They weren't given the right data; it was a brilliant machine, but [it] wasn't working and [was] letting down a big portion of the population.”

 

This roundtable was made in partnership with Thoughtworks.

 
