
AI in the boardroom: Questions technology leaders should anticipate

Kani Talabani
AI is no longer just a technical discussion—it is a boardroom priority. Technology leaders must be prepared to articulate how AI drives value, mitigates risk, and aligns with business strategy.
Whether it is generative AI, predictive analytics, or machine learning, CIOs, CTOs, and CDOs are now under pressure to explain AI initiatives in terms that resonate with commercial and governance-minded board members. Boards want answers that go beyond the hype: they want to understand the strategy, risk posture, and return on investment.
As the technology matures and business use cases proliferate, the following 10 questions will become standard in the boardroom conversation. Technology leaders must be prepared to respond with clarity, evidence, and vision.
1. What is our AI strategy, and how does it tie to business outcomes?
Board members often prefer to avoid deep dives into models or toolkits, focusing instead on how AI initiatives can drive tangible business outcomes. How does AI increase revenue, reduce operational cost, or manage exposure to risk? The AI strategy must be woven into the organisation’s overall strategy. As Dianna Kennedy, CTO at Bupa, noted in a HotTopics Infinite Intelligence panel discussion on mastering AI adoption, responsibility for AI governance is increasingly shared across technology, risk, and business-enabling functions, indicating a shift toward a more federated and distributed model of AI leadership.
Read our article on strategic insights for CIOs and tech leaders.
2. Are we ahead or behind our competitors in AI adoption?
Benchmarking against peers is becoming essential. Boards want to know not only how much you have invested in AI but also how you are using it to differentiate. Are your applications innovative or derivative? According to technology executives at the Infinite Intelligence launch event, competitive positioning now hinges on how AI is integrated into the operating model, not just whether it is being used.
Check out our article on the true cost of AI adoption.
3. How are we managing AI risk and governance?
Expect scrutiny around bias, transparency, explainability, and legal exposure. Boards will expect a robust, documented framework to manage AI risk. C-suite executives at a HotTopics event emphasised that AI systems should never be the final decision-maker. Human-in-the-loop processes are critical for maintaining accountability and reducing bias. Similarly, Valtech’s VP of Global Technology, Una Verhoeven, urged companies to be selective and rigorous in evaluating AI tools, especially those that touch personal or regulated data.
Read our article on the balance between innovation and risk in AI adoption.
4. What data are we using to train AI, and do we have the rights to use it?
Data provenance is critical. Boards will ask if your organisation has the rights to the training data used and whether it complies with regulations such as GDPR, CCPA, and the EU AI Act. Understanding data lineage and usage rights will be vital to preventing future legal exposure.
For further reading, check out our article on the EU AI Act.
5. How do we ensure AI outcomes are explainable and auditable?
Black-box AI models may be impressive from a performance standpoint, but they are problematic in regulated industries. Boards will want assurance that AI decisions can be audited, explained, and defended. Increasingly, explainability is not just a technical issue; it is a governance requirement.
For further reading, check out our article on AI outcomes.
6. Are employees using AI tools safely and productively?
The rise of “shadow AI” (unauthorised AI tool usage by employees) is a growing concern. Boards will expect evidence that employees are enabled to use AI tools productively but within a controlled, compliant framework. At a recent HotTopics C-suite Exchange, a transformation director at an FMCG firm highlighted the need for experimentation under governance: "We really need to ensure that we have very targeted investments into AI if we really want to shift the needle." Allowing employees to explore AI tools with guardrails has uncovered valuable use cases that a top-down strategy alone might have missed.
For further reading on adopting AI responsibly, check out our article.
7. What talent or skills are we missing to scale AI?
Organisations cannot scale AI without the right capabilities. Boards will look for evidence of proactive workforce development strategies. This means not only hiring data scientists but also upskilling employees across departments. Companies are rolling out AI training across all organisational levels, from factory floors to executive teams, to demystify AI and foster comfort and adoption.
AI champions and working groups are emerging as effective models to accelerate transformation and encourage cross-functional collaboration. Kennedy also noted the emergence of new organisational structures to accommodate the infused nature of AI across business units.
For further reading, check out our article on AI, talent, and the skills gap.
8. What is our policy on ethical AI use?
What AI can do is no longer the only concern. What it should do is now central to the boardroom conversation. A clear ethical framework is essential. Boards are increasingly holding companies to account for the societal and reputational implications of AI decisions, and will expect leadership to articulate a values-based approach to innovation.
Read more about the ethical risks and principles of AI here.
9. How are we protecting sensitive IP and customer data in AI workflows?
With sensitive data being used to train, fine-tune, or prompt models, the risk to IP and privacy has never been higher. Whether using internal models or third-party APIs, board members will want to know how your organisation is controlling data input, storage, and model output. Governance of AI-related data flows must be as robust as your cybersecurity protocols.
Read our article on the race for data control and economic opportunity.
10. What are we doing to keep up with the speed of change in AI?
The rate of innovation in AI is dizzying. New models and tools are emerging weekly, and boards will want to know how your team is staying current without chasing fads. As Verhoeven observed, the ecosystem is “crazy” in its pace, requiring both caution and agility. Organisations must balance experimentation with due diligence, keeping regulatory awareness and data responsibility at the forefront.
Read more about our take on what AI means for digital transformation in the public sector.
Final thoughts: Reframing the conversation
Technology leaders are increasingly expected to become translators, converting complex technical initiatives into clear business narratives. Your employees are experimenting with ChatGPT. Your CEO is asking for an AI strategy. And your board is looking to you to lead with confidence, caution, and clarity.
The boardroom is no longer asking if you are using AI. It is asking how, why, and what happens next. Be ready.