Keeping Pace with AI: Risk vs Reward

Explore the balance between innovation and risk in AI adoption. Technology executives gathered at The Studio to discuss the challenges and opportunities of AI, addressing data privacy, cybersecurity and regulatory compliance.

The rise of artificial intelligence offers humanity opportunities and challenges in equal measure: the potential to transform industries, streamline processes and enhance decision-making, set against the dilemmas surrounding data privacy, security, bias and societal impact.

Furthermore, as these technologies gain wider adoption, new risks have surfaced around information governance, cyber attacks, privacy and regulatory compliance, especially for multinational corporations navigating regional and cultural nuances. Add in the fact that most data is unstructured, and the risks multiply across siloed apps, multiple vendors and complicated tech stacks.

In this panel debate, technology executives and thought leaders explored how to keep pace with change in the AI market, balance innovation with risk management and safely scale beyond pilot projects.

AI risk vs reward: meet the panellists

With Doug Drinkwater moderating, the roundtable panellists included:

  • Xuyang Zhu, Senior Counsel, Taylor Wessing
  • Kirstine Dale, Chief AI Officer, Met Office
  • Kate Sargeant, Chief Data Officer, Financial Times
  • Ian Cohen, Chief Product & Information Officer, Acacium Group
  • Omar Davison, Staff Solutions Engineer, Box

AI risk vs reward: key takeaways

  • Xuyang Zhu addressed the perception of AI as a bubble, comparing it to the blockchain hype. She noted that since ChatGPT's emergence, AI adoption has surged across sectors, initially for internal efficiencies and now increasingly for external functions, particularly in creative media and software coding. Xuyang highlighted a shift from initial reluctance to embrace AI due to risks, to developing sustainable AI policies, emphasising that "the risk of not using AI is greater ultimately than the risks of using it." 

  • Omar Davison noted that AI has evolved from enhancing individual productivity, such as writing emails or understanding documents faster, to creating global operational efficiencies. He pointed out the challenge: "It's the gap in between where the maturity seems to be the most confuddled," where businesses struggle to scale individual successes to organisational levels. He emphasised that businesses need to bridge this gap to achieve broader success with AI, stating, "AI is the answer to AI's own problem."

  • As Chief AI Officer, Kirstine Dale emphasised that her organisation aims to "embed AI into everything that the Met Office does," improving productivity with tools and incorporating machine learning into the scientific discovery process. Kirstine highlighted the organisation’s efforts to "speed up weather forecasting, make it more accurate," and improve climate forecasts to address pressing societal challenges such as climate change and extreme weather events. She also underscored the timely arrival of AI, stating it offers "a wealth of tools and opportunities" essential for addressing these critical issues.

  • Kate Sargeant from the Financial Times addressed the question of whether licensing deals with AI platforms like OpenAI's ChatGPT create value for the creative industry. She acknowledged the division within the publishing industry, noting that "half of them are doing deals" while the other half are suing big technology companies for using their content without permission. The FT has chosen a "progressive and innovative stance" by entering into a licensing deal with OpenAI, emphasising that it would rather be "part of those conversations than outside of the room." She added that this approach involves detailed legal measures to manage risks as effectively as possible.

  • Ian Cohen addressed the risk-reward of AI agent chaining and layers of non-deterministic results, acknowledging the potential benefits but also expressing caution. He used the analogy, "with great power comes great responsibility," to emphasise the importance of appropriate use. In the context of his healthcare services and staffing business, he sees significant benefits in using one AI to validate another's output, ensuring the correctness of the context and situation. However, he is "a little bit more sceptical" about the use of randomised agents working on each other, indicating that the value and safety depend heavily on the specific use case.