World-First AI Act: What C-Suite Leaders Need to Know

The European Union approved AI governance regulations this week, the world’s first framework for the technology. As it looks to mitigate AI’s risks to society, what does it mean for C-suite leaders?


AI governance clocked a major win this week when the European Parliament endorsed the EU’s proposed AI Act. In what observers and industry analysts see as a milestone on the road to AI regulation, the Act now requires formal approval from ministers of EU member states before being implemented over a period of three years.


For members of the public there will be little change to daily life. For C-suite leaders tasked with leading teams, product and service launches, or even entire transformations, however, the impact is sizable. 


The AI Act: Overview


Legal precedent?

Clearly there is little legal precedent for leaders to look to. 


In the US, President Joe Biden signed an executive order on AI in October 2023, outlining what he called “the most sweeping actions ever taken to protect Americans from the potential risks of AI.” As it stands, the order requires some organisations to share the results of safety testing and other information with the government, among other things. Alongside the federal measure, at least seven U.S. states have also proposed bills that would regulate the technology, according to the Associated Press. All of this is happening in parallel with the EU’s Act.


What leaders can do is look for patterns in similar past legislation.


The EU’s GDPR, which reorganised the management of its citizens’ data, forced many sectors to adhere to stricter guidelines on customer data, experiences, marketing and digital products. Businesses had to move quickly to audit their digital postures, ensure transparency with partners and, in some cases, reorganise teams. Leaders should recall the steps taken to ensure a smooth transition, and replicate them where appropriate.


GDPR has also fostered greater trust among consumers and users when they interact with organisations, a factor the EU wants to replicate with AI, its Act and its hundreds of millions of citizens.


It is also worth remembering that trust was the central theme of this year’s World Economic Forum at Davos.


AI governance and definitions

The EU’s definition of AI is specific: any “machine-based system designed to operate with varying levels of autonomy”. This goes deeper than most standard definitions, perhaps to cover generative AI such as ChatGPT. These systems can be adaptive and know “how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”. This addition means the use of chatbots is included, too.


For leaders in the military, defence or national security sectors, or those in scientific research and innovation, there is an exemption: the Act does not prohibit their AI systems from posing the “unacceptable risks” it bans elsewhere.


For every other sector, “unacceptable risks” include:

  • Systems that seek to manipulate people to cause harm; 
  • Social scoring systems that classify people based on social behaviour or personality;
  • Any attempts at predictive policing; 
  • Monitoring people’s emotions at work or in schools; 
  • Biometric categorisation systems, such as retina scans, facial recognition and fingerprints, used to infer characteristics such as race, sexual orientation, political opinions or religious beliefs; 
  • Compiling facial recognition databases through scraping facial images from the internet or CCTV.


There are grey areas in its AI governance. Certain exemptions are being prepared for law enforcement in a number of specific circumstances, and these will need approval from the authorities, except, you guessed it, in further exceptional circumstances where AI systems can be deployed without prior approval.


Few C-suite leaders will be prioritising these forms of AI systems and services, but it would be prudent to remind CEOs, boards and investors of these red lines.


Sector-specific policies

Leaders in the utilities sectors (water, gas and electricity) and those in education, employment, healthcare and banking may find their systems fall into what the Act deems the “high-risk” category. These areas do not face outright bans such as those listed above, but they will be observed closely.


What does that mean? AI systems will need to be accurate, subject to risk assessments, overseen by humans and have their usage logged. EU citizens can also ask for explanations about decisions these AI systems have made that affect them, removing the risk of humans being kept out of the loop.
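The logging and explanation obligations can be sketched in code. Purely as an illustration (the Act prescribes no implementation, and every name below is hypothetical), a team might record each AI decision alongside its human reviewer so an explanation can be produced on request:

```python
import json
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("ai_decisions")

def log_decision(system_id, inputs, output, reviewer=None):
    """Record one AI decision so it can later be explained on request.

    Hypothetical sketch: field names and structure are illustrative,
    not mandated by the Act.
    """
    record = {
        "timestamp": time.time(),
        "system_id": system_id,
        "inputs": inputs,
        "output": output,
        # A None reviewer flags a decision that lacked human oversight.
        "human_reviewer": reviewer,
    }
    logger.info(json.dumps(record))
    return record
```

A decision log like this gives compliance teams a single trail to query when a citizen asks why, say, a loan application was declined.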


This may compound what is already seen as the ‘jagged edge of AI’: similar AI solutions deployed in similar contexts produce contrasting results. The reasons for this are numerous, but what is clear is that the once-hidden idiosyncrasies of the workforce are being refracted and reflected in AI’s output, particularly that of generative AI.


Consider the early results of generative AI proofs of concept. In a tandem study comparing team quality and AI quality, the best team members paired with the best AI solutions did not produce the best results. The reason? High trust in high-quality AI solutions means less oversight from individuals and a higher chance of hallucinations going unnoticed. Only through trial and error will leaders be able to reliably confirm this counter-intuitive result in the field.


Generative AI governance

Speaking of which: generative AI is covered by provisions for what the Act calls “general-purpose” AI systems.


There will be a two-tiered approach. 


Under tier one, all model developers will need to comply with EU copyright law and provide detailed summaries of the content used to train their models. It is unclear how already-trained models will be able to comply, and some developers are already under legal pressure. Open-source models, which are freely available to the public, unlike “closed” models such as OpenAI’s GPT-4, will be exempt from the copyright requirement.


Tier two is stricter. It is reserved for models that pose a “systemic risk” and is expected to include chatbots and image generators. The measures for this tier include reporting serious incidents caused by the models, such as death or breach of fundamental rights, and conducting “adversarial testing”, where experts attempt to bypass a model’s safeguards.


C-suite leaders should carefully brief (and be briefed by) the teams under their remit, from engineering to data to security to systems and infrastructure, to ensure these tiers are understood. Testing and experimentation are welcomed exercises in many organisations, but the risks of inadvertently breaching the Act are now too great to ignore, as we see below.

How much bite does the Act have?

A new European AI Office will set AI governance standards and be the main oversight body for general-purpose AI models. Fines will be tiered:

  • €7.5m, or 1.5 percent of a company’s total worldwide turnover, whichever is higher, for giving incorrect information to regulators;
  • €15m, or 3 percent of worldwide turnover, for breaching certain provisions of the Act, such as transparency obligations; 
  • €35m, or 7 percent of turnover, for deploying or developing banned AI tools. 

Proportionate fines for smaller companies and startups are planned.
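The “whichever is higher” arithmetic is easy to misjudge at a glance. As a minimal sketch of the figures above (the tier names are illustrative labels, not official terminology, and percentages are held as per-mille integers to keep the arithmetic exact):

```python
def applicable_fine(violation: str, worldwide_turnover_eur: int) -> int:
    """Return the maximum fine in EUR for a violation tier.

    The fine is the HIGHER of a fixed amount and a percentage of total
    worldwide turnover. Percentages are stored per-mille (tenths of a
    percent) so everything stays in exact integer maths.
    Tier names here are illustrative, not the Act's own wording.
    """
    tiers = {
        "incorrect_information": (7_500_000, 15),   # €7.5m or 1.5%
        "provision_breach":      (15_000_000, 30),  # €15m  or 3%
        "banned_ai":             (35_000_000, 70),  # €35m  or 7%
    }
    fixed, per_mille = tiers[violation]
    return max(fixed, worldwide_turnover_eur * per_mille // 1000)

# A company with €2bn turnover deploying a banned tool:
# max(€35m, 7% of €2bn) → €140m
```

The point for leaders: for any sizeable enterprise, the percentage branch dominates, so exposure scales with turnover rather than capping at the headline figures.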


“The AI Act has nudged the future of AI in a human-centric direction, in a direction where humans are in control of the technology and where it, the technology, helps us leverage new discoveries, economic growth, societal progress and unlock human potential.”

Dragos Tudorache, member of the European Parliament and a lead drafter of the AI Act.


Closing thoughts

In short, for C-suite leaders looking to keep up with the political appetite for AI governance, past legislation offers hints as to how organisations and entire industries will need to evolve. Yet, as ever with AI, the future promises nothing: through collaboration, transparency and partnerships, leaders will need to lean on peers and even competitors to navigate what is already shaping up to be a transformative period for society.


