The World’s First AI Regulation Act Is Finally Here
Six crucial takeaways from ‘The EU Artificial Intelligence Act’ that will reshape the future of AI implementation in Europe
August 1st, 2024 — this is the date when the world's first comprehensive AI law, ‘The EU Artificial Intelligence Act’, came into force.
This is big!
This means that the EU will now ensure that all the AI systems deployed on the continent are safe, transparent, and traceable.
However, it also means that Europe will not have access to many state-of-the-art AI tools that other parts of the world do (starting with Llama 3, its future multimodal versions, and even Apple Intelligence).
And yes, not everyone will be happy with these regulations.
As with all legislation, the act is quite comprehensive and information-dense.
So, here is a story where I simplify it and share its six most important takeaways.
1. The Act Considers Some AI Systems At Unacceptable Risk & Bans Them
All AI systems operating in the EU are divided into four groups according to the risks that they pose.
1. Minimal to No Risk: This group includes AI-enabled video games, spam filters, and AI used in scientific research, all of which pose the least risk to human safety.
2. Limited Risk: This group includes systems that pose risks due to a lack of transparency, such as AI chatbots and AI-generated text, audio, and video content, which are now obligated to disclose that they are AI.
3. High Risk: This group consists of AI systems used in critical domains such as:
Education (for admission or evaluation)
Recruitment (CV-sorting software)
Management of vital infrastructure (such as water, gas and electricity supply)
Migration and border control systems
Credit scoring systems
Systems used in the administration of justice
Medical device software
These systems are subject to strict obligations before they can be put on the market.
4. Unacceptable Risk: This group consists of AI systems that clearly threaten people's safety, livelihoods, and rights.
These include:
Social scoring systems
Real-time remote biometric identification in publicly accessible spaces for law enforcement
Manipulative systems that circumvent free will
Systems that encourage dangerous behaviours or exploit specific groups (children, persons with disabilities, and more)
Emotion recognition systems in workplaces and educational institutions
All of these systems are completely banned.
2. The Penalties For Breaking The Rules Are Massive
The severity of penalties for non-compliance is structured into three tiers.
Highest-tier penalties for deploying AI systems at an unacceptable level of risk include fines of up to €35 million or 7% of global annual turnover
Mid-tier penalties for failing to meet the requirements for high-risk AI systems include fines of up to €20 million or 4% of global annual turnover
Lower-tier penalties for less severe violations (such as providing misleading information or not disclosing AI content) include fines of up to €7.5 million or 1.5% of global annual turnover
Clearly, even the lower-tier penalties are quite substantial!
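The tier structure above can be sketched in a few lines of code. This is a minimal illustration, assuming the commonly cited rule that for companies the cap is whichever is higher of the fixed amount and the percentage of global annual turnover; the tier labels are my own, not terms from the act.

```python
# Sketch of the EU AI Act's three penalty tiers. Assumes the
# "whichever is higher" rule for companies; tier names are
# illustrative labels, not taken from the act's text.

TIERS = {
    "unacceptable_risk": (35_000_000, 0.07),   # up to €35M or 7% of turnover
    "high_risk": (20_000_000, 0.04),           # up to €20M or 4% of turnover
    "misleading_info": (7_500_000, 0.015),     # up to €7.5M or 1.5% of turnover
}

def max_fine(tier: str, global_annual_turnover_eur: float) -> float:
    """Return the maximum possible fine for a given violation tier."""
    fixed_cap, turnover_fraction = TIERS[tier]
    return max(fixed_cap, turnover_fraction * global_annual_turnover_eur)

# Example: a company with €2 billion in global annual turnover
print(max_fine("unacceptable_risk", 2_000_000_000))  # 7% of €2B = €140M
```

For large companies, the percentage dominates: at €2 billion turnover, the top-tier ceiling is €140 million, four times the fixed €35 million cap.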
3. It Defines ‘AI Systems’ In A Broad Evolving Sense
The definition of ‘AI Systems’ according to the act is as follows:
“AI system” — a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
The definition focuses on functionality and autonomy rather than on the complexity of the underlying technology.
This means that it covers a broad range of software today and allows the regulation to remain relevant as AI technology evolves.
4. The Act Applies To The Complete AI Value Chain
The act applies to all of the following entities, regardless of their location, as long as they operate in the EU market.
Providers: Developers/creators of AI systems
Deployers: Users of AI systems
Importers: EU-based entities introducing non-EU AI systems
Distributors: Supply chain entities making AI systems available
5. There Are Some Important Areas Where The Act Does Not Apply
The act does not apply to open-source AI (except where such systems fall into the unacceptable-risk category) or to systems used purely for research purposes.
This is such a great move!
However, it also does not apply to some other areas, including military and defence systems and national security activities.
Fishy?
6. There Are Separate Strict Rules For General-Purpose AI (GPAI) Models
GPAI models are defined as follows:
“An AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”.
According to the act, all GPAI providers must:
provide detailed technical documentation of the model
help downstream users understand the model's capabilities and limitations
publish a summary of the data/content used to train the model
comply with EU copyright law
A GPAI model is considered to have a systemic risk if the cumulative amount of computation used for its training exceeds 10²⁵ floating point operations.
Such models must conduct model evaluations and adversarial testing, track and report serious incidents and ensure cybersecurity protections.
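To get a feel for the 10²⁵ FLOP threshold, here is a rough sketch using the widely used 6 × parameters × training tokens approximation for training compute. The approximation and the example model sizes are illustrative assumptions on my part, not figures from the act.

```python
# Rough systemic-risk check using the common "6 * N * D" heuristic
# (about 6 FLOPs per parameter per training token). The heuristic
# and the example numbers below are illustrative, not from the act.

SYSTEMIC_RISK_THRESHOLD = 1e25  # floating point operations, per the act

def estimated_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute in FLOPs."""
    return 6 * n_params * n_tokens

def is_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute exceeds the act's threshold."""
    return estimated_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e}")                 # ~6.3e24 FLOPs
print(is_systemic_risk(70e9, 15e12))  # False: just under the threshold
```

Under this back-of-the-envelope estimate, today's largest frontier models sit right around the threshold, which is presumably the point.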
Many might disagree, but I believe that, in the long term, such regulations will help develop AI systems aligned with human values that complement our strengths rather than exploit our weaknesses.
What are your thoughts on this act? Let me know in the comments below.