On 21 May 2024, the Council of the European Union gave its final approval to the AI Act, a significant milestone, as this legislation sets a potential global standard for artificial intelligence (AI) regulation. What are the key principles of this legislation?
Purpose
The law aims to promote the development and deployment of safe and trustworthy AI systems within the internal market of the European Union (EU) by both private and public entities. It also aims to protect EU citizens’ fundamental rights and stimulate innovation in AI across Europe. For instance, the impact on people’s fundamental rights must be assessed before a high-risk AI system is deployed.
Risk categories
The new legislation classifies AI systems according to different risk levels. AI systems posing limited risks will face only light transparency obligations, while high-risk systems will have to meet strict requirements and obligations to gain access to the EU market, and will be recorded in an EU database of high-risk AI systems. Systems posing unacceptable risks, such as those used for behavioural manipulation, will be banned. The law also prohibits certain law-enforcement uses of AI, such as predictive policing based solely on profiling.
Enforcement
To manage enforcement, several new administrative bodies will be established, including an AI Office to enforce the common rules across the EU and a scientific panel of independent experts to support enforcement efforts.
The regulation will enter into force 20 days after its publication in the Official Journal of the EU and will apply two years after its entry into force, with exceptions for specific provisions. Learn more about the AI Act.