On 13 March 2024, the European Parliament approved the AI Act. This regulation establishes rules for the use of Artificial Intelligence (AI) within the European Union (EU). AI offers numerous opportunities and benefits, for example in healthcare and in more efficient production methods. However, AI also introduces certain risks. The AI Act forms part of the European AI strategy and is the world’s first comprehensive law on AI.
Purpose of the AI Act
The AI Act sets forth requirements and frameworks for the development and use of AI systems by governments and market actors. Its aim is to foster innovation and economic growth while safeguarding public values, as detailed in the update to the Values-Driven Digitalisation Work Agenda.
The European Parliament’s priority is to ensure that AI systems used within the EU are safe, transparent, traceable, non-discriminatory, and environmentally friendly. The Parliament also holds that AI systems should be overseen by people rather than operate fully automatically, and it advocates a uniform, technology-neutral definition of AI that can be applied to future AI systems.
Rules for AI
In December 2023, the European Parliament and the Council of the European Union reached a provisional agreement on the AI Act. The rise of AI applications such as ChatGPT heightened the need for clear rules. The AI Act prohibits certain AI systems that threaten citizens’ rights.
Key rules include:
- Safeguards for general-purpose AI.
- Restrictions on the use of biometric identification systems by law enforcement agencies.
- Prohibition of AI used to manipulate or exploit users’ vulnerabilities.
- The right for consumers to file complaints and receive meaningful explanations.
Next steps
The agreed text must first be formally adopted by the Council of the European Union. The AI Act will then become fully applicable 24 months after its entry into force. Some provisions apply sooner, such as the ban on AI systems that pose an unacceptable risk (after 6 months) and the rules on codes of practice (after 9 months).
More information can be found on the website of the European Parliament.