The European AI Act will soon enter into force. This regulation sets requirements for the development and use of artificial intelligence (AI) in the European Union (EU). It takes effect on 1 August 2024. It is important for your organisation to understand what lies ahead and how to prepare for its entry into force.
Objective of the AI Act
The AI Act applies to both public and private entities, outlining the responsibilities and duties of AI system developers and users throughout the EU. These rules are intended to ensure that only trustworthy AI, aligned with European public values and fundamental rights, is developed and deployed. The EU aims to strengthen public trust in AI, thereby facilitating innovation and economic growth. The regulation adopts a risk-based approach, applying different rules to different risk categories.
Changes for your organisation
From the regulation’s entry into force on 1 August 2024, its provisions will be phased in, with most obligations taking effect between six months and three years after that date. AI systems already in use by government organisations have up to six years to comply, and not all obligations will take effect immediately. This staged implementation gives organisations time to prepare.
Prohibited AI practices
The first category regulated under the new law covers prohibited AI systems, which violate European fundamental norms and values because they pose unacceptable risks. These provisions become effective six months after the regulation enters into force. Examples include AI systems that manipulate human behaviour in harmful ways and those used for unfair evaluation of individuals through ‘social scoring’.
High-risk AI systems
Another category under the regulation is high-risk AI, which encompasses systems that offer significant opportunities and benefits but also pose considerable risks that need to be mitigated. For instance, systems used for filtering job applications or assisting government agencies with asylum or residency applications must comply with the regulatory requirements two years after entry into force (by 1 August 2026). High-risk AI systems in products governed by specific EU product regulations (such as elevators) have until 1 August 2027 to comply. Systems already operational before the regulation entered into force have until 1 August 2030 to meet the obligations.
Next steps for your organisation
First, determine if your AI systems fall under the prohibited practices category and phase them out before these prohibitions take effect. Then, identify any high-risk AI systems and understand the upcoming requirements for these.
Series of articles
The AI Act is an extensive law with many detailed provisions and considerations. Upcoming articles on this website and its Dutch counterpart, DigitaleOverheid.nl, will delve into the prohibitions, obligations, and nuances of the regulation. This is the first in a series aimed at exploring various aspects of this regulation.
Questions
For questions about the implementation of the AI Act and its implications for your government organisation, please contact the Ministry of the Interior and Kingdom Relations at: ai-regulation@minbzk.nl.
AI Act timeline
1 August 2024 – AI Act comes into effect
1 February 2025 – Prohibitions on unacceptable-risk AI practices take effect
1 August 2025 – General-purpose AI model requirements effective
1 August 2026 – Most articles, including obligations for high-risk AI, effective
1 August 2027 – Obligations for high-risk AI systems in products effective
1 August 2030 – Obligations take effect for AI systems that government organisations already used before the Act entered into force