The AI Act is a European law that sets standards for the development and use of artificial intelligence (AI) systems. It also provides rights to individuals interacting with these systems.
The main objective of the regulation is to ensure that AI systems used by organisations in the EU are safe and uphold fundamental rights, regardless of where they are developed. AI systems are classified into risk categories, each with corresponding regulatory requirements. The category an AI system falls into determines whether it is subject to strict requirements, lighter requirements, or none at all.
Organisations that develop and deploy AI will have clear guidelines on the standards their systems must meet in Europe. Organisations that use AI can rely on these systems being dependable and high quality. Individuals interacting with AI will have better protection through the rights and requirements specified in the AI Act.
For whom?
The AI Act affects developers of AI models and systems, as well as organisations using them. It specifically imposes obligations on developers and users of high-risk AI systems. Developers of AI systems that interact directly with citizens, such as chatbots and deepfakes, must meet transparency requirements. Some AI applications are prohibited outright. Furthermore, requirements are set for developers of large language models and other general-purpose models.
Risk categories
The AI Act differentiates between several risk categories, namely:
- Prohibited practices: AI systems used for harmful manipulation or unjust social scoring are considered to have unacceptable risks.
- High-risk: AI systems used for biometric identification and categorisation, job candidate evaluation, or student admissions pose significant risks to fundamental rights, safety, and health.
- Deception risks: AI systems that generate or manipulate content, such as deepfakes, must disclose that the content is AI-generated or manipulated.
Obligations
The AI Act also imposes obligations on the providers of large AI models. For the largest models, systemic risks must be identified and mitigated. In addition, compliance with copyright law must be ensured for all these models. Sufficient information must also be provided so that developers building new AI systems on these models can meet the requirements of the regulation. This ensures that everyone can trust the building blocks of safe AI systems.
Low-risk AI
Many AI systems are not regulated. For AI that does not fall into one of the above categories, there are no requirements because the risks are low or non-existent. Developers or organisations may, however, establish and adhere to voluntary codes of conduct.
Supervision
Compliance with prohibitions on AI, requirements for high-risk AI systems, and transparency obligations is monitored by national regulators. The obligations for large AI models are overseen at the European level by the European AI Office.
Entry into force
2024
The AI Act is expected to enter into force in August 2024 and will then apply in phases.
2025
Six months after entry into force, the provisions on prohibited practices will apply.
One year after entry into force, the provisions on general-purpose AI models will apply, and regulatory oversight must be in place.
2026
Two years after entry into force, the remainder of the regulation will apply, including the transparency obligations, the requirements for high-risk applications, and the regulatory sandboxes. (A sandbox is a testing environment where organisations can explore and test new and innovative products or services under the guidance of regulators.)
2030
AI systems used by government agencies that were already on the market before the regulation took effect must comply with the requirements for high-risk AI systems within six years of its entry into force.