
AI Act

The AI Act is a European law that sets standards for the development and use of artificial intelligence (AI) systems. It also provides rights to individuals interacting with these systems.

The main objective of the regulation is to ensure that AI systems used by organisations in the EU are safe and uphold fundamental rights, regardless of where they are developed. AI systems are classified into risk categories with corresponding regulatory requirements: the category of a system determines whether it is subject to strict rules, lighter rules, or none at all.

Organisations that develop and deploy AI will have clear guidelines on the standards their systems must meet in Europe. Organisations that use AI can rely on these systems being dependable and high quality. Individuals interacting with AI will have better protection through the rights and requirements specified in the AI Act.

For whom?

The AI Act affects developers of AI models and systems, as well as organisations using them. It imposes explicit obligations on developers and users of high-risk AI systems. Developers of AI systems that interact directly with citizens, such as chatbots, or that produce content, such as deepfakes, must meet transparency requirements. Some AI applications are prohibited altogether. Requirements are also set for developers of large language models and other general-purpose models.

Risk categories

The AI Act distinguishes several risk categories (a schematic sketch follows the list):

  1. Prohibited practices: AI systems used for harmful manipulation or unjust social scoring are considered to have unacceptable risks.
  2. High-risk: AI systems for biometric identification and categorisation, for evaluating candidates for a vacancy, for use in critical infrastructure, or for admitting students to education are considered risky for fundamental rights, safety and health. This also includes assessing whether individuals qualify for essential public services and subsidies, such as benefits. AI systems that act as safety components in EU-regulated products such as lifts and medical equipment are also considered high-risk.
  3. Deception risks: AI systems that generate content, including deepfakes, must disclose that the content is AI-generated or manipulated.
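
To make the tiers concrete, here is a minimal illustrative sketch that models the categories as an enum and maps a few example use cases onto them. It is an illustration only: the tier names, example use cases, and the classify helper are hypothetical simplifications, not an official classification tool; determining the actual category of a system requires legal analysis of its intended purpose under the AI Act.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "prohibited practice"            # e.g. harmful manipulation, social scoring
    HIGH_RISK = "high-risk"                       # e.g. biometrics, hiring, critical infrastructure
    TRANSPARENCY = "deception/transparency risk"  # e.g. chatbots, deepfakes, generated content
    MINIMAL = "minimal or no risk"                # everything else; no requirements apply

# Hypothetical lookup table of example use cases, loosely based on the
# categories described above; real classification is a legal judgement.
EXAMPLE_USE_CASES = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "screening job applicants": RiskTier.HIGH_RISK,
    "safety component in a medical device": RiskTier.HIGH_RISK,
    "customer-service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Toy classifier: unknown use cases default to minimal risk."""
    return EXAMPLE_USE_CASES.get(use_case, RiskTier.MINIMAL)

if __name__ == "__main__":
    for case in (*EXAMPLE_USE_CASES, "weather forecasting"):
        print(f"{case!r} -> {classify(case).value}")
```

The default-to-minimal behaviour mirrors the structure of the regulation itself: systems that do not fall into one of the named categories face no requirements under the Act.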

Obligations

The AI Act also imposes obligations on the providers of large AI models. Providers of the largest models must identify and mitigate systemic risks. In addition, all these models must comply with copyright rules, and adequate information must be provided so that developers building new AI on top of these models can meet the requirements of the regulation. This ensures that everyone can trust the building blocks of secure AI systems.

Minimal risk AI

Many AI systems are not regulated. For AI that does not fall into one of the above categories, there are no requirements because the risks are low or non-existent. Developers or organisations may, however, establish and adhere to voluntary codes of conduct.

Supervision

Compliance with prohibitions on AI, requirements for high-risk AI systems, and transparency obligations is monitored by national regulators. The obligations for large AI models are overseen at the European level by the European AI Office.

Entry into force

August 2024

The AI Act came into force on 1 August 2024 and is being implemented in phases (a small date-arithmetic sketch follows this timeline).

February 2025

Six months after entry into force, the provisions concerning prohibited practices apply.

August 2025

One year after entry into force, the provisions concerning general-purpose AI models apply, and the regulatory oversight must be established.

August 2026

Two years after entry into force, the remainder of the regulation applies, including the transparency requirements, the requirements for high-risk applications, and the sandboxes. (A sandbox is a testing environment where organisations can explore and test new and innovative products or services under the guidance of regulators.)

August 2030

AI systems deployed by government agencies that were placed on the market before the regulation came into effect must comply with the requirements for high-risk AI systems within six years of the regulation's entry into force.
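
As a quick arithmetic check on the phased timeline, the sketch below derives each application date from the entry-into-force date using the offsets named above (six months, one year, two years, six years). The exact calendar anchor of 1 August 2024 and the milestone labels are assumptions for illustration.

```python
from datetime import date

# Assumed entry-into-force date for illustration.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def add_months(d: date, months: int) -> date:
    """Shift a date by whole months, keeping the day of month."""
    total = d.month - 1 + months
    return d.replace(year=d.year + total // 12, month=total % 12 + 1)

milestones = {
    "prohibited practices apply (+6 months)": add_months(ENTRY_INTO_FORCE, 6),
    "general-purpose AI rules apply (+1 year)": add_months(ENTRY_INTO_FORCE, 12),
    "remainder of the regulation applies (+2 years)": add_months(ENTRY_INTO_FORCE, 24),
    "legacy government high-risk systems compliant (+6 years)": add_months(ENTRY_INTO_FORCE, 72),
}

for label, when in milestones.items():
    print(f"{when:%B %Y}: {label}")
```

Running this prints February 2025, August 2025, August 2026, and August 2030, matching the milestones in the timeline above.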

Related Links

  • Netherlands AI Coalition
  • Coordinated Plan on AI by the European Commission

Last modified on: 13 September 2025.
