
The AI Act establishes Europe’s first comprehensive legal framework for AI, covering AI applications across all sectors. To assist government organisations, the ‘AI Act Decision-Making Tool’ was introduced. The tool helps determine whether an agency’s AI applications fall under the AI Act and, if so, which requirements apply.
The AI Act in short
The European AI Act came into effect on 1 August 2024. The Act defines the rights and duties of developers and deployers of AI systems within the European Union. These rules aim to promote the development and use of trustworthy AI, boosting citizens’ trust in AI and thereby supporting innovation and economic growth. The EU seeks to achieve this by dividing AI systems into risk categories (prohibited AI, high-risk AI, and deception risk). Some AI practices will be banned from 2 February 2025; other AI systems must comply with the AI Act from August 2026 or August 2030.
Knowledge resources
This law also applies to government organisations in the Netherlands, including executive agencies, municipalities, provinces, water authorities, and the central government. As providers and/or deployers of AI systems, they will have to comply with the new rules imposed by the Act. The Ministry of the Interior and Kingdom Relations (BZK) supports governments in complying with the AI Act. To prepare government agencies for its gradual enforcement, BZK has developed several knowledge resources. These include the decision-making tool (Dutch, though linked to English-language content), various webinars (Dutch), short videos (Dutch), and featured stories.
For more information on current rules and regulations, as well as recommendations and resources for the responsible use of AI and algorithms, please see the Algorithm Framework (Dutch).
The decision-making tool is still under development; all versions are built with open-source technology. Suggestions are welcome at: ai-verordening@minbzk.nl.



