
The Impact Assessment for Human Rights in the Use of Algorithms (IAMA) has been updated. Developed in 2021, the tool helps organisations assess how algorithms might affect human rights. The updated version better reflects practical experience within government and aligns with the AI Act.
User feedback and lessons from real-world application have shaped this update. The IAMA has already proven valuable in guiding discussions about human rights, and these refinements make it even more practical and effective.
Key improvements
To make the IAMA more user-friendly, the tool has been simplified and supplemented with a separate guidance document. In addition, the tool now aligns with Article 27 of the AI Act, which requires a fundamental rights impact assessment for high-risk AI systems. The IAMA offers governments a practical resource to fulfil this obligation, with questions directly related to the AI Act clearly marked.
Tool for dialogue and decision-making
The IAMA is designed to facilitate dialogue and support decision-making, not to serve as a simple checklist. It applies to high-risk AI systems and other impactful algorithms, and it forms part of the broader Algorithmic Framework for the Responsible Use of Algorithms by the government (Dutch), which helps ensure informed decisions and accountability.
Utrecht University developed and updated the IAMA on behalf of the Dutch Ministry of the Interior and Kingdom Relations (BZK). The latest version is available on the Dutch government website (Dutch).
Get started with the IAMA
Over the coming months, the Ministry of BZK plans to connect with government organisations to gather ideas on how best to support the rollout of the IAMA. Your input is very valuable to us; please get in touch and share your thoughts and feedback at algoritmes@minbzk.nl.