Responsible use of algorithms requires insight into their impact on human rights. To this end, Utrecht University developed the Fundamental Rights and Algorithms Impact Assessment (FRAIA), commissioned by the Ministry of the Interior and Kingdom Relations. Together with the National ICT Guild, they conducted pilots at 15 government organisations. The report FRAIA in Action shares the lessons learned.
Overall impression
The pilots show that FRAIA often leaves a positive impression, despite prior scepticism. Participants appreciated the different perspectives (legal, ethical, technical) it brought together and the discussions it generated, and they see FRAIA as a useful tool for discussing fundamental rights, data and ethics. However, participants also found the process sometimes time-consuming, considered some questions less relevant, and struggled to involve all the necessary roles.
Key recommendations
The report also makes several recommendations to improve FRAIA:
- Develop a pre-FRAIA or quick scan. Many organisations find it difficult to determine when to conduct a FRAIA. A FRAIA is not necessary for every algorithm; it is primarily intended for so-called high-risk algorithms.
- Update the tool regularly, as developments move fast, and bring FRAIA more closely in line with the European AI Act.
- Provide good process guidance. It is important to raise the right points and ask follow-up questions about them, to get as much as possible out of the limited time available. The process supervisor does not have to be external, but must be objective.
- Split the manual and the questions into two documents in a future version. The length of the current FRAIA document can be daunting, yet about three-quarters of it is explanatory text.
Want to read all the recommendations? A full overview is in Chapter 6 of the report.
More about FRAIA
- Download the interview with the authors.
- Download the report Fundamental Rights and Algorithms Impact Assessment (FRAIA).