Social challenges
It is not sufficiently clear to society which standards, based on public values, algorithms should be guaranteed to meet. It is often unclear when algorithms are being used, even when they directly affect users. For the government in particular, it should be clear to citizens when an algorithm has been used in decision-making, especially when it has the potential to affect their situation. It is also important that these algorithms are not perceived as a “black box”. The government must set a good example by experimenting with values-driven AI applications that support societal tasks.
Results achieved by 2023
- Efforts towards the EU’s AI Act have resulted in human rights featuring more prominently in risk assessments for high-risk AI systems, and in supplementary measures that strengthen transparency and the rights of natural persons.
- An algorithm registry has been launched.
- A vision on generative AI is being developed and will be published in 2024.
- An initial version of the “Use of algorithms” implementation framework (the algorithm framework) provides insight into the key standards.
- ELSA labs (ELSA stands for ethical, legal and societal aspects) for people-oriented AI have been launched, in which participants collaborate to produce algorithms that respect human rights and public values.
- An AI Validation Team has been established within the Ministry of the Interior and Kingdom Relations (BZK), in which software engineers and policymakers work together to gain knowledge and experience in validating algorithms. In addition to creating tools, the team makes the risks and opportunities of generative AI measurable and explores which datasets are needed to test AI applications.
Goals & indicators
| Goals | Indicators |
|---|---|
| 1. We set clear requirements for the use of algorithms by government organisations, so that we arrive at responsible and innovative generative AI applications in government. This includes the creation of a uniform algorithm framework, supported by government organisations, for the deployment of algorithms. The requirements cover topics such as roles and responsibilities for the review of algorithms and AI (governance), the application of a uniformly developed methodology to detect bias and discrimination, the implementation of human rights impact assessments (e.g., IAMAs), and generally applicable procurement conditions for algorithms that the government sources from third parties. | |
| 2. The government is transparent about the use of algorithms. | |
| 3. We will ensure further improvement of the supervision of AI and algorithms, targeting both companies and governments. | |
| 4. We set requirements at the European level for technology, such as generative AI systems, to ensure that they are secure and in line with our public values. | |
What are our forthcoming actions?
For the goals we are setting for the coming year to further regulate algorithms, see the actions under priority 3.3.