The Role of Digitalization in Public Decision-Making
Dear colleagues, ladies and gentlemen,
I would like to thank the Luxembourg Presidency and the Council of Europe for bringing us together here. It is an honour to speak to you today about the role of digitalization in public decision-making.
AI is rapidly reshaping our societies. The opportunities are clear — technology allows us to process vast amounts of data, automate tasks, and improve efficiency. However, we must also remain aware of the risks, especially concerning the rule of law, fundamental rights, and human dignity.
My primary concern is that we are becoming too dependent on technology. I mean this in a very broad sense: not only over-reliance on automation in government agencies, but also in our everyday lives.
If we allow ourselves to become passive users, relying on AI systems without critical human oversight, we risk creating a society where machines dictate our lives. Machines would then govern us, not the other way around.
In my opinion, it is important to continue training our own human brains to remain capable of thinking creatively and critically. We must encourage lifelong learning to keep these skills alive. I believe this is a precondition for protecting human rights and human dignity in the future.
Let me be clear: AI is an incredible tool, useful and valuable. I believe most of us already use AI, both for personal purposes and in our workplaces.
My colleagues at the office do, and so do I. For instance, I recently used AI to analyze and summarize case law from the European Court of Human Rights. Just yesterday, AI helped me understand the details of the emissions trading system in the shipping industry. Incredible!
However, because AI learns from past decisions and continuously refines itself, blind trust and human laziness can amplify errors. This is why smart human oversight is essential.
I believe that in all our countries, government agencies are facing budget cuts. As a result, there is a push to automate decisions and to replace civil servants with machines and AI. Automation, if done properly and with respect for human rights and the rule of law, can be a good solution and may even improve the quality of government decisions. The Council of Europe Framework Convention on AI and the EU AI Regulation offer good support and guidance in this regard.
In automated, AI-assisted decision-making, a wise, ethical, and compassionate civil servant must always be ready to step in. This civil servant must be motivated, skilled, and willing to correct mistakes, both in the AI system itself and in its decisions. This is especially crucial when human destinies are at stake, as in high-risk AI applications such as social benefits, criminal charges, and residence permits.
If a machine miscalculates, if biases are embedded in an AI system's training data, or if the system is built on flawed or outdated data, it must be possible to challenge any unfair decision effectively. This is only possible if civil servants are motivated, willing, and skilled enough to study the case from scratch and change the outcome if necessary.
Our role as ombudsmen and national human rights institutions (NHRIs) is to advocate for transparency and human oversight, and to help people whose rights have been violated by activities within the life cycle of AI systems.
Once again, governments and our institutions must use AI to make better decisions, but human responsibility must always remain at the forefront.
There are areas of special concern, including cyber security, biases embedded in AI training data, and deepfakes.
Governments and private entities gather vast amounts of personal data, and in the age of AI, this data is more vulnerable than ever. A security breach or a poorly designed system can expose sensitive information, leading to identity theft, discrimination, or even state overreach, including mass surveillance. I am concerned that our governments and private entities are not investing enough in cybersecurity, and too many people are not protecting their own data properly.
AI can also reinforce existing biases. If an AI system is trained on biased data, it can perpetuate those biases. The solution isn’t to abandon AI but to ensure it is rigorously tested for fairness and transparency.
Cases of predictive policing, where AI models disproportionately target certain communities, are already known. In some migrant camps, cameras and AI are used to predict criminal behaviour. This raises the question: what exactly do national security interests mean in the CoE Framework Convention and the EU AI Regulation?
I recall that the European Parliament voted to ban the use of real-time facial recognition technology in public spaces. A surveillance society is incompatible with human rights standards and, in the end, dangerous to individuals and to free societies. This is something we, as ombudsmen and NHRIs, should tirelessly explain.
Another growing concern is deepfake technology, which creates realistic fake content. It has been used for misinformation, identity fraud, and even to manipulate legal and political processes. Deepfakes can be used to harass, blackmail, and violate the safety and dignity of individuals, particularly children and women. This highlights the urgent need for stronger regulations and better detection tools.
The role of ombudspersons and human rights institutions is crucial. How can we best fulfil this role?
I think we should learn to understand and use AI. I would like to thank all the colleagues who participated in the ombudsmen workshop on AI in Tallinn last year. I believe this kind of cooperation is extremely valuable. Exchanging experiences on how to promote human rights in the era of generative AI, how to investigate government AI systems used for decision-making, and how to protect citizens' rights can strengthen us all.
I think it is our role to ensure that proper legal remedies are available to citizens whose rights have been violated by decisions supported, or even made, by AI. This requirement is supported by both the CoE Framework Convention and the EU AI Regulation.
We must ensure that AI systems remain tools of justice, not instruments of control. This means advocating for transparency in AI decision-making, demanding high standards of cybersecurity, and, most importantly, keeping human responsibility at the forefront of the decision-making process.
As public decision-makers, we have a responsibility to strike a balance. We must embrace digitalization where it enhances efficiency and fairness, but we must also push back when it undermines human rights and dignity.
However, we must not focus solely on risks. AI, if used wisely, can help detect corruption, improve legal research, and make institutions more efficient. AI can also be a valuable tool for ombudspersons and human rights institutions: it can help analyze large volumes of complaints, detect patterns of injustice, and identify systemic human rights violations more efficiently. AI-driven translation and legal research tools can also improve access to justice, making it easier for people to seek help and receive fair treatment.
The key is balance: embracing technology while always ensuring it serves people.
In conclusion, our task as ombudsmen and NHRIs is not to resist technological progress, but to guide it responsibly. AI should remain a tool in the hands of informed, ethical professionals. Our future should be one where human intelligence and AI complement each other — where technology enhances justice.
Thank you.