The Ombudsman as a guardian of human rights in the world of Generative Artificial Intelligence
Dear colleagues, ladies and gentlemen,
I would like to thank our esteemed colleague, Marino Fardelli, for inviting me to this wonderful event. I wish you, dear Marino, and all your colleagues continued success and fulfilment in your efforts.
I remember the “Rome Call for AI Ethics” from 2020, which emphasized an ethical approach to AI that fosters innovation while respecting human dignity. I recently came across an interview with Padre Paolo Benanti, professor of tech ethics at the Pontifical Gregorian University in Rome. In the interview, he asks and attempts to answer some of the most pressing questions: How can we preserve and protect human dignity? How should we address the rising inequality that innovation can exacerbate? And, perhaps most importantly, what does it mean to be human in the age of Generative AI?
The opportunities are clear — technology allows us to process vast amounts of data, automate tasks, and improve efficiency. However, we must also remain aware of the risks, especially concerning the rule of law, fundamental rights, and human dignity.
My primary concern is becoming too dependent on technology. I mean it in a very broad sense: not only an over-reliance on automation in government agencies but also in our everyday lives. If we allow ourselves to become passive users—relying on AI systems without critical human oversight—we risk creating a society where machines dictate our lives. In this case, machines would govern us, not vice versa.
In my opinion, it is important to continue training our own human brains to remain capable of thinking creatively and critically. I believe this is a precondition for protecting human rights and human dignity in the future.
AI is an incredible tool, useful and valuable. I believe most of us already use AI, both for personal purposes and in our workplaces. My colleagues at the office do, and I do as well. AI can be used to analyze case law from the European Court of Human Rights or even the opinions of ombudspeople from other countries. It's truly incredible!
I believe that in all our countries, government agencies are facing budget cuts. As a result, there is a push to automate decisions and replace civil servants with machines and AI. Automation might be a good solution, even improving the quality of government decisions, if done properly and with respect for human rights and the rule of law. The Council of Europe Framework Convention on AI and the EU Regulation on AI offer good support and guidance in this regard.
In the case of automated, AI-assisted decision-making, a wise, ethical, and compassionate civil servant must always stand ready to correct mistakes, both in the AI system itself and in its decisions. This civil servant must be motivated, skilled, and willing to do so. This is especially crucial where human lives are at stake and in high-risk AI applications, such as determining social benefits, criminal charges, residence permits, or access to medical care.
If a machine miscalculates, or if biases are embedded in an AI system’s training data, or if the system is built on flawed or outdated data, it must be possible to efficiently challenge any unfair decision. This is only possible if civil servants are motivated, willing, and skilled enough to study the case from scratch and change the outcome if necessary.
Once again, governments and our institutions must use AI to make better decisions, but human responsibility must always remain at the forefront.
There are areas of special concern, including cybersecurity, biases embedded in AI training data, and deepfakes. I am concerned that our governments and private entities are not investing enough in cybersecurity, and that too many people are not protecting their own data properly. Deepfake technology has been used for misinformation, identity fraud, and even to manipulate legal and political processes. Deepfakes can be used to harass, blackmail, and violate the safety and dignity of individuals, particularly children and women. This highlights the urgent need for stronger regulations and better detection tools.
The role of ombudspersons is crucial. How can we fulfil this role best?
I think we should learn to understand and use AI. Exchanging experiences on how to promote human rights in the era of generative AI, how to investigate government AI systems used for decision-making, and how to protect citizens' rights can strengthen us all.
I think it is our role to ensure the availability of proper legal remedies for citizens whose rights have been violated by decisions supported or even made by AI.
I think we must advocate for transparency in AI decision-making, demand high standards of cybersecurity, and, most importantly, keep human responsibility at the forefront of the decision-making process.
The key is balance: embracing technology while always ensuring it serves people.
In conclusion, our task as ombudspersons is not to resist technological progress, but to guide it responsibly. Our future should be one where human intelligence and AI complement each other, where technology enhances justice.
Thank you.