Presentation by Chancellor of Justice Ülle Madise at the Development Day of the Ministry of the Interior’s Area of Government at the Estonian Academy of Security Sciences, 18 September 2025
Since today is your motivation day as top experts in the field of security, I will try to avoid bland platitudes, be a bit provocative, and speak frankly. After all, isn't it often the unpolished speech, the kind you cannot simply nod along to or shrug at, the kind that makes you argue back or qualify, that sparks the brightest thoughts? So I do not claim to tell the ultimate truth, only to invite you on a small thought journey to stimulate debate.
Isn't it so that technology allows humans to be lazy, weak, malicious, and foolish? Artificial intelligence (or adum, as I call it) amplifies humans, including their laziness, weakness, malice, and foolishness.
What would happen if we ever actually reached a super-artificial intelligence that surpasses humans in all tasks and develops and governs itself can only be speculated about. Perhaps a future will come where "if there are no humans on Earth, there are no problems," or perhaps peace, abundance, and paradise await; who knows.
In any case, humanity has already missed the chance to avoid the risks of superintelligence, because this is a game you can only win by not playing. Humanity has decided to play. Or rather, most decision-makers probably don't even understand what game is being played or how high the stakes are. It is wise to adapt intelligently, hope for the best, and not throw the rifle into the bushes, as the saying goes; that is, not give up.
We could face the future more securely if humans were not so lazy and foolish. Humans must rule machines, not the other way around. Perhaps it is not wise to make our entire existence dependent on electricity, the internet, and machines; to chain ourselves to them voluntarily and then wait to see whether, and by whom, the plug gets pulled from the wall.
Maybe not all vents, windows, doors, and elevators need to run on electricity; not every house needs to be a plastic-bag-like greenhouse, cooled in summer and heated in winter, where without electricity we would be trapped as if in a sinking submarine.
Maybe it’s worth still knowing how to grow potatoes and tomatoes, preserve cucumbers, raise cows, pigs, sheep, and chickens; how to cook food, build a house, heat a stove, and navigate using an ordinary paper map.
All this, of course, alongside using modern technology, including AI — not instead of it.
In short, the first key idea: people should preserve the skills to survive like our ancestors did, while learning to use new technologies wisely and being able and willing not to submit to machines.
This requires a lot of willpower not to simply go with the flow. Hopefully, human rights will not be defined by China in the future. We can already see there how, under the banner of total security and strict order, a great deal can be done with the people's supposed consent. True, when the state sees every aspect of people's lives, it can in principle do much good. But human nature gives little reason to believe that it actually would. Power struggles and wars will never disappear; or rather, they might, but only when the last human disappears, as Ambassador Margus Laidre has said.
Someone will always want absolute power and will never intend to give it up, and part of the population, perhaps even the majority in a bout of mass hysteria, hopes that strict order is good. They want others to be forced to live as they themselves see fit, to be stripped of freedom and responsibility. When the iron fist squeezes their own throat, they no longer want it; but by then it is too late, as my great-grandmother said, "it's too late for the mouse to yawn."
The claim that "if you do no wrong, you have nothing to fear" is true, but only as long as the democratic rule of law protects everyone's freedom and responsibility, fundamental rights can be limited only exceptionally, and oversight of legality exists. Remember: human nature and society have not changed radically; the world is not ruled by universal goodness or sharp reason.
It would be convenient for the state if facial recognition and movement-pattern sensors allowed the immediate capture of anyone who committed a serious crime. Or why only serious? A crime is a crime! It is easy to think so. But the statute books are full of crimes, and not all are as obvious as murder, theft, or perjury.
In a surveillance society, the machine could instantly catch anyone who failed to pay state taxes; sounds fun, right? And why stop halfway? Ironically speaking, the machine could also instantly punish those who crossed the road in the wrong place … or protested … or dared to question the party's infallible guidelines. The road to hell is paved with good intentions, as we know.
By the way, it is not certain that the era of excessive political correctness and evidence-free public shaming (the so-called cancel culture) is over. It might not be — even Slavoj Žižek said that cancel culture is cruel and contrary to the rule of law. One can only be punished for an act that was punishable at the time it was committed, and if both the act and guilt are proven. Doubts must be interpreted in favor of the suspect.
I fear that an infallible, selfless, enlightened monarch does not exist. Power usually goes to one’s head.
So, the second key idea: it may be worth protecting people's freedom and responsibility, even if there seems to be no sign of autocracy and total surveillance and control could achieve much good. Always think about what could go wrong. And about whether every "good" is even worth wanting. Maybe that desire is just laziness, convenience, or curiosity.
The dream of total security is destined not to come true. No matter how watchful the eye, something can always happen. So far, Estonia has not agreed to a "preventive state," in which mass data processing ensures that even the risk of danger never arises (the extreme version of which, predictive decisions based on body language and facial microexpressions, is banned by the EU AI Act). Nor to a "proactive state," which analyzes your life and rushes to help unasked.

At first, it may seem convenient not to worry about stocking food because AI does it for you. But would you like it if AI decided, based on your health data and dietary guidelines, what you eat, ordered it, paid for it automatically from your account, and left you with only the task of eating it? Hardly. Removing decision-making freedom breeds defiance. It also erodes our humanity. Being human inevitably means that joy comes with sorrow, health with pain, victories with losses. Humans have the right to experiment, and to fail. That is the basis of innovation.
In any case, restricting decision-making freedom or any fundamental right, including photographing someone or examining their bank secrecy, requires under the Constitution a clear and specific law passed by the Riigikogu. The law must clearly state whether data may be collected at all, where and how, how long it is kept, who may use it and for what, and what internal and external control mechanisms exist. Money for surveillance technology may be spent only after the Riigikogu has passed such a law, the President has promulgated it, and, if needed, it has passed constitutional review.
The third key idea: we must constantly seek the best balance between restrictions serving security and general interests (including data collection and use) and people’s freedom and privacy. Where good can be done and where the good outweighs the risks and costs, innovative solutions must be used. For example, there will likely never be enough people or taxpayers’ money to monitor every dementia patient one-on-one in care homes. Cameras are very appropriate there. But the Constitution and psychology must always be considered.
Perhaps I sound like a firefighter who thinks the main difference between a violin and a double bass is that the double bass burns longer.
Thank you.