Preamble: Why a Software Engineer Is About to Talk to You About Condorcet #
There's something I need to say before anything else, because it's the reason this article exists.
I did my A-levels in science. Then I read Philosophy at the University of Urbino — one of those places where philosophy isn't a bolt-on but a way of inhabiting the world, where they teach you that questions matter more than answers and that "what's the point?" is itself a philosophical question. After that, I chose technology. I've been writing code, managing infrastructure and building digital products for twenty years. And for twenty years, with varying shades of goodwill and sarcasm, I've been told that philosophy "isn't good for anything." That in the tech world, what counts is programming languages, frameworks, deployments. That Kant and Popper are charming intellectual indulgences but fundamentally decorative — a bit like a nice print above the sofa that nobody ever looks at.
I always knew that wasn't true. I felt it every time I caught an ethical implication in a project meeting that everyone else had missed. I felt it when I read a piece of European legislation and, instead of seeing nothing but a compliance checklist, recognised the legal translation of a precise philosophical principle. I felt it when I discussed software architecture and realised that the truly important decisions weren't technical — they were decisions about what kind of world that software would help to create.
Today that feeling has hardened into certainty. A humanities education is not a complement to technical thinking. It is its indispensable ethical foundation. Without it, technology is blind. Powerful, lightning-fast, ruthlessly efficient — and blind.
We live at a moment in history when the systems we build can alter the neurological development of an entire generation, sway elections, decide who gets a mortgage and who doesn't, determine what millions of people will believe to be true tomorrow morning. In this context, knowing how to configure a Kubernetes cluster is not enough. You also need to know what Hans Jonas thought about responsibility towards the future. You need to have read Mill on the boundary between liberty and harm. You need to understand Anders's Promethean gap — and recognise it when you see it implemented in a recommendation algorithm.
This is no longer an academic question. It's not the luxury of people with time for highbrow reading. It's an existential question — in the most concrete, least rhetorical sense of the word. Existential for us as a species, because the technological decisions of this decade will define the cognitive and social conditions in which our children will live. And existential for business, because anyone who builds technology without an ethical compass isn't just running a moral risk: they're running a market risk. Europe has grasped this. The AI Act, the GDPR, the Cyber Resilience Act, the Product Liability Directive — these are the signal that the era of technology without critical thinking is over. Those who cannot read that signal — who lack the cultural tools to understand why those regulations exist and not merely how to comply with them — will fall behind. Not as punishment, but through inadequacy.
So yes: after twenty years in tech, I can say with reasonable confidence that reading Philosophy was the most important professional decision of my life. More important than any certification, any language learnt, any project delivered. Because it gave me the one thing that technology cannot give you: the ability to ask yourself whether the thing you're building ought to be built.
What follows is an attempt to explain why.
The Most Powerful Rhetorical Weapon of Our Time #
There is a word that, in contemporary public debate, functions as an all-purpose skeleton key. A word that shuts down discussions rather than opening them, that casts whoever invokes it as a champion of civilisation and whoever questions it as an obscurantist. That word is progress.
"Europe is holding back progress." "Bureaucracy is killing innovation." "Regulations are shackling the future." We hear these phrases daily — on talk shows, in X threads, in tech conference keynotes, in Bay Area venture capitalists' blog posts. They present themselves as self-evident truths, yet they conceal a premise that is never made explicit: that progress is a natural force, unidirectional, intrinsically benign, and that any obstacle in its path is by definition a harm to humanity.
But is that really the case? Or are we confusing progress with something far more ordinary and far less noble?
What Progress Is, and What It Has Been #
To answer that, we first need to understand where the very idea of progress comes from. It is not an eternal concept: it is a historical invention, and not a particularly ancient one.
The Enlightenment and the Birth of an Idea #
The idea that human history has a direction — that tomorrow will be better than today — is a product of the eighteenth-century European Enlightenment. Before Condorcet, Voltaire and Kant, time was perceived as cyclical (in the Greco-Roman world) or as decline from a lost golden age. The Enlightenment revolutionised this perception: human reason, systematically applied, could improve the material, moral and political conditions of the species.
Condorcet, in his Esquisse d'un tableau historique des progrès de l'esprit humain (1795), imagined humanity advancing by stages towards perfection through education, science and the abolition of prejudice. It was a powerful and, in many respects, generous vision. But it already contained a problematic seed: the idea that progress was inevitable, almost a law of nature.
Kant was subtler. He distinguished between technical progress and moral progress. In his Idea for a Universal History with a Cosmopolitan Purpose (1784), he suggested that humanity could advance towards a universal civil society — but only through conflict, toil, and above all through institutions capable of channelling what he called humanity's "unsocial sociability." For Kant, progress was not automatic: it required structures, laws, mutual constraints. It required, in a word, politics.
Positivism and the Foundational Fallacy #
It was with Auguste Comte and nineteenth-century positivism that the idea of progress became permanently welded to scientific and technological progress. Comte theorised a "law of three stages" — theological, metaphysical, positive — in which science would progressively replace every other form of knowledge, guiding humanity towards a rational order.
Here lies the root of the fallacy we still drag around today: the identification of technological advancement with improvement of the human condition. A fallacy the twentieth century should have destroyed once and for all — and which, with almost inexplicable stubbornness, continues to thrive.
The Century That Should Have Taught Us Everything #
The twentieth century was the ultimate laboratory for testing the equation "more technology = more progress." The results were unequivocal.
The same science that produced penicillin produced nerve gas. The same engineering that built bridges and aqueducts built the gas chambers — with industrial efficiency, with technical precision, with progress in methods. Nuclear fission gave humanity an extraordinary energy source and, simultaneously, the capacity for self-annihilation.
Günther Anders, in Die Antiquiertheit des Menschen (The Obsolescence of Human Beings, 1956), captured this rupture with breathtaking clarity. Anders formulated the concept of the "Promethean gap": our technical capacity to produce radically outstrips our capacity to imagine the consequences of what we produce. We can build a bomb that kills a hundred thousand people, but we cannot feel — emotionally, morally — what the death of a hundred thousand people means. We have become, Anders wrote, smaller than our products.
Hannah Arendt, analysing the Eichmann trial, revealed something more disturbing still: that the most radical evil of the twentieth century was perpetrated not by monsters but by efficient bureaucrats. The "banality of evil" is, in a sense, organisational progress applied to destruction. Eichmann did not hate the Jews: he optimised logistical processes. He was, in his own way, an innovator.
The Thought That Kills: A Thought Experiment #
Let us take a further step. Let us take it together, with the seriousness it deserves.
Imagine that tomorrow — through some convergence of neuroscience, nanotechnology and brain-computer interfaces — it became possible to kill another human being by sheer force of thought. No physical weapon. No intermediary. A sufficiently focused mental intention, and the other person dies.
Would this be technological progress?
The superficial answer is yes. It represents an advance in understanding the brain, in the capabilities of neural interfaces, in the miniaturisation of technology. It satisfies all the criteria by which we normally define technical progress: it is new, it is more powerful than what came before, it represents a frontier of knowledge.
But something within us — something deep, pre-argumentative, almost visceral — rebels against that answer. We sense that something is wrong. And that "something" is precisely what we need to examine, because it is there that the true nature of progress lies hidden.
Hans Jonas and the Ethics of Responsibility #
Hans Jonas, in The Imperative of Responsibility (1979), anticipated precisely this kind of dilemma. He was writing in an era when genetic engineering and nuclear energy were the frontiers of technology, yet his reasoning applies with striking pertinence to our thought experiment.
Jonas starts from an observation: traditional ethics is inadequate to confront modern technological power. Classical ethics — from Aristotle to Kant — presupposed that nature was substantially invulnerable to human action, and that the consequences of our actions were circumscribed in space and time. But modern technology has made these premises false: today we can alter the climate, modify the genome, and — in our experiment — abolish every barrier between intention and homicide.
From this Jonas formulates his technological categorical imperative: "Act so that the effects of your action are compatible with the permanence of genuine human life on earth." This is not a conservative imperative: it is an imperative of responsibility towards the future. The question is not "can we do it?" but "what kind of world do we create by doing it?"
In the case of the thought that kills, the answer is clear: we would create a world in which human coexistence — the foundation of every civilisation — would become impossible. There would be no safe place left, because the threat would reside in the mind itself. Every human interaction would be poisoned by terror. It would not be the end of technology: it would be the end of society.
Mill's Filter: Harm as the Boundary of Liberty #
John Stuart Mill, in On Liberty (1859), formulated the principle that should be the lodestar of every discussion about progress: individual liberty is sacred, but it ends where harm to others begins. It is the so-called "harm principle," and in its simplicity it contains enormous wisdom.
Applied to our experiment: the capacity to kill with thought is not an expansion of human freedom. It is its absolute negation. If anyone can kill anyone without intermediaries, no freedom exists at all — because freedom presupposes the security of physical existence. You cannot exercise freedom of speech, of movement, of thought if someone can annihilate you with an intention.
Mill teaches us something we ought to have tattooed on our forearms: not every expansion of individual power is progress. Some capabilities, if universally distributed, do not emancipate — they destroy. Genuine progress is not the unlimited increase of power, but the increase of power within the boundaries that make coexistence possible.
Popper: The Open Society and Its Enemies (Technological Ones Included) #
Karl Popper, in The Open Society and Its Enemies (1945), built the most robust philosophical defence of democratic institutions against every form of utopianism — including the technological variety.
Popper was deeply distrustful of grand narratives about inevitable progress. For him, progress was not a triumphal march but a process of conjectures and refutations: we try, we err, we correct. And — crucially — we can only correct if institutions exist that allow us to do so without violence. The open society is one that has mechanisms of self-correction: parliaments, courts, a free press, laws that can be amended.
When someone says that regulations "shackle progress," they are saying — consciously or not — that the mechanisms of self-correction are an obstacle. But an obstacle to what, exactly? If "progress" cannot survive democratic scrutiny, public debate, risk assessment — then perhaps it isn't progress. Perhaps it's just haste dressed up as vision.
Those Who Cry "Progress in Chains": A Taxonomy #
When, in public debate, someone complains that states or supranational organisations are "holding back progress," it is worth asking: who is speaking, and whose interests do they represent?
The Techno-Optimist in Good Faith #
There are certainly people who sincerely believe that technology is the solution to every human problem. The position is understandable — technology has solved enormous problems: infant mortality, famines, epidemics. But techno-optimism becomes dangerous when it morphs into techno-determinism: the idea that technology, left to its own devices, always produces positive outcomes. History says otherwise.
The Entrepreneur Who Mistakes Self-Interest for the Common Good #
Here we enter more delicate territory. When a Silicon Valley CEO denounces European AI regulation as an "obstacle to progress," is he really defending the future of humanity — or defending his own business model? The confusion between private interest and public good is as old as capitalism, but in the tech era it has taken on a peculiarly insidious form: it arrives wearing the clothes of the future, of innovation, of vision.
Marc Andreessen, in his Techno-Optimist Manifesto (2023), took this position to its logical conclusion: markets are the supreme mechanism of progress, regulation is a brake, and those who defend it are "decelerationists" — enemies of the future. A position that has the merit of clarity and the defect of blindness: it entirely ignores the fact that markets, without rules, do not produce progress — they produce concentration of power. Adam Smith already knew this; in The Wealth of Nations, he warned against monopolies with the same vigour with which he championed free trade.
The Ideological Libertarian #
For the radical libertarian, every form of regulation is coercion, and every coercion is evil. A philosophically consistent position — if one accepts the premises. But the premises are fragile: they presuppose that individuals operate in a social vacuum, without asymmetries of power, without externalities, without the possibility that one person's freedom destroys the freedom of many.
Robert Nozick, in Anarchy, State, and Utopia (1974), constructed the most sophisticated version of this position. But even Nozick conceded the necessity of a "minimal state" to protect fundamental rights. The question is: when technology can alter the climate, manipulate the behaviour of billions of people, or abolish every barrier between intention and homicide, is Nozick's "minimal state" still sufficient?
The Anti-Élite Populist #
Finally, there are those who deploy the rhetoric of "stifled progress" in a populist key: the bureaucratic élites of Brussels impose rules that "the people" never asked for. A position that exploits legitimate democratic frustration in order to attack institutions, yet rarely proposes credible alternatives. Because the awkward question is: if not democratic institutions, who should decide the limits of technology? The market? The programmers? The shareholders?
The Bomb That Has Already Gone Off: Social Media and Our Children's Brains #
So far we have been reasoning in the abstract — thought experiments, philosophical hypotheses, future scenarios. But there is no need. The weapon that destroys without firing a shot already exists. It is in our children's pockets. It has a colourful interface, cheerful notifications, and a business model that monetises human attention by converting it into data sold to advertisers.
We are talking about social media. And we must talk — with the bluntness the subject demands — about what they are doing to the brains, the psyches, and the social fabric of an entire generation.
The Submerged Iceberg #
Jonathan Haidt, social psychologist and author of The Anxious Generation (2024), has documented with surgical precision what the epidemiological data show: after more than a decade of stability or improvement, adolescent mental health plummeted in the early 2010s. Rates of depression, anxiety, self-harm and suicide soared, more than doubling on many measures. The spike coincides with the mass adoption of smartphones. Not with the 2008 financial crisis, not with terrorism, not with climate change. With smartphones.
But what we can see — the numbers that already alarm us — is only the tip of the iceberg. Consider what we know.
The US Centers for Disease Control reported that in 2023, over 40% of high school students described persistent feelings of sadness or hopelessness. Among girls, 57% displayed depressive symptoms. Nearly one in three girls reported having "seriously considered" suicide — a 60% increase in the past decade. In the United Kingdom, the NHS Mental Health Survey 2025 revealed that 25.8% of young people aged 16 to 24 are affected by a common mental health disorder, up from 18.9% in 2014. Among young women, the figure reaches 36.1%.
These are the visible data. The ones we measure because someone sought help, went to hospital, filled in a questionnaire. The submerged iceberg is made up of adolescents suffering in silence, developing subclinical disorders, losing cognitive capacity without anyone noticing. Between 70% and 80% of children with mental health disorders never receive any treatment at all.
The Brain as Battlefield #
But the most unsettling dimension of this crisis is not psychological: it is neurological. And here we enter territory that should keep anyone with children, grandchildren or students awake at night.
A study published in JAMA Pediatrics by Professor Eva Telzer's team at the University of North Carolina tracked 169 middle-school students over three years, monitoring their brain activity with functional MRI. The results: adolescents who habitually check social media show significantly different trajectories of brain development compared to those who do not. The affected areas include the amygdala — the centre of fear and emotional reactivity — and the dorsolateral prefrontal cortex, responsible for judgement, reasoning and reward evaluation.
A very recent study, published in March 2026 in NeuroImage, analysed ABCD Study data from over 7,000 American adolescents, finding that greater daily social media use is associated with reduced cortical thickness across a wide array of brain regions. We are not talking about "feeling a bit down." We are talking about structural alterations to the developing brain.
Generation Z — those born after 1995 — were the first generation to have smartphones during puberty. Their brains developed while algorithms designed to maximise engagement competed for their attention every waking moment. Haidt identifies four foundational harms: social deprivation (real relationships replaced by digital ones), sleep deprivation, attention fragmentation, and addiction.
And the research confirms it: repeated consumption of short-form video activates the brain's reward circuitry again and again, leading to dopamine dysregulation, reduced sustained attention, increased impulsivity and disrupted sleep patterns. In practical terms, these young people's brains are being reconfigured to function in ways that compromise cognitive control.
The Slot Machine in Your Pocket #
Here we must be explicit about a point that is too often played down: social media did not become harmful by accident. They were designed to be this way.
The psychological mechanisms that make social media compulsive are the same — identical, not "similar": identical — as those that make slot machines addictive. They are called "variable ratio reinforcement schedules": unpredictable, intermittent rewards of varying magnitude. This is the principle B.F. Skinner documented in the 1950s studying pigeons: of all possible reinforcement schedules, the variable ratio is the most powerful at maintaining behaviour — and the most resistant to extinction.
Every time you scroll through a TikTok or Instagram feed, you are pulling the arm of a fruit machine. You do not know which scroll will bring something interesting. It might be the next one. Or the one after. Or ten scrolls away. This uncertainty generates more dopaminergic activity than a predictable reward. The anticipation, the not knowing, is itself the drug.
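For readers who think in code, the mechanism is simple enough to simulate. Below is a toy sketch (mine, not a model of any real platform) comparing a fixed-ratio schedule, which rewards every Nth action, with a variable-ratio schedule, which rewards after an unpredictable number of actions with the same average. The payouts are identical; only the uncertainty differs.

```python
import random
import statistics

def fixed_ratio(n_actions: int, ratio: int) -> list[int]:
    """Reward on every `ratio`-th action: perfectly predictable."""
    return [i for i in range(1, n_actions + 1) if i % ratio == 0]

def variable_ratio(n_actions: int, mean_ratio: int, seed: int = 42) -> list[int]:
    """Reward with probability 1/mean_ratio on each action:
    same average payout, but the timing is never predictable."""
    rng = random.Random(seed)
    return [i for i in range(1, n_actions + 1) if rng.random() < 1 / mean_ratio]

def gaps(reward_times: list[int]) -> list[int]:
    """Actions elapsed between consecutive rewards."""
    return [b - a for a, b in zip(reward_times, reward_times[1:])]

fixed = fixed_ratio(10_000, ratio=10)
variable = variable_ratio(10_000, mean_ratio=10)

# Both schedules pay out roughly once every 10 actions on average...
print(len(fixed), len(variable))

# ...but only the variable schedule keeps the next reward uncertain.
print(statistics.stdev(gaps(fixed)))     # 0.0: you always know when
print(statistics.stdev(gaps(variable)))  # ~9-10: you never know when
```

Same average reward, radically different psychology: with the fixed schedule you always know when the next payout comes; with the variable one, the next scroll might always be the lucky one. That uncertainty, Skinner found, is precisely what makes the behaviour so resistant to extinction.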
The crucial difference: casinos are regulated. They must declare the odds. They cannot admit minors. They cannot operate without a licence. Social media platforms have none of these constraints. Natasha Schüll, author of Addiction by Design, has put it plainly: social media companies use exactly the same methods as the gambling industry to keep users online, because in the attention economy revenue is a function of time spent on the platform.
The pull-to-refresh gesture that mimics pulling a slot machine lever. The red notification badges that exploit the Zeigarnik effect — the psychological tension of incomplete tasks. Snapchat streaks that turn friendship into a gamified metric. TikTok's algorithm, which learns within hours precisely which content will keep you glued to the screen.
Chamath Palihapitiya, former vice-president of growth at Facebook, admitted it publicly: the short-term, dopamine-driven feedback loops we have created are destroying how society works. Sean Parker, Facebook's first president, declared that the objective was "to consume as much of your time and conscious attention as possible" and that the platform exploits "a vulnerability in human psychology." These are not accusations from outsiders: they are admissions by the people who built the systems.
And the internal research confirmed it: documents made public by whistleblower Frances Haugen showed that Instagram knew it was making body image issues worse for one in three teenage girls. They knew. And they chose profits.
The Difference from Conventional Weapons #
This is where the analogy with the atomic bomb from our thought experiment stops being hyperbole.
A conventional weapon kills the body. The damage is visible, immediate, documentable. The world notices. There are photographs, casualty figures, memorials. The damage activates a social response: wars are declared, treaties are signed, monuments are erected.
Social media operate on an entirely different register. The damage is invisible, gradual, cumulative and — this is the critical point — normalised. Nobody sees a neuron being reconfigured. Nobody hears the sound of a prefrontal cortex thinning. Nobody perceives sleep deprivation as an emergency when it affects a hundred million adolescents simultaneously. The damage hides behind the everyday: "it's just the phone," "all kids are like that," "we used to sit in front of the telly for hours."
But the telly was not engineered to create addiction through variable ratio reinforcement schedules. The telly did not follow you into your bedroom at three in the morning. The telly did not have an algorithm that learnt exactly which insecurities to exploit to keep you staring at the screen. The telly did not give you a real-time numerical metric of your social worth.
And then there is the question of scale. A bomb destroys a city. Social media are altering the brain development of an entire generation on a planetary scale. Not a specific group, not a geographically bounded population: anyone, anywhere in the world, between the ages of 3 and 20 who has a connected device. The World Health Organisation surveyed nearly 280,000 young adolescents across 44 countries: 11% showed signs of problematic social media use. At global scale, these are numbers that dwarf any other public health emergency.
And a bomb goes off once. Social media operate continuously — 24 hours a day, 7 days a week, 365 days a year. There is no ceasefire. There is no peace treaty. The exposure is chronic, uninterrupted, and starting ever earlier: children of 3 or 4 with tablets programmed to entertain them, 8- or 9-year-olds with their first smartphone, 11- and 12-year-olds already immersed in social platforms. Sixty-four per cent of American children aged 11–12 already use social media.
The Destruction of Families #
There is an aspect the numbers do not capture, one that concerns the connective tissue of society itself: the family.
Anyone with school-age children knows exactly what I mean. The smartphone has become the principal domestic battleground. This is not a matter of "rules" or "screen time limits" — it is structural. The device is engineered to be more attractive, more stimulating, more rewarding than any family interaction. A parent reading a bedtime story is competing with an algorithm that has analysed billions of interactions to know precisely what captures that child's attention.
It is an asymmetric war, and the family is losing it. Parents feel powerless, inadequate, perpetually behind a technology that evolves faster than their ability to understand it. Children feel controlled, misunderstood, cut off from their peers if they lack access to the platforms. The result is a daily erosion of trust, dialogue and connection — the very things that make a family a family.
Here we return to Anders and his Promethean gap: today's parents are the first generation in history to have to protect their children from a threat they can neither see nor fully comprehend. It is not like protecting a child from traffic or alcohol: those are visible threats with known dynamics. Here the threat is an invisible architecture of algorithmic persuasion operating directly on the neurological reward circuits. It is like asking a parent in 1945 to shield their child from radiation without knowing what radiation is.
The World Is Already Reacting #
The world — slowly, far too slowly, but undeniably — has begun to react.
The US Surgeon General, Vivek Murthy, compared social media addiction to cigarettes and called for the platforms to carry a health warning. In 2023 he cautioned that children spending more than three hours a day on social media double their risk of mental health problems.
In December 2025, Australia became the first country in the world to ban social media access for under-16s. The law requires platforms to take "reasonable steps" to prevent minors from creating or maintaining accounts, with fines of up to AUD 49.5 million for non-compliant companies. To date, over 4.7 million accounts have been deactivated, removed or restricted.
France, Norway, Denmark, Malaysia, Spain, Indonesia and Italy itself are considering or implementing similar measures. It is a global movement that, tellingly, is encountering resistance exactly where one would expect: from the platforms themselves (Reddit has filed a challenge in the Australian High Court) and from libertarian groups denouncing the infringement of minors' freedom of expression.
It is the same dynamic we saw with tobacco, with asbestos, with lead in petrol: the industry that causes the harm fights regulation by invoking individual liberty and progress, while the harm accumulates silently in the bodies and brains of the victims.
The Question We Must Ask Ourselves #
If progress is the expansion of our collective human capacity to live freely, with dignity and sustainably, then are social media — in their current form — progress?
The answer, it seems to me, is no. Not because the technology of digital connection is inherently bad, but because the way it has been implemented — engagement algorithms that exploit neurological vulnerabilities, business models built on addiction, a total absence of accountability for the damage caused — represents the antithesis of progress. It is the deployment of the most advanced neuroscientific knowledge to reduce human capacity rather than expand it.
If we could see the damage — if every thinning of the prefrontal cortex made a noise, if every episode of adolescent self-harm were a visible explosion — we would have reacted with the same urgency with which we react to a bomb. But the damage is silent. And silence is the greatest ally of those who cause and monetise it.
The thought experiment of the thought that kills, on reflection, is not so hypothetical after all. We have already given corporations the power to alter the brain development of billions of young human beings. The fact that they do so to sell advertising rather than out of malice does not make the damage less real. It only makes it harder to name.
Europe and the Humanist Choice #
This is where we must talk about Europe. Not the Europe people grumble about — the Europe of triplicate forms and regulations on the curvature of bananas — but Europe as a philosophical project.
The European Regulatory Framework as a Choice of Civilisation #
In recent years, the European Union has produced a body of technology regulation without parallel anywhere in the world: the GDPR for personal data protection, the Digital Services Act and Digital Markets Act for platform regulation, the AI Act for artificial intelligence, the Cyber Resilience Act for the security of digital products, and the updated Product Liability Directive extending to software and AI.
Seen from Silicon Valley, this is "bureaucracy stifling innovation." Seen from the perspective of moral philosophy, it is something profoundly different: the most ambitious attempt in history to apply the principles of European humanism to the technological revolution.
Take the AI Act. Its structure — risk-based, with outright prohibitions on the most dangerous applications (social scoring, subliminal manipulation, mass biometric surveillance) and escalating requirements for high-risk ones — is a legislative translation of Jonas's principle. It does not say "do not innovate": it says "innovate, but not at the expense of human dignity." It is not a brake: it is a rudder.
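For the engineers in the room, the Act's architecture can be summarised in a few lines. This is a deliberately simplified sketch: the tier names follow the regulation's risk-based structure, but the example use-cases and the `classify` helper are my own teaching illustration, not legal advice.

```python
from enum import Enum

class RiskTier(Enum):
    PROHIBITED = "unacceptable risk: banned outright"
    HIGH = "high risk: conformity assessment, documentation, human oversight"
    TRANSPARENCY = "limited risk: disclosure obligations"
    MINIMAL = "minimal risk: no specific obligations"

# Illustrative mapping only: the Act defines these categories in
# detailed legal language; this table is a teaching simplification.
EXAMPLES = {
    "social scoring of citizens": RiskTier.PROHIBITED,
    "subliminal manipulation": RiskTier.PROHIBITED,
    "CV screening for hiring": RiskTier.HIGH,
    "customer service chatbot": RiskTier.TRANSPARENCY,
    "spam filter": RiskTier.MINIMAL,
}

def classify(use_case: str) -> RiskTier:
    """Hypothetical helper: look up a use-case's tier."""
    return EXAMPLES.get(use_case, RiskTier.MINIMAL)

for case, tier in EXAMPLES.items():
    print(f"{case:30s} -> {tier.name}: {tier.value}")
```

The design choice worth noticing is that obligations scale with risk rather than applying uniformly: the rudder, not the brake.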
The extension of the Product Liability Directive to software is another example. For the first time, software producers will be liable for damage caused by their products — exactly as manufacturers of cars, medicines and household appliances already are. For those who have internalised the notion that software is an ethereal, immaterial entity exempt from the rules of the physical world, this is scandalous. For those who believe that power must be accompanied by responsibility, it is a civilisational advance.
The Digital Services Act and Digital Markets Act, moreover, are a direct response to the neurological catastrophe we have described. The DSA imposes transparency obligations on recommendation algorithms, bans the profiling of minors for advertising purposes, and gives users the option to switch off personalised recommendation systems. The DMA aims to break the monopolistic power of the major platforms — the "gatekeepers" who control digital access for billions of people. Are these imperfect instruments? Certainly. But they are the only instruments a democracy has at its disposal to respond to a power that, left unchecked, is quite literally rewiring the brains of its youngest citizens.
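That recommender obligation translates directly into software architecture. A minimal sketch, assuming a hypothetical feed service: the non-profiled option (here, reverse-chronological) must exist as a first-class code path that the user, not the platform, selects.

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    engagement_score: float  # what a profiling-based ranker would optimise

def ranked_feed(posts: list[Post], user_profile: dict) -> list[Post]:
    """Profiling-based ranking: the default on most platforms."""
    return sorted(posts, key=lambda p: p.engagement_score, reverse=True)

def chronological_feed(posts: list[Post]) -> list[Post]:
    """A feed option not based on profiling: newest first, identical
    for every user. One way (illustrative only) to provide the
    non-profiled alternative the DSA requires of large platforms."""
    return sorted(posts, key=lambda p: p.created_at, reverse=True)

def get_feed(posts: list[Post], user_profile: dict,
             profiling_enabled: bool) -> list[Post]:
    # The user's choice, not the platform's, selects the branch.
    if profiling_enabled:
        return ranked_feed(posts, user_profile)
    return chronological_feed(posts)
```

A small branch in code, but a large shift in power: the ranking objective becomes something the user can decline.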
Accessibility as a Paradigm of Genuine Progress #
The European Accessibility Act deserves special mention, because it perfectly embodies the difference between technical progress and human progress.
The EAA requires digital products and services to be accessible to people with disabilities. For a tech company focused exclusively on speed and efficiency, this is a cost, a constraint, a "drag on progress." But let us pause for a moment: is an innovation that excludes a portion of humanity really progress?
If we define progress as the expansion of human possibilities — not the possibilities of some humans, but of humanity as a whole — then accessibility is not a constraint on progress. It is progress. A digital product that 15% of the world's population cannot use is not innovative: it is incomplete.
Martha Nussbaum, in her capabilities approach (developed with Amartya Sen), formulated a theory of progress that points in precisely this direction: progress is measured not by GDP, not by the number of patents, not by processor speed, but by the effective capability of individuals to live a life worthy of being lived. If a technology increases this capability for everyone, it is progress. If it increases it for some at the expense of others, it is power — not progress.
Confusing Speed with Direction #
There is a category error that pervades the entire contemporary discourse on innovation: the confusion between speed and direction.
"Move fast and break things" — Facebook's original motto — is the epitome of this confusion. Moving fast is only a virtue if the direction is right. Otherwise, it is simply running towards a cliff with greater efficiency.
When Sam Altman says that "regulation risks slowing the development of AI," the question nobody asks is: slowing it towards where? If the direction is the concentration of power in the hands of a few companies, the reinforcement of algorithmic bias, ubiquitous surveillance masquerading as personalisation — then slowing down is not a harm. It is wisdom.
Epicurus — the most materialist and least mystical of the ancient philosophers — taught that the purpose of life is ataraxia: tranquillity of mind, freedom from disturbance. Not the accumulation of power, not the conquest of nature, not speed. Tranquillity. A lesson the tech world appears to have forgotten entirely: not everything that is possible is desirable. Not everything that is fast is good. Not everything that is new is better.
Simone de Beauvoir, in The Ethics of Ambiguity (1947), offers another vital tool: freedom is never absolute, but always situated. We are free only in the context of relationships with other free beings. My freedom has meaning only if I recognise and preserve the freedom of others. Translated to the technological context: my freedom to innovate has meaning only if it does not destroy the freedom — the safety, the dignity, the autonomy — of those who will be affected by my innovation.
Progress as Collective Construction #
It is time to propose an alternative definition. If progress is not simply technical innovation, if it is not speed, if it is not the accumulation of capability — what is it?
I propose this: progress is the expansion of our collective human capacity to live freely, with dignity and sustainably.
Every word matters. Expansion, because progress is an enlargement, not a replacement; it does not demolish what works in pursuit of the new. Human capacity, not the capacity of machines or markets but the capacity of real people to act in the world. Collective, because if it is not for everyone it is not progress: it is privilege. Freely, in the Millian sense of freedom to think, speak and live as one wishes, within the limits of harm to others. With dignity, in the Kantian sense of every person as an end in themselves, never merely a means. Sustainably, because it does not sacrifice the future for the present, does not steal from grandchildren to give to children.
With this definition, a great deal changes. Penicillin is progress. Universal suffrage is progress. Clean drinking water for all is progress. Digital accessibility is progress. The GDPR is progress.
The atomic bomb is not progress. Mass surveillance is not progress. An algorithm that maximises engagement through addiction is not progress. A platform that thins the prefrontal cortex of millions of adolescents to sell advertising is not progress. And the thought that kills — our opening hypothesis — is not progress. It is the end of progress. It is the end of everything.
Humanism as Compass, Not Chain #
Let us return, in closing, to the opening question. When someone says that the state, or Europe, or the United Nations is "shackling progress," what are they really saying?
They are saying — almost always — that their way of innovating, their business model, their vision of the future is being subjected to constraints they dislike. They are confusing their own freedom of action with the freedom of humanity. They are mistaking the absence of limits for the absence of oppression.
But the absence of limits is not freedom: it is the law of the strongest. It is the Hobbesian state of nature — the bellum omnium contra omnes, the war of all against all — in which life is "solitary, poor, nasty, brutish, and short." Institutions, laws, mutual constraints are not chains: they are the foundations of coexistence. They are what makes possible not merely survival, but human flourishing.
Humanism — the real thing, the tradition that begins with Pico della Mirandola and runs through Montaigne, Hume, Voltaire, Mill, Arendt and de Beauvoir to Nussbaum — has never been against knowledge or technology. It has been against the use of knowledge and technology against the human. Humanism is the compass that enables us to distinguish genuine progress from the mere accumulation of power.
Those who build technology today bear an immense responsibility — probably the greatest any generation has ever borne. Not because technology is evil, but because technology is powerful. And power without responsibility, without limits, without the capacity to self-correct, is not progress.
It is just danger moving fast.
Postscript: A Personal Note #
I have worked in technology for twenty years. I build software. I manage infrastructure. I configure CI/CD pipelines, write specifications, plan sprints. Technology is my trade and, in many respects, my passion.
Precisely because I know it intimately — its wonders and its miseries, its power and its fragility — I refuse, with every fibre, the idea that regulating it is a hostile act. When I implement the GDPR in a project, I am not "stifling innovation": I am protecting the people for whom that project exists. When the AI Act asks me to document the risks of a high-risk system, I am not "wasting time": I am doing my job with the seriousness it deserves. When the EAA obliges me to make an interface accessible, I am not "adding costs": I am including human beings who would otherwise be shut out.
But there is another reason, more personal and more urgent, why this subject burns. I am also a father. And as a father, I live every day with the awareness that the digital world in which my child will grow up was designed by people who do the same job I do — people who know exactly what they are doing to the reward circuits of a developing brain. I know the design patterns, I know the metrics, I know the language in which "retention" and "engagement" strategies are discussed. I know that behind neutral words lie deliberately engineered mechanisms of addiction. And I know that my technical expertise is not sufficient to protect a child from an industry that has invested billions learning how to capture their attention.
This is why I believe that regulation is not merely legitimate: it is an act of civilisation. Those who build technology have a moral obligation to build it for human beings — not against them. And when they fail to do so, it is right and necessary that society — through its imperfect, slow, maddening institutions — should intervene.
Genuine progress has never been fast. It has been arduous, contentious, full of second thoughts and course corrections. It has been — to use Popper's metaphor — a process of conjectures and refutations, not a triumphal march. And the institutions that govern it — imperfect, slow, occasionally exasperating — are the price we pay for not entrusting the fate of humanity to whoever happens to be running fastest.
That is not a high price. It is a bargain.
"The degree of civilisation of a society can be measured by the amount of power it is willing to relinquish." — freely inspired by Norbert Elias, The Civilizing Process (1939)