Nobody knows what they are doing. Not in the casual, half-joking sense people use in corridor conversations, when someone shrugs and says we are all making it up as we go. In a more precise, more unsettling, and fundamentally structural sense: the complexity of the technical objects we have built has exceeded the capacity of any single human being to understand them. Not for lack of intelligence or training. For a reason that has to do with the very nature of these objects.
For most of the history of technology, it was possible to find at least one person who understood an artefact in its entirety. The Roman aqueduct, the mechanical loom, the steam locomotive, even a Second World War aircraft: complicated systems, certainly, but ultimately reducible to a unitary body of knowledge. A sufficiently prepared engineer could trace the causal chain from one end to the other. Could say: I know how it works, I know why it works, I know what happens if it breaks. This possibility was not a detail. It was the foundation on which the very idea of technical competence rested, and alongside it the idea of responsibility.
That foundation is gone.
The threshold #
The breaking point was not sudden. It accumulated over decades, invisible in the way all changes that happen by layering rather than fracture are invisible. Every layer of abstraction added on top of the previous one solved a problem and created another: it made the layer below opaque. The programmer writing application code today does not truly know the framework they use. Whoever knows the framework does not know the runtime. Whoever knows the runtime does not know the operating system, not all the way down. Whoever knows the operating system does not know the processor microcode. And nobody, absolutely nobody, knows the entire chain of dependencies that a single install command pulls from the network and makes part of their software.
This is not polemical exaggeration. It is a factual description. An average software project in 2026 includes hundreds of direct dependencies and thousands of transitive ones. Each of these was written by someone else, with their own dependencies, their own vulnerabilities, their own architectural decisions made in a context that no downstream user will ever reconstruct. Running software today means trusting. Not in the noble sense of trust between people, but in the epistemological sense of someone who gives up verifying because verification is materially impossible.
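The scale is easy to make visible. What follows is a rough sketch, assuming an npm-style JavaScript project in which a single install command has already populated node_modules: it simply counts what that one command placed on disk. The directory layout it relies on is an assumption about a common convention, not something the ecosystem guarantees.

```python
# A rough sketch, assuming an npm-style layout in which `npm install` has
# already populated node_modules/. It counts every installed package,
# including the nested copies that transitive dependencies bring with them.
from pathlib import Path

def count_installed_packages(project_root: Path) -> int:
    count = 0
    # Unscoped packages: .../node_modules/<name>/package.json
    for manifest in project_root.rglob("node_modules/*/package.json"):
        if not manifest.parent.name.startswith("@"):  # skip scope folders themselves
            count += 1
    # Scoped packages: .../node_modules/@scope/<name>/package.json
    for manifest in project_root.rglob("node_modules/@*/*/package.json"):
        count += 1
    return count

if __name__ == "__main__":
    print(count_installed_packages(Path(".")))
```

On an ordinary web application, a count like this tends to land in the hundreds or thousands, which is precisely the point: nobody chose most of those packages, and nobody has read them.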
There was an era when the word "engineer" implied the possibility of total control over one's object of work. That era is over. It will not return.
Knowledge as an illusion of scale #
The problem is not that we know little. It is that the kind of knowledge we possess is not commensurate with the complexity of what we operate. We know things, certainly. We know many things. But the knowledge needed to comprehend a modern distributed system is not the sum of individual knowledge: it is a kind of knowledge that would require a single mind capable of holding together, simultaneously, layers that are by their very nature designed to be separate.
Abstraction, the fundamental mechanism of computational thinking, works precisely because it hides. This is both its merit and its trap. A well-designed interface frees you from needing to know what happens underneath. But when something goes wrong, when a system behaves in unexpected ways, you discover that "underneath" there is a world that nobody in the room knows, and that perhaps nobody in the world knows in its entirety. Not because the knowledge does not exist, but because it is distributed in an irrecoverable way among thousands of people who have never spoken to one another.
This is a new epistemological condition. It has no precedent in the history of technology, and few precedents in the history of thought. The closest may be the problem of the division of labour as Adam Smith described it, but with a substantial difference: Smith was talking about a production process in which each worker understood their own piece. Here, nobody truly understands even their own piece, because their own piece rests on other pieces that escape comprehension by definition.
The broken chain of responsibility #
If nobody understands the system in its entirety, the question of who is responsible has no satisfying answer. Not because candidates are lacking, but because the very concept of responsibility presupposes something that is missing here: the possibility of knowing.
Responsibility, at its root, means the capacity to respond. To give an account of one's choices. But how does one give an account of a choice made in a context that could not be fully understood? How does one answer for a system whose behaviour emerges from interactions between components that no single actor designed or foresaw?
The law, which has thought about this far more than computer science has, developed over time the idea of due diligence: you do not need to understand everything, you just need to act with the care reasonably expected of someone in your role. It is a pragmatic compromise and for many centuries it worked. But it worked because the gap between what could be known and what had to be decided was bridgeable with study and experience. Today that gap is not bridgeable. It is not a gap that closes with more training or more effort: it is a structural gap, produced by the complexity of the object, not by the inadequacy of the subject.
Recent European legislation intuits this, even if it does not say so openly. The Cyber Resilience Act, the Product Liability Directive, the AI Act: all these regulatory instruments attempt to reconstruct chains of responsibility in a context where those chains are physically broken. They do so with classical tools: documentation obligations, conformity assessments, risk registers. These are serious and in many cases welcome attempts. But they rest on a silent assumption that deserves to be brought into the light: they assume that somewhere, there is someone who knows.
The manufacturer must document the risks of their product. But the manufacturer does not know the transitive dependencies of their own code. The importer must verify conformity. But conformity is defined against standards that describe system properties that nobody can verify in practice. The user must give informed consent. But the information needed for genuine consent would require competences that not even those who wrote the software possess.
This is not cynicism. It is an accurate description of a situation we have created step by step, in good faith, solving each time the problem immediately in front of us and adding each time a gram of opacity to the overall system.
The myth of specialisation #
A response one hears often is that specialisation solves the problem. Nobody needs to understand everything: it is enough that each person understands their own piece well, and the whole will work. It is the principle of the division of labour applied to knowledge, and it is enormously attractive because it allows one to avoid confronting the underlying question.
But it does not work. It does not work because the boundaries between the "pieces" are arbitrary and permeable. A security vulnerability crosses every layer. A regulatory requirement cuts across frontend, backend, infrastructure, third-party suppliers. An architectural decision made five years ago, in a context that no longer exists, today constrains choices that whoever made it could not have imagined.
Specialisation works when components are genuinely independent. Software is not made of independent components. It is made of dependencies, in the most literal sense: every part depends on other parts, and the behaviour of the whole is not deducible from the behaviour of the parts. It is what systems theory calls emergence, and emergence is precisely the adversary of specialisation. The most insidious bug is always the one that lives in the space between specialisations, where nobody looks because nobody thinks it is their territory.
The incompetence of those who legislate #
There is a particular case that deserves attention, not in an accusatory spirit but because it is structurally illuminating: the incompetence of those who write the rules.
A legislator who writes rules about software is in the same epistemological position as everyone else: they cannot comprehend the system in its entirety. But unlike a programmer or a software architect, the legislator does not even have the partial knowledge that comes from daily use. They operate on descriptions of descriptions, on abstractions of abstractions, mediated by consultants who in turn mediate the knowledge of specialists.
This is not an argument against regulation. On the contrary: it is an argument about the nature of possible regulation. Regulating what one does not fully understand is not a failure, it is the starting condition. The question is not whether the legislator is competent — they cannot be, in the full sense of the word, and not through any fault of their own. The question is whether the rules they produce are robust enough to function despite the structural incompetence of those who wrote them, those who must apply them, and those who must verify them.
Sometimes the answer is yes. The GDPR, with all its limitations, introduced a principle of caution that works precisely because it does not require technical understanding: treat personal data as though it were dangerous, regardless of whether you understand the specific mechanisms of the danger. It is a regulation designed for incompetence, and for that reason it works better than many regulations that presuppose competence.
Socrates in the development cycle #
There is a sentence attributed to Socrates with the frequency and imprecision reserved for all sentences that are too useful: "I know that I know nothing." It is cited as a gesture of humility, sometimes as intellectual coquetry. But in its most radical version, the one in Plato's Apology, the point is different and far more uncomfortable: Socrates does not simply say that his knowledge is limited. He says that those who believe they know are in a worse condition than those who know they do not know, because to ignorance they add the illusion.
Applied to the technological present, the Socratic thesis acquires a weight that Plato could not have imagined. The software industry is built on a double illusion of knowledge: that of those who build the systems, who believe they control them, and that of those who use them, who believe they understand them. Both illusions are functional. Without the first, nobody would write code, because complete honesty about one's own ignorance would be paralysing. Without the second, nobody would use software, because authentic informed consent would produce mass refusal.
We function, as individuals and as an industry, thanks to a voluntary suspension of epistemological disbelief. Not unlike the way finance functions, or politics, or any other complex system in which trust replaces understanding.
But there is a difference. Finance and politics have developed over time an institutional awareness of their own epistemological fragility. Central banks exist because we know the financial system does not self-regulate. Constitutions exist because we know that power does not self-limit. Computing has not yet developed anything equivalent. It has standards, it has best practices, it has certification frameworks: all tools that presuppose the possibility of knowledge rather than reckoning with its impossibility.
Deciding without knowing #
The daily condition of those who work in technology is this: deciding without knowing. Not in the dramatic sense of a blind gamble, but in the ordinary, slightly wearing sense of someone who every day makes choices with multi-year consequences based on information they know to be incomplete, in contexts they know will change, for requirements they know to be provisional.
An estimate is a declaration of subjective probability passed off as a prediction. An architectural choice is an act of faith in the stability of conditions that are not stable. A refactoring is a wager that present costs will be repaid by future benefits that nobody can guarantee. Every sprint is an exercise in applied epistemology under time constraint, conducted by people who have not studied epistemology and who are not paid to reflect on the conditions of possibility of their own knowledge, but to produce results.
This is not an indictment. It is a description. And the point is not that things should be different: it is that they cannot be different. Structural incompetence is not a problem to be solved. It is the condition in which we operate, and in which we will continue to operate for as long as we build systems whose complexity exceeds the comprehension of those who build them.
The question, then, is not how to eliminate incompetence. It is how to inhabit it.
Inhabiting incompetence #
If competence in the classical sense — complete mastery of one's domain — is no longer attainable, then what we call by that name has become something else. Not a knowing, but a knowing-how-to-act in the absence of knowing. A form of practical wisdom that resembles Aristotle's phronesis more than his episteme: not knowledge of first causes, but the capacity to act well in particular and uncertain situations.
The good technologist, today, is not the one who knows the most. It is the one who manages their own ignorance best. Who knows where the boundaries of their understanding lie, who knows how to ask, who knows when to stop, who knows how to build systems that fail predictably rather than catastrophically. These are all competences, but they are competences about one's own incompetence. They are meta-competences, if one wants to use an ugly term for an important idea.
And here a paradox opens up, one that deserves to be looked at squarely. The professional culture of software rewards knowing and punishes not-knowing. Whoever admits they do not understand loses credibility. Whoever says "I don't know" in a meeting is perceived as weak. Whoever estimates honestly is punished by comparison with whoever estimates optimistically. The entire incentive system is built to mask incompetence rather than manage it, which produces the opposite result: unrecognised, unmanaged, unmetabolised incompetence. Dangerous incompetence.
A mature professional culture would do the reverse. It would start from the assumption that nobody understands the system in its entirety, and build processes, tools, institutions designed to function under that condition. This is not utopia: it is the engineering of ignorance, and it is exactly what we do when we write automated tests (we verify because we do not trust our own understanding), when we conduct code reviews (we look for the errors that our blind spots prevent us from seeing), when we adopt the principle of least privilege (we limit the damage our ignorance can cause).
These are not palliatives. They are the most sophisticated practices the software industry has produced, and they are all, at root, techniques for managing incompetence. Nobody calls them that, because the name would be uncomfortable. But that is what they are.
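To make the claim concrete, here is a minimal, hypothetical sketch of a test written in that spirit. The function and the invariant are invented for illustration; what matters is the shape: a property asserted precisely because we do not trust our own understanding of the layers below.

```python
# A hypothetical sketch: instead of asserting that we understand how path
# handling works all the way down, we assert a property that must survive
# whatever the layers underneath actually do.
import unittest
from pathlib import PurePosixPath

def normalise(raw: str) -> str:
    # Stand-in for the code under test. In a real system this would lean on
    # a framework, a runtime, an operating system: layers nobody fully knows.
    return str(PurePosixPath(raw))

class NormaliseInvariants(unittest.TestCase):
    def test_idempotent(self):
        # Normalising twice must change nothing. We can check this without
        # holding a complete model of the implementation in our heads.
        for raw in ["./x", "a//b", "a/b/../c", "/"]:
            once = normalise(raw)
            self.assertEqual(once, normalise(once))

if __name__ == "__main__":
    unittest.main()
```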
Honesty as method #
Perhaps the only available response to structural incompetence is not a solution but an attitude: epistemological honesty as a daily practice. Knowing that one does not know, not as an empty formula, but as the operational starting point of every decision.
This does not mean paralysis. It means deciding in the knowledge that one is deciding in the dark, and building safety nets proportionate to that awareness. It means documenting not only what was done, but why it was done and on what assumptions — because those assumptions will prove wrong and whoever comes after will need to understand not the solution, but the reasoning that produced it. It means ceasing to treat estimates as promises and architectures as monuments.
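What that can look like in practice, in a minimal and entirely hypothetical sketch: a decision record kept next to the code, naming not only the decision but the assumptions it stands on and a date by which they should be doubted again. The structure and the field names below are illustrative, not a standard.

```python
# A hypothetical sketch of a decision record: the point is not the format,
# but that the assumptions are written down where whoever comes after will
# find them once those assumptions have stopped being true.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class DecisionRecord:
    title: str                       # what was decided
    rationale: str                   # why it seemed right, in the context of the time
    assumptions: list[str] = field(default_factory=list)  # what we believed but could not verify
    revisit_by: date | None = None   # when the assumptions deserve fresh suspicion

record = DecisionRecord(
    title="Keep the session store on a single Redis instance",
    rationale="Current traffic fits on one node; operational simplicity beats premature sharding.",
    assumptions=[
        "Peak concurrent sessions stay well below the node's capacity",
        "Losing sessions during a failover is acceptable to the product team",
    ],
    revisit_by=date(2027, 1, 1),
)
```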
This is not a revolutionary proposal. It is the description of what the best people already do, often quietly, often against the grain of the culture that surrounds them. Structural incompetence is not an enemy to be defeated. It is the ground we walk on, and the only choice available is between walking on it with our eyes open or with our eyes shut.
We, as a species, have built a world we are unable to understand. This is not a tragedy. It may be the most human thing we have ever done.