Hands and the Machine #
My grandfather was a carpenter. When you asked him how he could tell whether a piece of wood was good, he didn’t pull out theories. He’d stop, turn it over in his hands, smell it, and then say: “you can feel it.”
Every time I think back to that scene it makes me smile, and then I get this kind of strange nostalgia. Because I’ve been doing software for twenty years, and when a client asks me how I know whether a system is good, I wish I could answer the same way. I wish I could say “you can feel it” and leave it at that.
Except the truth is more complicated, and maybe that’s exactly the point.
You can’t touch software. You can’t smell it. You can’t hold it up to the light to look for the grain. And yet it’s everywhere. It’s the most powerful and most invisible thing we’ve built.
And when something is that invisible, trust becomes everything.
The world runs on things you can’t see #
This morning you had breakfast. The milk you bought got to the supermarket by moving through a logistics chain run by software. The payment at the checkout went through more systems than we imagine, often without anyone even noticing.
The traffic light you passed on your way out executes an algorithm. The elevator has firmware. Your salary is a number in a database.
I’m not saying this for dramatic effect. That’s just what the world is like now.
And here comes the paradox that unsettles me a bit. Almost no one, among the people who make decisions about how society works, has a deep understanding of how these systems work. Not out of stupidity or laziness. More because of a structural flaw in our culture: for years we treated technology as something for technicians, a department, a corner of the map.
In the meantime it’s become the connective tissue of everything.
When a board of directors approves an ai-based system to filter job applications, is it making a technological decision? Yes. But it’s also making an ethical, legal, social decision. It’s deciding which biases are acceptable, what margin of error is tolerable, how much opacity is admissible in a process that changes people’s lives.
And often it doesn’t know it.
What happens when no one understands the machine #
There’s a story we know well in our industry, but that we rarely tell outside our own circles.
Inside almost every system you use, from your bank’s website to the app you order dinner with, there are small pieces of code written by people you’ll never meet. They’re open source libraries: free, shared components, often maintained by single individuals in their spare time.
Some time ago someone tried to slip a backdoor into one of these libraries, a component used by millions of servers around the world. The attempt was discovered almost by chance: an engineer noticed that logging in to a server took about half a second longer than it should. Half a second.
From that half second they traced it back to a sophisticated operation, probably state-sponsored, that could have compromised critical infrastructure on a global scale.
The thing that should keep you up at night isn’t that someone tried. It’s that the entire security chain depended on one person, a volunteer, who maintained that code in their spare time while fighting burnout.
It’s not an isolated anecdote. It’s a model. And I often wonder how long it can hold.
ai isn’t intelligent (and that’s fine) #
Here we need to be clear, because marketing has done an incredible job muddying the waters.
The systems we call “artificial intelligence” don’t think. They don’t understand. They don’t have intentions, desires, goals of their own. They’re statistical machines of unprecedented power, capable of finding patterns in amounts of data no human being could process, and of generating results that appear intelligent.
This distinction isn’t academic. It’s practical. It’s one of those things that changes how you make decisions.
If you think ai understands, you’ll end up treating it like an expert. You’ll trust its judgment. You’ll delegate choices to it. And when it gets things wrong, because it does get things wrong, often in subtle ways, you won’t have the tools to notice.
If instead you see it for what it is, a hugely powerful tool, then everything snaps back into a healthier perspective. A tool has to be guided, checked, secured. A scalpel is extraordinary in a surgeon’s hands, and it’s a danger in anyone else’s. The difference isn’t in the scalpel.
For those of us who write software, that awareness changes the job. Maybe we don’t write every single line anymore, or at least not in the same way. But we have to understand every single line, because we’re the ones accountable for what the system does.
The machine generates. The human guarantees.
And that guarantee carries a weight that no statistical model can take on.
Europe does something brave (and almost no one notices) #
While American and Chinese big tech race ahead, Europe does something different. It writes rules.
I know, the instinctive reaction is a yawn. Rules, bureaucracy, slowness, yet another brake on innovation.
But stop for a moment. Maybe, and I do mean maybe, it’s one of the few truly political moves we’re seeing.
Europe is saying that technology is not above the law. That if you produce software that makes decisions about people, you must be able to explain how it works. That if your digital product has a vulnerability, you’re responsible for it, the way a car manufacturer is responsible for a defect in the brakes.
AI Act, Cyber Resilience Act, Product Liability Directive, European Accessibility Act. Dry names, yes. But behind them there’s a very concrete insight: in the twenty-first century, regulating technology means regulating society.
For those who run businesses, this means costs and complexity—it would be naive to deny it. But it also means something else that seems underestimated to me: a potentially enormous competitive advantage.
When the global market starts demanding digital products that are secure, transparent, and accessible, those who have already built these qualities into the way they work will be ahead.
Compliance can be a burden, sure. But it can also be an investment, if you integrate it into the process and don’t treat it like a sheet to sign at the end.
An sbom (a software bill of materials) compiled out of obligation is a file in a folder. An sbom compiled with awareness is a map of your product, a governance tool, almost a declaration of maturity.
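To make the difference concrete: treating an sbom as a map means actually reading it. Here is a minimal sketch that parses a CycloneDX-style sbom and flags components with no declared license. The component names, versions, and the simplified field layout are illustrative assumptions, not the full CycloneDX format.

```python
import json

# Illustrative CycloneDX-style sbom fragment. The component names and
# the simplified "licenses" field are assumptions for this sketch.
sbom = json.loads("""
{
  "bomFormat": "CycloneDX",
  "components": [
    {"name": "libfoo", "version": "1.4.2", "licenses": ["MIT"]},
    {"name": "libbar", "version": "0.9.1", "licenses": []}
  ]
}
""")

def summarize(bom: dict) -> list[str]:
    """Turn the sbom from 'a file in a folder' into a quick map:
    list each shipped component and flag missing license data."""
    report = []
    for component in bom.get("components", []):
        licenses = ", ".join(component.get("licenses", []))
        report.append(
            f"{component['name']} {component['version']}: "
            f"{licenses or 'NO LICENSE DECLARED'}"
        )
    return report

for line in summarize(sbom):
    print(line)
```

Even a loop this small changes the relationship with the document: it stops being something you sign and becomes something you query.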
Open source: the greatest and most fragile gift of the digital era #
There’s something deeply beautiful about open source. Someone writes a piece of code, publishes it, and tells the world: take it, use it, improve it.
It’s generosity, but not in the romantic sense. It’s the construction of a common good.
And the global digital economy rests on that common good. That’s not an exaggeration. If that software had to be rewritten from scratch, the estimated value would be on the order of trillions. But the people who maintain it receive a tiny fraction of that value.
The problem is systemic. Big platforms build billion-dollar services on libraries maintained by exhausted volunteers. Governments use open source in critical infrastructure without truly contributing to maintenance. Companies integrate open components into their proprietary products without knowing what’s inside, until a vulnerability forces them to find out in the worst possible way.
Cory Doctorow talks about enshittification, that cycle in which platforms extract value until they degrade the ecosystem. But open source isn’t a platform. It’s more like a garden.
And a garden, if everyone harvests and no one waters, dies.
The good news is that something is moving. Europe, with the CRA, is starting to distinguish between those who maintain code out of passion and those who commercialize it. Some companies are creating dedicated funds. But the most effective response remains the simplest—and also the most uncomfortable.
If you use open source in your product, contribute. With code, with funds, with recognition. Not because you have to. Because it’s smart, and because it’s right.
Accessibility: the ultimate test of our intentions #
There’s an almost foolproof way to tell whether a company takes its values seriously. Don’t look at the “about us” page. Look at whether its site works with a screen reader.
Accessibility is where rhetoric meets reality. You can talk about inclusion and diversity all you want, but if your digital product can’t be used by someone with a visual, motor, or cognitive disability, those words become empty.
And then, let’s be honest, we’re not talking about a niche.
In Europe, tens of millions of people live with some form of disability. Add to that older adults, people with temporary disabilities, anyone who finds themselves in an unfavorable context: sunlight hitting the screen, a slow connection, an old device.
Accessibility isn’t a favor. It’s a measure of the quality of our work. Accessible software is almost always better software: cleaner in the code, clearer in the interface, more robust.
The European Accessibility Act will make many things mandatory starting in 2025. But anyone who waits for the law before doing the right thing has, in my opinion, already missed the point.
ai developers: you’re not in danger, you’re in transformation #
Here I’m talking to people who do my same job. Those who live between commit and deploy, between bug and refactor.
I know there’s fear. I see it in conversations, in messages, sometimes in jokes. The question is always the same, even when it isn’t said: “in five years, will there still be a need for me?”
Breathe.
We went from spreadsheets to relational databases. From static sites to frameworks. From manual deploy to ci/cd. And now from hand-written code to ai-assisted code. Every time the ground shook. Every time, those who managed to adapt discovered the new ground was more fertile than the old.
The value was never in typing. It was in understanding. In the ability to look at a problem and see the structure beneath the surface. To talk with a user and translate frustrations into a system that works. To choose when to build and when to reuse. To know that a test isn’t bureaucracy, it’s love for the future.
These skills aren’t automatable. They’re amplifiable.
A machine that writes code is a multiplier of your abilities, but only if you have abilities worth multiplying. An ai IDE in the hands of someone who understands what they’re doing is an extraordinary tool. In the hands of someone who doesn’t understand, it’s a high-speed technical debt generator.
Study, yes, but not only the new tools, because they’ll change. Study the fundamentals: architecture, system design, security principles, accessibility. Study people: how they communicate, what they fear, what they hope for. And also study the regulatory context, not because it’s fun, but because it defines the playing field.
And above all, don’t stop being curious. Curiosity is a strange thing, not always “productive” in the corporate sense of the term. But it’s one of the few competitive advantages a machine can’t truly replicate.
A matter of trust #
In the end, this whole conversation—about ai, about open source, about regulation, about accessibility—revolves around a single word: trust.
Do we trust the systems we can’t see? Should we? And under what conditions?
Trust isn’t built with statements of intent. It’s built with concrete choices, repeated over time, verifiable. It’s built by documenting dependencies, testing code, making the product accessible, explaining the algorithm’s decisions, being accountable for mistakes.
For decision-makers, technology isn’t a department. It’s the language the organization is written in. You don’t have to become programmers, heaven forbid. But you do need to understand enough to ask uncomfortable questions of vendors, tell a real risk from empty reassurance, understand what’s inside the software that runs the processes.
For us developers, the job was never only technical. Every architectural choice is a choice of values. Every piece of data we decide not to collect is a right we choose to respect. Every interface we make accessible is a door we open.
Code is power, and power comes with responsibility.
So: wood and code #
My grandfather wouldn’t have understood my job. He probably would have shaken his head at certain words, and then changed the subject.
But I think he would have understood the principle.
He knew a well-made piece of furniture is recognized by the joints you can’t see. By stability that lasts over time. By the care put into details the customer will never notice, but that make the difference between an object that lasts twenty years and one that creaks after six months.
Good software is the same. You recognize it by what you don’t see: by security that isn’t breached, by accessibility that excludes no one, by privacy that isn’t betrayed, by documentation that allows whoever comes after to understand what you did and why.
It’s not a job that ai will take away from us. It’s a job that ai, in a sense, is giving back to us in its purest form: not as a mechanical act of translation, but as a human act of care.
And care, as my grandfather knew while looking at wood, you can feel it.
This article is for those who write code and for those who depend on it without knowing it. For those who decide without understanding and for those who understand without being able to decide. Maybe for all of us, because in the end the answer is always the same: it depends on us.