A question SMEs ask under their breath #
Every now and then I end up talking with entrepreneurs who have ten, thirty, eighty people. They don’t have a Chief AI Officer, they don’t have an eighteen-month roadmap, and often they don’t even want to be told that they “have to transform.” They have another urgency, much more concrete: keeping the company running.
And yet the question comes anyway, even if it’s rarely said exactly like this: so what, concretely, are we supposed to do with AI?
Big companies are already asking it officially, with budgets, dedicated teams, and programs with acronyms that sound important. SMEs, instead, are in different territory, one of confusion and contradictory signals. And it’s not their fault. It’s that the public conversation about AI often feels designed for people who have time, resources, and structures an SME will never have.
So I’ll try to put it simply, almost brutally.
How many people on your team know how to get a better result from an AI agent than they would produce on their own?
Not how many “use it.” Not how many have ChatGPT open in a tab. How many know how to give precise instructions, critically evaluate the output, iterate methodically until something comes out that truly holds up. In sales, marketing, operations, finance, legal, product, HR.
If the honest answer is “few” or “I don’t know,” you probably have a problem. And in SMEs that problem weighs more, because you don’t have the luxury of postponing.
The most expensive misunderstanding of 2026 #
The misunderstanding, in my opinion, is this: thinking AI is a tech issue.
It isn’t anymore. Maybe it hasn’t been for at least a year.
AI has become a multiplier of individual capability, cross-functional to every role. A salesperson who can build a competitive analysis in ten minutes is playing a different game than someone who takes two days. Whoever handles administration and can generate a sensible cost dashboard in an hour is competing in a different category than someone who assembles everything in a week, fishing data out of old, half-broken Excel files.
And here we’re not talking about the usual word “efficiency.” Efficiency is doing the same things faster. Here we’re talking about people who start doing things they didn’t do at all before, because the cognitive cost was too high.
A truly personalized offer for every prospect, instead of the usual recycled PDF with the name changed on the cover. A cash flow analysis of the last three years that shows you patterns no one ever pointed out. Project progress reports that didn’t exist before because “there’s no time.”
It could happen.
But only if someone knows how to govern the tool. If they treat it like a collaborator to direct, not like a toy to ignore or an oracle to interrogate at random.
“Governing” isn’t “using” #
This distinction is the heart of everything, and it strikes me how often it doesn’t get made.
Using AI is asking for something and accepting whatever comes back. It’s prompt-and-pray. You write a request, copy the answer, paste it into an email, and hope it’s fine. It’s understandable—at the beginning everyone does it. But it’s worth nothing, and in some cases it’s even dangerous.
Governing AI is something else. It’s knowing what to ask, how to ask it, and above all understanding when the answer is wrong even if it looks perfect.
It means giving context, constraints, success criteria. It means iterating for real. Not “rewrite it better,” but “rewrite it considering that our customer is a public entity, with procurement constraints and six-month decision timelines.”
Whoever governs AI has a mental model of the tool. They know what it does well, where it tends to invent, where it can become a risk. They know when to trust it and when to verify. They know when the output is a starting point and when it can be considered finished.
And no, it’s not an innate talent. It’s a skill you develop. But here comes the slightly uncomfortable part: not everyone develops it. Not because they aren’t smart, but because it takes a specific mindset, and it’s not as widespread as we like to think.
The mindset you’re ignoring in interviews #
When you hire, you usually look at experience, vertical skills, company culture, maybe leadership. All fair.
But you risk overlooking a variable that in the next two years will separate who performs from who struggles.
The variable is this: can the person in front of you work at a higher level of abstraction than execution?
Those who work at the execution level complete tasks. They write emails, prepare reports, analyze data, produce documents. They do it well, with craft. But they do it one thing at a time, and their time is the only resource.
Those who work at the specification-and-governance level define what needs to be done, with what quality, within which constraints, and then direct an agent (AI or human—at that level the distinction starts to blur) to execute it. They supervise, correct, refine. And move on.
It doesn’t mean execution doesn’t matter. It means the value of pure execution is compressing rapidly, while the capacity for governance is expanding.
If you don’t evaluate this capability during hiring, you’re hiring executors in a world that rewards directors.
Three questions that change an interview #
It doesn’t matter whether you’re hiring a marketing manager, a controller, an office manager, or a CTO. There are three questions that, in my opinion, work almost every time.
“Tell me about the last time you used an AI agent for a real deliverable” #
Not an experiment. Not a game. A finished deliverable in front of a client, a boss, a board.
Listen to how they talk about it. Did they give precise instructions or generic prompts? Did they iterate? Did they verify the output? Did they integrate their judgment or take everything as-is?
If the answer is “I’ve never done it,” in 2026 that’s a data point. Not necessarily disqualifying, but one to weigh seriously.
“How would you know if the output is wrong?” #
This is the key question.
AI produces convincing results even when they’re completely made up. A plausible but false number. A logical argument that starts from a premise that doesn’t exist. A beautifully written text that says the opposite of what you need.
Whoever knows how to govern AI has developed antibodies. They have a method, even a rudimentary one, to distinguish reliable output from toxic output.
Whoever doesn’t have it is more dangerous than someone who doesn’t use AI at all, because they produce errors with the confidence of someone who believes they’re right.
“If you had a dedicated AI assistant for eight hours a day, how would you reorganize your work?” #
This question separates those who think in terms of tasks from those who think in terms of systems.
The mediocre answer is: “I’d do the same things faster.”
The interesting answer is: “I’d do different things,” followed by a concrete vision. What would change in priorities? Which outputs would start to exist? Which activities would be eliminated or reduced?
Whoever answers well has already understood that AI isn’t just an efficiency tool. It’s a leverage tool. And they know where to place the leverage.
The elephant in the room: the key people you already have #
The biggest problem, often, isn’t who you hire. It’s who you already have.
In an SME the org chart is short. You don’t have layers of middle management. You have three, five, eight key people holding the company up.
The head of sales who knows every customer by heart. The admin who’s kept the machine running for fifteen years. The project manager who gets handed everything no one knows where else to put.
If these people don’t touch AI because “it’s not my job” or because “I’ve always done it this way and it worked,” you have a bottleneck no new hire can compensate for.
In a big company you can route around the problem with a dedicated team. In an SME you can’t. The key people are the company. If they don’t change, nothing changes.
And here comes the truly uncomfortable part, the one that usually causes a bit of irritation.
In many SMEs, the first person who didn’t get the message sits pretty high up.
If it’s you, and you’re reading with a mix of interest and irritation, maybe it’s worth stopping for a second on that irritation. I often wonder if it isn’t one of the most useful signals we have when something concerns us more than we’d like.
The choice, in the end, is clear-cut: establish that the ability to govern AI agents is an expected skill for every role, starting with yours. Not optional. Not “nice to have.” Expected, evaluated, measured.
It’s not a generational issue #
There’s another mental shortcut that feels convenient, but doesn’t hold up: “young people are digital natives, so they understand AI.”
It’s not true.
I’ve seen twenty-somethings use AI like a slightly more brilliant search engine. And I’ve seen fifty-somethings integrate it into their workflow in ways that, honestly, I wouldn’t have imagined.
The variable isn’t age. It’s the willingness to rethink how you work. To say: “maybe the way I’ve always done this thing isn’t the best anymore.”
It takes humility, curiosity, and a pinch of discomfort. Three things that don’t have an age.
So no, it’s not enough to hire young people and hope AI enters the company by osmosis. It takes a deliberate choice about which skills you value, which you reward and—let’s say it—which you demand.
The cost of inertia, in numbers (more or less) #
These aren’t precise calculations, but the order of magnitude is right.
A knowledge worker who governs AI well often produces 1.5x to 2x the output of someone who doesn’t use it. Not because they work more, but because they eliminate low-value work: manual research, first drafts, exploratory analysis, data structuring, slide prep.
In a team of ten people, if five govern AI and five don’t, it’s as if you had twelve or thirteen people. Without hiring anyone.
Now flip the reasoning.
If your competitor’s team is aligned on this skill and yours isn’t, your team of ten stays a team of ten. Theirs becomes a team of fifteen or twenty, at least for some activities.
And the gap widens every month, because the tools improve and those who govern them capture the incremental value. Those who don’t govern them stay still.
It’s not futurism. It’s arithmetic.
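The arithmetic above can be sketched in a few lines. To be clear: the 1.5x–2x multiplier and the five-of-ten split are the rough assumptions stated in this section, not measured data.

```python
# Toy back-of-the-envelope model of "effective headcount" from the
# paragraphs above. The multiplier values are the article's rough
# assumptions (1.5x-2x output per person who governs AI), not data.

def effective_headcount(team_size: int, governing: int, multiplier: float) -> float:
    """Nominal team expressed as equivalent workers without AI leverage."""
    non_governing = team_size - governing
    return non_governing + governing * multiplier

# A team of ten where five govern AI, at 1.5x:
print(effective_headcount(10, 5, 1.5))   # 12.5 -> "twelve or thirteen people"

# A fully aligned competitor's team of ten, at 1.5x and at 2x:
print(effective_headcount(10, 10, 1.5))  # 15.0
print(effective_headcount(10, 10, 2.0))  # 20.0 -> "fifteen or twenty"
```

Ten people stay ten; the aligned competitor's ten behave like fifteen or twenty, and the gap compounds as the tools improve.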
What to do Monday morning #
The good news is you don’t need task forces, consultants, or twelve-month transformation programs that cost six hundred thousand euros.
You need three practical choices, and a bit of consistency.
The first is to include AI governance capability in the evaluation criteria for every open position. Not as a technical requirement, but as a cross-functional skill, at the same level as the ability to communicate or work in a team. Ask those three questions. Really listen to the answers.
The second is to ask your direct reports how they use AI in day-to-day work. Not with an anonymous survey, but in a one-to-one conversation. You’ll discover surprising things in both directions. Those who use it well often do it quietly. Those who use it poorly sometimes don’t realize how fragile their method is.
The third is to lead by example.
If you’re an entrepreneur who has never used an AI agent to prepare an offer, analyze financial statements, or write an important communication, the message you’re sending is crystal clear: it’s not important.
And your organization will believe you.
In an SME you can’t delegate change. Either it starts with you, or it doesn’t start.
It’s not a moral #
In five years we’ll probably look at the hiring processes of 2025 the way we look at those of 2010 today, with a mix of tenderness and disbelief. “You really hired people without checking whether they knew how to work with AI?”
Really.
And whoever stops doing it earlier, without waiting for it to become “standard,” will have an advantage the others will take years to close.
Maybe the right question, in the end, isn’t whether the next person you hire knows how to use AI.
It’s whether they’ll know how to govern it. And whether you’ll be willing to demand it, even when it’s uncomfortable.