I often find myself in a scene I can recognize within the first minute. Someone mentions compliance—maybe in a project meeting or during a call with a slightly more structured client—and the response comes almost automatically.
“Legal will handle it.”
I’m not saying it to be cynical. It’s just that, in 2026, that sentence risks becoming one of the most expensive ones a European software company can say.
Because what’s coming doesn’t look like a policy update or a round of notices. It looks much more like a change of foundations. And, even more inconveniently, it comes with very tight deadlines.
The perfect storm: five deadlines in far too short a window #
If you build software for the European market, the calendar is no longer a detail to keep in a shared file. It’s a timeline that seeps into your architecture.
Between the end of 2024 and 2026, regulations pile up that, taken one by one, you might just about manage. Taken together, though, they interlock by design.
NIS2 has already been in force since October 2024 and it expanded the perimeter of who needs to take cybersecurity seriously, including the supply chain. And here the first surprise often hits: you don’t have to be “critical” to feel the effects. You just have to be a supplier to someone who is.
The European Accessibility Act has already been in effect since June 2025 and, regardless of how it will be interpreted in different contexts, it moves accessibility from the world of “improvements” to the world of “requirements”.
Then 2026 arrives, which is the year the knot tightens.
In August 2026 the AI Act enters full application for high-risk systems. It’s not just a matter of ethics or transparency. We’re talking conformity assessments, technical documentation, data governance, human oversight. And yes, real penalties too: up to €15 million or 3% of global annual turnover, whichever is higher.
In September 2026 the Cyber Resilience Act brings obligations that, as they’re written, you can’t “add later”. Among these is reporting actively exploited vulnerabilities within 24 hours and serious incidents within 72 hours. And the part many underestimate is this: it doesn’t apply only to future products. It also applies to what you already have on the market. Legacy included.
In December 2026 the Product Liability Directive redefines what a “product” is in the digital era. Software, even standalone or in SaaS form, becomes a product in every sense, with strict liability. Bugs stop being just an operational annoyance and become potentially a product defect, with direct legal consequences. And responsibility doesn’t end at release, it ends when you no longer have the ability to provide updates.
And in the background there’s GDPR, which never went away, with enforcement that’s getting less and less shy.
Maybe the most destabilizing thing is that these rules don’t add up linearly. They reinforce each other.
The fundamental mistake: treating compliance like an external layer #
The most common reaction is understandable: “let’s call the consultant, update the policies, fix the documentation, do the checklists”. It’s the approach we could call compliance-as-paperwork.
With GDPR, however badly it was often done, this strategy sometimes held. You could paste a set of processes, notices, registers, and roles on top of an existing system.
With the 2026 package it doesn’t hold anymore. And not because there’s a lack of goodwill. Because there are requirements that, if they aren’t supported by your architecture and by how you build and ship software, simply don’t exist.
Let’s take the most concrete case possible: to comply with the CRA, when an actively exploited vulnerability emerges, you must be able to react and report within the required timeframes. But to do that you have to know precisely what’s inside your product. Not “more or less”. Exactly.
Direct dependencies, transitive ones, components, versions, builds. Everything.
That takes you straight to an object that is not legal and not bureaucratic: the SBOM, Software Bill of Materials. And not an SBOM in PDF generated once to make someone happy. A living SBOM, regenerated on every build, in a machine-readable format, integrated into the pipeline.
If your build process doesn’t produce an SBOM as a natural output, it’s not that “you’re a bit behind on compliance”. It’s that you can’t be compliant, period.
And here, in my opinion, the most important sentence in this whole discussion becomes clear: compliance isn’t a requirement you add. It’s an emergent property of your architecture.
Compliance as an architectural constraint #
In software architecture we’re used to constraints. Latency, availability, scalability, compatibility, costs. They’re things that narrow the space of possible solutions.
Well, EU compliance 2026 is an architectural constraint. Not a ticket to assign “at the end of the project”, not a document to produce “before the tender”. A constraint that cuts across the system.
And when you look at it that way, some consequences become almost obvious.
Observability is no longer an extra #
The AI Act, for certain systems, requires logs and traceability. The CRA pushes you toward detection and response capabilities. The PLD puts you in a position where you must be able to prove the state of the software at the time of the incident.
If you don’t have sufficient audit trails, it’s not that “you monitor too little”. You’re building a system that can’t withstand the questions the market and regulations will ask you.
Dependency management becomes a critical process #
For years we normalized the idea that updating libraries is work you do “when there’s time”. A rushed npm install, a lockfile nobody looks at, an update postponed because “it works anyway”.
With CRA and NIS2, that lightness becomes supply chain risk. And supply chain risk, in 2026, doesn’t stay confined to the technical department. It propagates into contracts, tenders, partnerships.
The question you must be able to answer is very simple and very hard: “does this newly released CVE impact our products in production?”. And the answer has to come in minutes or hours, not weeks.
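To make that question concrete: if every build’s SBOM feeds a central inventory, the answer becomes a lookup rather than an investigation. This is a minimal sketch under assumed names; the inventory shape, products, and components are all hypothetical illustrations, not a real API.

```python
# Hypothetical inventory: product -> set of (component, version) pairs,
# populated from the SBOM each build produces.
INVENTORY = {
    "billing-api": {("log4j-core", "2.14.1"), ("jackson-databind", "2.15.2")},
    "storefront": {("lodash", "4.17.21")},
}

def affected_products(component: str, vulnerable_versions: set[str]) -> list[str]:
    """Return the products that ship an affected version of the component."""
    return sorted(
        product
        for product, components in INVENTORY.items()
        if any(name == component and version in vulnerable_versions
               for name, version in components)
    )

# "Does this newly released CVE impact our products in production?"
affected_products("log4j-core", {"2.14.0", "2.14.1"})
```

The point is not the ten lines of Python, it’s that the data exists at all: without per-build SBOMs feeding an inventory, there is nothing to query.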
The concept of “finished product” falls apart #
The PLD and the CRA, together, make the idea that a release closes a chapter less credible. Shipping becomes the start of a long-term relationship with a system that must be maintained, monitored, cared for.
This also changes how we estimate, plan, sell. Because part of the value, and of the responsibility, lives after delivery.
Accessibility is a property of the design system #
The EAA isn’t kind to retrofits. If you have an accessible-by-default design system, you have a structural advantage. If you have to “make accessible” a frontend that grew for years without rules, the volume of findings you get back can become unmanageable.
And here I often wonder whether we’re underestimating how much accessibility is, in reality, a form of internal standardization. Not just an external requirement.
Human oversight isn’t a button #
One of the most common misunderstandings about the AI Act is thinking that “human oversight” means adding an approval step at the end.
In practice, it’s a question of flows: where a human can intervene, with what information, with what ability to override, and with what traceability. It’s decision-process architecture even before it’s software architecture.
The intersection many aren’t looking at #
The most interesting point, and maybe the most dangerous, isn’t the individual obligations. It’s how they fit together.
The PLD says a software product is defective if it doesn’t meet applicable cybersecurity requirements. The CRA defines an important part of those requirements. The AI Act adds specific requirements when there’s AI.
So non-compliance with the CRA can become a product defect under the PLD.
We’re no longer talking only about an administrative fine or a non-conformity “to fix”. We’re talking about civil liability for damages, with dynamics like reversal of the burden of proof.
And to defend yourself, you must be able to prove the product was compliant at the time of the incident. Which, again, brings you back to SBOM, audit trail, logging, technical documentation. Not as compliance theater, but as defensive architecture.
The Italian paradox: fragile, but fast #
Here comes a piece that I feel very close to, because it’s the everyday reality of many companies I talk to.
The Italian software fabric is largely made of SMEs: teams of 5 to 20 people, vertical products, growth by layering, roadmaps dictated by customers, technical debt accumulated with a feature-first logic.
Many of these companies are unprepared. Not out of inability, but because of structure.
And yet there’s a paradox: the same structure that makes them vulnerable can turn into an advantage.
A 10-person company, if it truly decides to, can redesign important parts of its architecture in 12 months. A large organization often takes months just to understand what it has in production.
A small team knows its domain and its product intimately. It can do a realistic gap analysis in weeks.
And a founder-CTO who invests today in compliance-by-design can move within a window that, in 18 months, will already be closed. Not because it will be impossible, but because it will be too late to do it calmly.
Compliance-by-design: architectural decisions, not slogans #
When I say compliance-by-design I don’t mean “doing things properly” in a generic way. I mean concrete choices you can start making now, even without revolutionizing everything.
The SBOM as a first-class artifact, for example. Every build produces an SBOM in CycloneDX or SPDX format. The pipeline generates it, stores it, compares it with vulnerability databases, and if needed blocks a deploy. Tools like Syft, Grype, or Trivy make this much more accessible than it sounds.
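As a sketch of what “blocks a deploy if needed” can mean in practice: parse the CycloneDX JSON the build produced and fail when a component matches a known-vulnerable list. In a real pipeline that list would come from a scanner like Grype or Trivy; here it’s inlined, and the gate logic is a deliberately simplified assumption.

```python
import json

# Inlined for illustration; normally fed by a vulnerability scanner.
KNOWN_VULNERABLE = {("openssl", "1.1.1k")}

def gate(sbom_json: str) -> list[str]:
    """Return the vulnerable components found in a CycloneDX SBOM."""
    sbom = json.loads(sbom_json)
    return [
        f'{c["name"]}@{c["version"]}'
        for c in sbom.get("components", [])
        if (c["name"], c["version"]) in KNOWN_VULNERABLE
    ]

sbom = json.dumps({
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "components": [
        {"name": "openssl", "version": "1.1.1k"},
        {"name": "zlib", "version": "1.3"},
    ],
})
findings = gate(sbom)  # non-empty means the pipeline refuses to deploy
```

The value is in where this runs: as a mandatory pipeline step, not as an audit someone performs quarterly.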
Then there’s the audit trail, but not the one from system logs. A domain audit trail: who did what, when, why, with what role, in what context. It can be event sourcing or an append-only log, but the idea is that it’s a first-class citizen of the data model.
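A minimal sketch of that idea, with an event shape that is purely an assumption for illustration: an append-only log where past entries can be read but never rewritten.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass(frozen=True)
class AuditEvent:
    actor: str     # who
    role: str      # in what role
    action: str    # did what
    reason: str    # why
    at: datetime   # when

class AuditTrail:
    def __init__(self):
        self._events: list[AuditEvent] = []

    def record(self, actor: str, role: str, action: str, reason: str) -> None:
        self._events.append(
            AuditEvent(actor, role, action, reason, datetime.now(timezone.utc))
        )

    def events(self) -> tuple[AuditEvent, ...]:
        # Expose an immutable view: the trail is append-only by construction.
        return tuple(self._events)

trail = AuditTrail()
trail.record("bob", "support", "export_user_data", "GDPR access request #4411")
```

Whether this is backed by event sourcing, a database table with no UPDATE grants, or a write-once store matters less than the property itself: domain actions leave evidence as a side effect of happening.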
Technical documentation as code is another key point. If documentation lives in a manually updated wiki, sooner or later it stops representing reality. If instead you use versioned ADRs, declarative specs, and documentation generated from code, then documentation becomes an inevitable byproduct of the work.
And vulnerability management can’t be an annual event. It has to be a continuous process: automated scanning, triage, remediation with defined timelines. When the report of an actively exploited vulnerability comes in, the system must help you react within hours.
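“Remediation with defined timelines” can be as simple as deriving a deadline from severity the moment a finding is triaged. The SLA values below are hypothetical placeholders; each team sets its own. The point is that they are explicit and computed, not renegotiated per incident.

```python
from datetime import datetime, timedelta

# Hypothetical internal SLAs; not prescribed by any regulation.
SLA = {
    "critical": timedelta(days=2),
    "high": timedelta(days=7),
    "medium": timedelta(days=30),
    "low": timedelta(days=90),
}

def remediation_deadline(found_at: datetime, severity: str) -> datetime:
    """Deadline by which a triaged finding must be remediated."""
    return found_at + SLA[severity]

found = datetime(2026, 3, 1, 10, 0)
deadline = remediation_deadline(found, "critical")
```

Coupled with the SBOM inventory, this gives you the two halves of continuous vulnerability management: knowing what you ship, and knowing by when each finding must be gone.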
On accessibility, the thing I’ve seen truly work is treating it as design tokens and as a rule of the design system. If components are born accessible, the product tends to stay that way. If you have to fix it later, every new screen adds debt.
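Treating accessibility as a design-token rule can be made mechanical. This sketch implements the WCAG 2.x contrast-ratio formula (relative luminance of sRGB colors), so a build step can reject a foreground/background token pair that fails the AA threshold for normal text (4.5:1); wiring it into a pipeline is left as an assumption.

```python
def _luminance(hex_color: str) -> float:
    """WCAG relative luminance of a #rrggbb color."""
    def channel(c: int) -> float:
        s = c / 255
        return s / 12.92 if s <= 0.03928 else ((s + 0.055) / 1.055) ** 2.4
    r, g, b = (int(hex_color[i:i + 2], 16) for i in (1, 3, 5))
    return 0.2126 * channel(r) + 0.7152 * channel(g) + 0.0722 * channel(b)

def contrast_ratio(fg: str, bg: str) -> float:
    """WCAG contrast ratio between two colors, in the range 1..21."""
    lighter, darker = sorted((_luminance(fg), _luminance(bg)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Black on white is the maximum: 21:1. WCAG AA for normal text needs >= 4.5:1.
ratio = contrast_ratio("#000000", "#ffffff")
```

If the design system’s color tokens pass a check like this at build time, every component assembled from them inherits the guarantee for free.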
AI-native development as an accelerator, not just a risk #
There’s a small irony in the fact that the AI Act arrives while AI is changing the way we write software.
Many see AI-native development as a risk for compliance: more generated code, less control. I suspect that, in many cases, the opposite is true.
A spec-driven approach, where software is born from declarative, readable and verifiable specifications, is inherently more favorable to conformity. Because specs are documentation. Because they’re versioned. Because they make explicit assumptions that otherwise stay in the head of whoever wrote that piece of code two years ago.
Project conventions like claude.md files, tools like Speckit, and pipelines that generate compliance artifacts as part of the flow aren’t science fiction. They’re a different way of working, one that can make it easier to produce evidence, traceability, reconstructability.
Maybe the future isn’t “writing more documentation”. It’s building software so that documentation is inevitable.
The real cost of non-compliance is much closer than a fine #
Penalties are impressive, but for many SMEs they remain abstract. It’s not fear of the fine that changes priorities.
The real cost is more everyday.
It’s the public tender you can’t participate in because you don’t meet the security requirements in the specs.
It’s the enterprise customer who asks you for the SBOM and you don’t know where to start.
It’s a partner who does supply chain risk management and excludes you from the shortlist.
It’s an incident you can’t handle within the timeframes required by the CRA and that gets entangled in a tougher liability context.
For many Italian companies, these aren’t theoretical scenarios. They’re things that look a lot like 2027.
The window is now, and deep down it’s about good engineering #
As a software architect, with my head often split between roadmap, budget, technical debt, and market demands, I understand well how all this can feel overwhelming.
But there’s one aspect worth keeping front and center: many of the practices required by these regulations are simply good engineering.
Generating an SBOM is good engineering. Having an audit trail is good engineering. Managing dependency vulnerabilities is good engineering. Documenting architectural decisions is good engineering. Building accessible interfaces is good engineering.
What EU compliance 2026 is doing, perhaps, is making mandatory what we should have been doing anyway. It’s turning best practices into a baseline.
And it’s creating a market where those who treat compliance as an architectural problem, and not as a practice to delegate to legal, end up with software that’s more robust, more maintainable, and also more sellable.
The window to build that advantage is open now. In eighteen months, those who haven’t started will be chasing. Those who start now will probably find themselves on the other side.