
The advisory blind spot: what an IT vendor knows that an analyst doesn’t


A few weeks ago I received a report from a well-known advisory firm that assessed the IT services market in our segment. I read it carefully, because my customers read this stuff before deciding who to work with. The report was well written, the market analyses were solid, the trends identified were correct. And yet, as I read it, I kept thinking: whoever wrote this has never delivered a software project to an Italian public administration customer. They’ve never negotiated an SLA with a procurement office that doesn’t know what an SLA is. They’ve never had to explain to a healthcare executive why their legacy system can’t simply “be upgraded” without rebuilding the entire integration with the regional information system.

This isn’t a criticism. It’s an observation about a structural blind spot in the IT advisory market. The people who advise companies on how to buy technology, in the vast majority of cases, have never sold technology. And the people who sell and deliver technology have no voice in how the advisory market defines sourcing best practices. The result is an information gap that costs real money to both buyers and sellers.

I say this with firsthand knowledge. I run a small ICT company that lives on contracts with mid-market and public administration customers. I’ve written quotes, negotiated contracts, defined SLAs, managed escalations, taken penalties, delivered projects late and projects early. I’ve seen the IT services market from the side of the table that sourcing analysts rarely see. And what I see doesn’t always match what advisory frameworks describe.

First blind spot: pricing. Most advisory frameworks evaluate IT vendors on cost per person-day or cost per function point. These metrics are understandable and comparable, and fundamentally obsolete. The problem is that AI is making the relationship between work time and delivered value completely non-linear. A five-person team using agentic coding can deliver in a week what a fifteen-person team used to deliver in a month. The value to the customer is identical, but the person-day count collapses from roughly 300 to 25. If the customer pays by person-day, the vendor has a perverse incentive not to adopt AI, because they bill less. If the vendor adopts AI and the customer keeps evaluating by person-days, the vendor looks an order of magnitude more expensive per day even when the total project cost is lower.
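The distortion is easy to see with back-of-the-envelope arithmetic. A minimal sketch, using the scenario above with an entirely hypothetical fixed price for the deliverable:

```python
# Illustrative arithmetic (hypothetical numbers): the same fixed-price
# deliverable, produced by a traditional team vs. an AI-assisted one.

DELIVERABLE_PRICE = 60_000  # EUR -- what the outcome is worth, hypothetical


def person_days(team_size: int, working_days: int) -> int:
    """Total billable person-days for a team over a delivery window."""
    return team_size * working_days


traditional = person_days(team_size=15, working_days=20)  # ~1 month -> 300
ai_assisted = person_days(team_size=5, working_days=5)    # ~1 week  -> 25

# Same outcome, same total price -- but the per-day rate the customer
# sees explodes when the person-day denominator shrinks.
rate_traditional = DELIVERABLE_PRICE / traditional  # 200.0 EUR/day
rate_ai = DELIVERABLE_PRICE / ai_assisted           # 2400.0 EUR/day
```

The total cost to the customer is identical, yet a framework that compares vendors on day rate would rank the AI-assisted team as twelve times more expensive.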

This isn’t a theoretical problem. It’s a problem I face every month when I present quotes. I stopped quoting by person-day more than a year ago. I quote by deliverable: this is the outcome, this is the price, these are the conditions. Some customers appreciate it. Others—especially in the public sector, where tenders are structured around person-days—don’t know how to handle it. The purchasing framework doesn’t account for a vendor that delivers more value in less time.

Second blind spot: compliance as a sourcing criterion. Advisory frameworks are starting to add regulatory compliance as a vendor evaluation criterion. But they add it as a checkbox: “Is the vendor GDPR compliant? Yes/No.” That’s radically insufficient. Compliance isn’t a binary property. It’s a spectrum. A vendor can have a privacy policy on their website and no technical mechanism for data minimization in the code. They can claim to be AI Act compliant without having any idea what a risk assessment for high-risk AI systems is. They can tick the SBOM box without having a pipeline that generates an SBOM automatically on every build.

A serious sourcing framework, in today’s European regulatory context, should evaluate compliance not as a statement but as a technical capability. Does the vendor have a documented vulnerability handling process? Does their CI/CD pipeline include dependency scanning? Are their SBOMs generated automatically or compiled by hand? Does their software have automated tests that verify privacy properties? These questions are more revealing than any certification. And they’re questions that only someone with operational delivery experience knows how to ask.
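What does an automated test for a privacy property look like in practice? A minimal sketch, where the serializer, field names, and allow-list are all hypothetical. The point is that data minimization is asserted in code, on every build, rather than stated in a policy document:

```python
# Hypothetical data-minimization check: an analytics export must never
# contain personal identifiers, only the fields on an explicit allow-list.

SENSITIVE_FIELDS = {"fiscal_code", "email", "phone", "date_of_birth"}
ANALYTICS_ALLOW_LIST = {"user_id", "region", "signup_year"}


def serialize_for_analytics(user: dict) -> dict:
    """Export a user record for analytics, keeping only allowed fields."""
    return {k: v for k, v in user.items() if k in ANALYTICS_ALLOW_LIST}


def test_analytics_export_is_minimized():
    user = {
        "user_id": 42,
        "region": "Lazio",
        "signup_year": 2021,
        "fiscal_code": "RSSMRA80A01H501U",   # made-up value
        "email": "mario@example.com",
    }
    exported = serialize_for_analytics(user)
    # No sensitive field may leak into the export.
    assert not SENSITIVE_FIELDS & exported.keys()
    assert exported.keys() == ANALYTICS_ALLOW_LIST


test_analytics_export_is_minimized()
```

A vendor that can show you tests like this, running in CI, is demonstrating compliance as a capability. A vendor that can only show you a PDF is demonstrating a checkbox.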

Third blind spot: IT SMEs. The advisory market is built around the big system integrators. The evaluation matrices, the Magic Quadrant, the Wave, the Provider Lens—all of these tools are calibrated for companies with hundreds of millions in revenue. IT SMEs—which in Europe represent the vast majority of software services providers—are invisible in these frameworks. Not because they don’t deliver value, but because they don’t have the scale to participate in analysts’ evaluation processes.

This creates a paradox. The mid-market customer—the 200–500 employee company, the ASL (local health authority), the municipality, the chamber of commerce—reads analyst reports and sees only big names. But the big names don’t serve the Italian mid-market, or they serve it with junior teams overseen by a partner you never actually see. The vendor that actually delivers the work—the local SME with ten people, a senior per project, and a direct relationship with the decision-maker—doesn’t appear in any framework. The information gap is total.

Fourth blind spot: contract governance. Advisory frameworks are very good at evaluating the vendor selection phase: how to choose, how to compare, how to negotiate. They’re much less good at evaluating the execution phase: how to monitor, how to intervene when things go wrong, how to manage scope change. And in my experience, 90% of problems in IT projects don’t come from choosing the wrong vendor. They come from managing the contract poorly after the vendor has been chosen.

I’ve seen projects fail not because the vendor was incompetent, but because the customer didn’t have internal governance capable of making decisions about requirement changes. I’ve seen SLAs negotiated down to the last cent that nobody monitored. I’ve seen contractual penalties that were never applied because the customer depended too much on the vendor to be able to afford to fine them. These are recurring patterns that any IT vendor knows intimately and that advisory frameworks treat as exceptions.

Fifth blind spot, perhaps the deepest: the relationship between technical quality and business outcome. Advisory frameworks evaluate vendors on dimensions like price, track record, team size, certifications. They rarely evaluate the technical quality of the software they produce. How many automated tests does the codebase have? What’s the coverage? How is the architecture structured? Is the code maintainable? Is the documentation up to date? These are the dimensions that determine the total cost of ownership of software—the real TCO, not the declared one—and they’re dimensions that sourcing analysts aren’t equipped to assess.
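The gap between declared cost and real TCO can be sketched with deliberately crude arithmetic. All numbers here are hypothetical, and the maintenance ratio is a stand-in for everything the paragraph above lists: test coverage, architecture, documentation, maintainability:

```python
# Back-of-the-envelope TCO sketch (hypothetical numbers): the declared
# cost is the build price; the real cost depends on code quality, which
# here is compressed into a yearly maintenance ratio.


def total_cost_of_ownership(build_cost: float,
                            yearly_maintenance_ratio: float,
                            years: int) -> float:
    """Build cost plus maintenance over the software's lifetime."""
    return build_cost * (1 + yearly_maintenance_ratio * years)


# A well-built codebase might cost ~15% of the build price per year to
# maintain; a poorly built one 40% or more. Over five years the
# "cheaper" vendor becomes the more expensive one.
cheap_but_messy = total_cost_of_ownership(80_000, 0.40, 5)     # 240000.0
pricier_but_clean = total_cost_of_ownership(120_000, 0.15, 5)  # 210000.0
```

A selection process that compares only the 80,000 against the 120,000 picks the wrong vendor, and no dimension in the standard evaluation matrices would have caught it.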

The result is that the market rewards the ability to sell—convincing decks, solid references, large teams—rather than the ability to build. And that explains why so many IT projects cost twice what was expected and deliver half of what was promised: selection happens on criteria that don’t predict execution quality.

I’m writing these things not for the pleasure of criticizing a sector I respect and that, frankly, interests me. I’m writing them because I believe the IT advisory market is at an inflection point. AI is changing the economics of delivery. European compliance is changing purchasing criteria. The European mid-market needs evaluation frameworks that aren’t simplified versions of enterprise ones. And IT vendors—those who actually build and deliver the software—have operational knowledge that, if integrated into advisory frameworks, would make them far more useful.

I don’t claim to have the solution. But after fifteen years on the other side of the table, I know which questions should be asked and aren’t being asked. I know which metrics predict a project’s success and which only predict the quality of the initial presentation. I know what changes in pricing when AI enters the delivery cycle. I know what compliance means in a production codebase, not in a policy document.

This is knowledge the advisory market doesn’t have, not because it’s incompetent, but because it is structurally separated from the world of delivery. And maybe it’s time to build a bridge.