
AI and IT consulting: goodbye to time & materials

AI is making time & materials unsustainable in IT consulting. What’s left to sell: outcomes, accountability, and trust. Not hours.

There is a moment, in every industry, when a silent agreement between buyer and seller, an agreement so old it feels like natural law, suddenly cracks open. Not because someone was brave enough to challenge it. But because reality made it impossible to maintain.

In IT consulting, that moment is now.

The silent agreement was this: you pay us for our time, and we'll do our best with it. It went by many names. Time & Materials, Staff Augmentation, Body Rental. The underlying pact was always the same. You rented human hours. You hoped those hours would produce something useful. Sometimes they did. Sometimes they didn't. Either way, you paid.

For decades, this model worked well enough. Not because it was fair, or efficient, or good for anyone in particular. It worked because there was no alternative. Software was hard. Estimating it was harder. And nobody, not the client, not the vendor, not the developer staring at a blinking cursor at 2 AM, could reliably predict how long anything would take.

So we all agreed to pretend that time was a reasonable proxy for value. We built entire industries, entire careers, entire pricing spreadsheets on this fiction.

And then AI walked in and set the spreadsheet on fire.


I. The Comfortable Lie

Let me be precise about what Time & Materials actually is, because the industry has done an excellent job of dressing it up in language that obscures its fundamental nature.

T&M is a risk-transfer mechanism. That's it. The vendor transfers the risk of estimation, of scope creep, of technical uncertainty, all of it, to the client. The client pays for the attempt, regardless of the outcome. The vendor's only obligation is to show up and try.

Now, if you described any other professional relationship this way, people would laugh you out of the room. Imagine hiring a plumber who bills by the hour with no guarantee your pipes will work. Imagine a lawyer who charges for research time but won't commit to a legal strategy. Imagine a surgeon who bills per minute in the operating room, regardless of whether your appendix comes out or stays in.

And yet, in IT consulting, this was standard practice. Not just standard, but expected. Clients who pushed back were labeled "difficult." Vendors who offered fixed prices were considered either naive or reckless.

The reason was always the same: software is different. Software is complex, unpredictable, creative work. You can't estimate it like bricklaying. Requirements change. Technology evolves. The only honest pricing model is one that acknowledges this uncertainty.

This argument was not entirely wrong. Software is complex. Estimation is hard. But the conclusion, that the only honest response is to charge by the hour, was a spectacular non sequitur. The honest response would have been to get better at managing uncertainty, to build models that account for it, to develop shared-risk frameworks. Instead, the industry chose the path of least resistance: make it the client's problem.

And clients went along with it, for the most intellectually depressing reason possible: they didn't know any better. Most organizations buying IT services had no internal technical capability to evaluate what they were getting. They couldn't distinguish between a developer who spent eight hours solving a problem elegantly and one who spent eight hours creating a new problem. The only metric they had was time. So time is what they bought.

This created a market where the incentives were not just misaligned but inverted. The worse you were at your job, the more you could charge. The longer something took, the more revenue you generated. Efficiency was not rewarded; it was punished. A consultant who solved a problem in two hours instead of twenty was leaving eighteen hours of billable time on the table.

Every honest person in the industry knew this. Most of us made peace with it. Some of us told ourselves stories about "investing in quality" and "thoroughness." We weren't lying, exactly. We were just not confronting the structural absurdity of a model that could not, by design, distinguish between thoroughness and waste.


II. Enter the Machine

In 2023, something changed. Not gradually, not subtly. Like a light switch.

Large language models (ChatGPT, Claude, Copilot, and a growing menagerie of AI coding assistants) started doing things that developers used to do. Not all things. Not perfectly. But a non-trivial portion of the daily work of software development suddenly became automatable. Boilerplate code. API integrations. Test generation. Data transformations. Documentation. Migration scaffolding. The repetitive, mechanical, billable parts of the job.

The numbers varied depending on who you asked and what they were measuring, but the direction was unmistakable. Tasks that used to take hours now took minutes. Work that required a senior developer could be roughed out by a junior with AI assistance. Entire categories of effort, the kind that padded project timelines and kept T&M invoices healthy, started evaporating.

Now, if you were a consultant billing by the hour, this created an existential problem. Not a strategic challenge, not a market shift. An existential problem. Because the value you were selling was, at its core, the time it took a skilled human to do something. And suddenly, that time was collapsing.

Let me make this concrete. Say you're a consulting firm, and a client asks you to build an integration between their CRM and their ERP. In 2022, your senior developer would scope it at 80 hours. In 2024, with AI-assisted development, the same work takes 25 hours. The quality is the same. The outcome is the same. The client gets the same integration.

What do you charge?

If you bill 25 hours, you've just lost 69% of your revenue on that project. If you bill 80 hours anyway, padding timesheets, inventing complexity, stretching the work, you're committing fraud, or something close enough to fraud that the distinction is academic.
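The arithmetic is blunt enough to fit in a few lines. A toy sketch of the squeeze, using the hours from the example above and a purely illustrative blended rate (the $150/hour figure is an assumption, not from the essay):

```python
# Toy model of the T&M revenue squeeze. The 80h and 25h figures come from
# the CRM/ERP integration example; the hourly rate is hypothetical.
HOURLY_RATE = 150  # illustrative blended rate, in dollars

def tm_revenue(hours: float, rate: float = HOURLY_RATE) -> float:
    """Revenue under Time & Materials: you bill exactly what you worked."""
    return hours * rate

before = tm_revenue(80)   # 2022: hand-written integration
after = tm_revenue(25)    # 2024: same outcome, AI-assisted
drop = 1 - after / before

print(f"Before: ${before:,.0f}  After: ${after:,.0f}  Drop: {drop:.0%}")
# → Drop: 69%
```

The percentage is rate-independent: whatever you charge per hour, the revenue falls with the hours.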

This is not a hypothetical dilemma. This is happening right now, in every consulting firm that hasn't yet figured out how to escape the T&M trap. And the uncomfortable truth is that most of them haven't. They're doing what humans always do when the ground shifts under their feet: pretending it isn't shifting.

Some firms are hiding AI usage from clients, billing full human hours for AI-assisted work. Some are explicitly banning AI tools internally, not for quality reasons, but because their entire business model depends on human inefficiency. Some are using AI internally while maintaining T&M pricing, pocketing the productivity gains without passing them to clients.

None of these strategies are sustainable. All of them are, in various degrees, dishonest. And all of them will fail, because the arbitrage between what AI can do and what humans used to do is growing too fast to conceal.


III. What Actually Has Value Now?

Here is the question that keeps thoughtful consultants awake at night: if AI can generate the code, write the tests, draft the documentation, and scaffold the architecture, what exactly is left for us to sell?

The answer is everything that matters.

The great irony of the AI revolution in IT consulting is that it didn't destroy value. It revealed it. For decades, the real value of good consulting was buried under layers of mechanical work. The strategic thinking, the domain expertise, the architectural judgment, the ability to translate business needs into technical decisions: all of this was mixed in with the grunt work and charged at the same hourly rate. You couldn't separate the wisdom from the typing.

Now you can. Or rather, you have to.

What clients actually need from IT consultants has never been code. It has never been hours. It has always been (though the industry was remarkably good at obscuring this) outcomes. A working product. A solved problem. A risk mitigated. A capability unlocked.

When a manufacturing company hires an IT firm to build a predictive maintenance system, they don't care how many hours it takes. They care whether it works. They care whether it reduces downtime. They care whether the investment pays for itself. The hours are an input, not an output. They were always an input. But T&M trained everyone, buyers and sellers alike, to confuse the two.

The things that actually create value in IT consulting are precisely the things that AI can't easily replicate:

Understanding the problem. Not the technical requirements, which are usually symptoms. The actual business problem. The organizational dynamics. The political constraints. The unspoken assumptions. This requires human judgment, empathy, and often the willingness to tell a client something they don't want to hear.

Making irreversible decisions well. Architecture. Technology selection. Build-vs-buy. Data modeling. Security posture. These are decisions that compound over time, that are expensive to reverse, and that require the kind of experience that comes from having made the wrong choice before and lived with the consequences.

Taking responsibility. This is the big one, and I'll come back to it. AI can generate options. It can analyze tradeoffs. But it cannot take responsibility for a decision. It cannot stake its reputation on a recommendation. It cannot be held accountable when something goes wrong at 3 AM on a Friday. This is not a limitation of the technology. It is a feature of being human.

Navigating complexity. Not technical complexity, but organizational complexity. Compliance requirements, regulatory constraints, security obligations, accessibility mandates, data governance. The kind of complexity that requires understanding not just what to build, but what you're allowed to build, what you're required to build, and what you'll be liable for if you build it wrong.

Building trust. In an era where AI can generate plausible-looking anything (code, documentation, security reports, compliance matrices) the question shifts from "can you produce this?" to "can I trust what you produced?" Trust is a human commodity. It requires track record, accountability, transparency, and skin in the game.


IV. The Value-Based Revolution (That Everyone Talks About and Nobody Does)

Value-based pricing is not a new concept. Management consultants have been talking about it for thirty years. McKinsey doesn't charge by the hour. Neither does a good surgeon. Neither does an architect who designs a building that will stand for a century.

But in IT consulting, value-based pricing has always been the thing you admired from a distance, like a beautiful car you couldn't afford. The industry had a hundred reasons why it wouldn't work:

"We can't estimate accurately enough."
"Scope always changes."
"Clients won't pay upfront."
"It's too risky for us."

These objections were all valid, and they were all excuses. They were valid because T&M had created an ecosystem that made them true. When you've spent decades avoiding estimation risk, you never develop the muscles to manage it. When you've never committed to an outcome, you don't learn how to deliver one reliably. The model was self-reinforcing: T&M created the conditions that made T&M seem necessary.

AI has broken this cycle. Not by solving estimation (estimation is still hard) but by making the alternative untenable. When the time component of your value proposition collapses, you don't have the luxury of comfortable excuses anymore. You have to figure out value-based delivery, or you die.

Here's what value-based IT consulting actually looks like in practice:

You sell outcomes, not effort. The deliverable is not "200 developer-hours." It's "a working predictive maintenance system that reduces unplanned downtime by 15%." The client doesn't care whether it takes you 200 hours or 20. They care whether it works.

You absorb risk, not transfer it. This is the fundamental inversion. In T&M, the client bears all the risk. In value-based models, the vendor bears some or all of it. If the system doesn't work, the vendor doesn't get paid, or gets paid less. This requires confidence in your ability to deliver, which means you have to actually be good at what you do, not just good at billing for it.

You price based on the value created, not the cost incurred. If a system saves a client $2 million per year, a fee of $300,000 is reasonable even if it only took you a month to build. This makes some people uncomfortable. It shouldn't. The alternative, charging $50,000 because it only took 500 hours, rewards inefficiency and penalizes expertise.
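The two pricing logics can be put side by side with the numbers from this example (the $100/hour rate is an assumption chosen so that 500 hours yields the essay's $50,000 figure):

```python
# Value-based vs cost-plus pricing, using the figures from the example.
annual_savings = 2_000_000    # value the system creates per year
value_based_fee = 300_000     # priced on value delivered
cost_plus_fee = 500 * 100     # 500 hours at a hypothetical $100/h rate

# Client-side payback: months until the system pays for itself.
payback_months = value_based_fee / (annual_savings / 12)

# Share of first-year value each model captures for the vendor.
value_share = value_based_fee / annual_savings   # 15% of value created
cost_plus_share = cost_plus_fee / annual_savings # 2.5% of value created
```

Even at the "expensive" value-based fee, the client breaks even in under two months; the cost-plus fee simply hands the other 97.5% of first-year value to the client while penalizing the vendor for being fast.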

You invest in capabilities, not headcount. Under T&M, revenue scales with the number of billable humans. Under value-based models, revenue scales with the ability to solve increasingly complex problems. This means investing in AI tools, methodologies, domain knowledge, and process, not just hiring more bodies.

You build long-term relationships, not projects. When you sell outcomes, the relationship doesn't end at deployment. You're accountable for the ongoing performance of what you delivered. This creates recurring revenue, deeper client relationships, and a natural moat against competitors who are still selling hours.


V. The Responsibility Problem (Or: Why Most Consultancies Will Fail)

I've talked to dozens of IT consultants in the past year about the shift to value-based models. Almost all of them agree it's the right direction. Almost none of them are doing it.

The reason is not strategic complexity or market conditions. The reason is fear. Specifically, the fear of responsibility.

T&M is comfortable because it is fundamentally irresponsible, in the literal sense. The vendor is not responsible for outcomes. They are responsible for showing up and trying. If the project fails, if the software doesn't work, if the client's business suffers... well, the vendor did their best. They logged their hours. They followed the requirements (that the client wrote). The failure, structurally, belongs to the client.

Value-based models obliterate this safety net. When you sell an outcome, you own that outcome. When it doesn't work, it's your problem. When the client isn't happy, you can't point to a timesheet and say "but we put in the hours." You are, in the most uncomfortable possible way, accountable.

Most consultancies, if they're honest with themselves, are not equipped for this. Not because they lack technical talent (many of them have excellent people) but because their entire organizational structure, culture, and incentive system is built around T&M. Their project managers track hours, not outcomes. Their quality processes measure effort, not results. Their commercial teams sell rates, not capabilities.

Transforming a T&M consultancy into a value-based one is not a pricing change. It's an identity change. It requires:

  • Ruthless honesty about what you're actually good at. T&M lets you fake it: take on projects outside your competence and figure it out on the client's dime. Value-based models don't. If you commit to an outcome, you'd better be damn sure you can deliver it.

  • Genuine investment in methodology. You need estimation frameworks that work, delivery processes that are predictable, and quality assurance that catches problems before they reach the client. T&M firms often neglect these because they don't need them. If the project runs over, they just bill more hours.

  • A different kind of leadership. T&M leaders manage utilization rates and headcount. Value-based leaders manage client outcomes and delivery risk. These require fundamentally different skills, temperaments, and metrics.

  • Skin in the game. This is the hardest part. Value-based models require you to put your money where your mouth is. To accept that if you fail, you lose. To operate with the kind of commercial courage that T&M was specifically designed to avoid.

Most firms will not make this transition. Not because they can't, but because they won't. The comfort of T&M is a powerful narcotic. Many consultancies would rather die slowly on hourly billing than take the risk of committing to results.

And many of them will die. Slowly, then quickly.


VI. A Letter to the Buyer

So far, I've been hard on vendors. Fairly, I think. But the T&M problem is not just a supply-side problem. Buyers are complicit. And if AI is going to improve IT consulting, buyers need to change too.

Dear CTO / CIO / VP of Technology / Procurement Director / Whoever Signs the Checks:

You need to hear something uncomfortable. You helped create this mess.

For years, you've been buying IT consulting like you buy office supplies, by unit cost. You compared vendor proposals by looking at daily rates. You chose the cheapest option. You measured success by hours consumed versus hours budgeted. You treated software development like an assembly line, where more hours in meant more product out.

And when projects failed, as they so often did, you blamed the vendor for poor execution, never examining whether you had created the conditions for failure by buying the cheapest time at the lowest rate.

You rewarded the wrong things. You incentivized volume over value, speed over quality, compliance over courage. And then you complained that vendors never challenged your thinking, never pushed back on bad requirements, never told you that your project was doomed from the start.

They didn't tell you because you were paying them by the hour. A consultant who tells you "this project shouldn't exist" is a consultant who just talked themselves out of six months of billing. T&M punishes honesty. You built the system. You get the behavior the system incentivizes.

Here's how to buy IT consulting in the post-AI era:

Stop buying hours. Start buying outcomes. Define what success looks like, in business terms, not technical terms, and ask vendors to commit to it. Yes, it will cost more upfront. No, it won't cost more in total. You've been paying for failed projects for years; you just spread the cost across enough timesheets that it didn't look like failure.

Pay more for better vendors. The cheapest vendor is never the cheapest. This has always been true, but AI makes it even more true. A vendor who uses AI effectively and commits to outcomes will charge a higher fixed price than a vendor who bills junior developers at $40/hour. The first vendor will cost you less. The math is not complicated.

Stop writing detailed technical requirements. I know this sounds heretical. But detailed technical requirements are the client's way of pretending they can manage risk by specifying everything upfront. They can't. Requirements are always wrong, always incomplete, always outdated by the time the project starts. Instead, describe the problem. Describe the desired outcome. Describe the constraints. And then let the professionals figure out the solution. That's what you're paying them for.

Demand transparency, not timesheets. You don't need to know how many hours someone worked. You need to know whether the project is on track, whether the risks are manageable, and whether the outcome is still achievable. Ask for those things. Build relationships with vendors who will tell you the truth, even when the truth is uncomfortable.

Accept that good work costs money. The race to the bottom on consulting rates has produced exactly what you'd expect: an industry full of mediocre work by underpaid, unmotivated people managed by firms whose primary competence is bench management. If you want excellent work, pay for it. If you want accountability, pay for that too. Value-based pricing means you're paying for the vendor's confidence in their ability to deliver. That confidence is worth something.

Build internal capability. This is the one that hurts the most, because it means spending money on your own team instead of outsourcing everything. But in the AI era, the organizations that thrive will be the ones that can evaluate what they're getting. You don't need to build everything in-house. But you need enough internal expertise to be an intelligent buyer, to evaluate proposals, challenge architectures, and recognize quality when you see it.


VII. The Trust Economy

Let me zoom out for a moment and talk about what's really happening here. Not just in IT consulting, but in the economy at large.

AI is creating a world where production is cheap and verification is expensive. Any consultant can now produce a working prototype in hours. Any developer can generate thousands of lines of code in minutes. Any firm can create polished documentation, architecture diagrams, and compliance reports at the push of a button.

The question is no longer "can you produce this?" The question is "should I trust what you produced?"

This is a profound shift. For most of industrial history, production was the bottleneck. If you could make something, you had value. The ability to produce was scarce, and scarcity creates value. AI is making production abundant. And when production is abundant, the bottleneck moves elsewhere.

It moves to trust.

Can I trust that this code is secure? Can I trust that this architecture will scale? Can I trust that this system complies with regulations? Can I trust that the vendor will support this when something goes wrong? Can I trust that the AI-generated test suite actually covers the edge cases that matter?

Trust is not automatable. It is built through demonstrated competence, through transparency, through accountability, through the slow accumulation of evidence that someone knows what they're doing and will stand behind their work. It requires reputation, track record, and, critically, the willingness to be wrong and make it right.

In the trust economy, the winning IT consultancy is not the one with the most developers. It's not the one with the lowest rates. It's not even the one with the best AI tools (though those help). It's the one that clients believe. The one that has earned the right to say "trust us, this is the right approach" and be believed.

This is why the shift from T&M to value-based pricing is not just a commercial strategy. It is a trust signal. When a vendor says "we'll charge you $200,000 for this outcome, and if we don't deliver, we'll make it right," that vendor is putting their money where their competence is. They are saying, in the most credible way possible: we are confident enough in our abilities to accept the risk.

T&M sends the opposite signal. It says: we're not confident enough to commit to an outcome, so we'll charge you for the attempt. In a world awash with AI-generated output, where anyone can produce something, the vendor who won't commit to the quality of their output is telling you everything you need to know about how much they trust their own work.


VIII. What This Means for Developers

I want to address the engineers directly, because they're the ones most anxious about all of this, and they're also the ones best positioned to thrive.

If you're a developer who has built your identity around writing code, I understand the discomfort. Code was your craft. You were proud of elegant solutions, clean architectures, well-tested systems. And now an AI can produce something similar in seconds.

But here's what you need to understand: your value was never the code. Your value was the judgment that produced the code. The decision about what to build. The understanding of why this approach and not that one. The knowledge of what will break at scale, what will be impossible to maintain, what will create security vulnerabilities, what will violate compliance requirements.

AI is a lever. It amplifies capability. If you're a developer with deep understanding, AI makes you devastatingly effective. You can now execute at ten times the speed, which means your judgment, the thing that was always your real value, is applied ten times more often. You're not less valuable. You're more valuable, if you invest in the things that matter.

What matters now:

Domain expertise. Understanding the business problem, the regulatory environment, the operational constraints. A developer who understands healthcare compliance and can build systems that satisfy it is worth ten developers who can write code but can't navigate HIPAA.

Architectural thinking. The ability to make system-level decisions that compound over time. AI can generate components. It cannot design systems. Not yet, and not well. The developer who can look at a business requirement and design an architecture that will serve it for five years, considering scalability, maintainability, security, compliance, and cost: that developer is irreplaceable.

Quality judgment. AI generates plausible output. But plausible is not correct. The ability to evaluate AI-generated code, to spot the subtle bugs, the security holes, the performance antipatterns: this is a new and critical skill. It requires deep knowledge, not just of coding, but of systems, of failure modes, of edge cases.

Communication. The ability to explain technical decisions to non-technical stakeholders. To translate between business language and engineering language. To write specifications that are precise enough for AI tools and clear enough for humans. This was always valuable. It's now essential.

Specification-driven development. This is where the industry is heading. The developer's primary artifact is no longer code but the specification. The detailed, precise description of what needs to be built, how it should behave, what constraints it must satisfy. AI turns specifications into code. The developer's job is to write specifications that produce the right code. This is a fundamentally different skill from writing code directly, and it rewards the same things that good engineering has always rewarded: clarity of thought, precision of expression, and deep understanding of the problem.

If you invest in these things, the AI era is not a threat. It's a liberation. You can stop spending 70% of your time on boilerplate and start spending it on the work that actually matters.


IX. The Compliance Accelerator

There's a dimension to this that most commentators miss, because most commentators don't think about regulatory compliance until it bites them.

In Europe, and increasingly everywhere else, software is becoming a regulated product. The AI Act, the Cyber Resilience Act, the Product Liability Directive, the European Accessibility Act, NIS2, GDPR. The regulatory landscape is not just growing, it's compounding. Each new regulation adds requirements that interact with existing regulations in complex ways.

This changes the IT consulting equation in two critical ways.

First, it makes value-based pricing easier to justify. When the cost of non-compliance is measured in millions of euros in fines, the value of compliant software becomes quantifiable. A consultancy that can demonstrate compliance, with SBOMs, with security audits, with accessibility testing, with AI governance frameworks, can price based on the risk they're mitigating, not the hours they're spending.

Second, it makes T&M actively dangerous. Under the Product Liability Directive, software is a product. If it harms someone, someone is liable. Under T&M, who is liable? The vendor provided hours. The client accepted the deliverable. The responsibility is diffused to the point of evaporation. Under value-based models, the vendor commits to an outcome that includes compliance. If it's not compliant, it's the vendor's problem to fix. This is not just better pricing. It's better accountability.

Compliance is not a cost center. It's a trust signal. In a market flooded with AI-generated software of uncertain quality, the firm that can prove its output meets regulatory requirements has a massive competitive advantage. Not because compliance is exciting (it isn't) but because it's verifiable. In a trust economy, verifiable quality is the scarcest commodity there is.


X. The Uncomfortable Predictions

Let me end with some predictions. They're uncomfortable. I believe they're correct.

Within five years, T&M will be a niche model. It won't disappear entirely. There will always be genuine exploration and research work where time-based billing makes sense. But for mainstream software development and IT consulting, T&M will become what it always should have been: an exception, not the default. Clients will demand outcome-based contracts. Vendors who can't provide them will lose business to those who can.

The IT consulting industry will consolidate dramatically. The AI productivity gains create enormous economies of scale. A small firm with excellent methodology, deep domain knowledge, and effective AI tools can deliver what used to require a large team. This favors small, specialized, high-competence firms over large, generalist body shops. The middle will hollow out. You'll have boutique experts and commodity platforms, with very little in between.

Developers who resist the shift will struggle. Not because their skills are worthless (coding knowledge is foundational) but because the market will increasingly reward the application of that knowledge, judgment, architecture, specification, over the mechanical expression of it. Developers who cling to coding as an identity rather than a tool will find themselves competing with AI on AI's terms. They will lose.

Clients who continue buying hours will get worse outcomes. As the best vendors shift to value-based models, the T&M market will be left with the vendors who can't or won't commit to outcomes. These are, almost by definition, the worst vendors. Buying hours will become a signal of unsophistication, a way of telling the market that you don't know how to evaluate quality, so you're settling for the only metric you understand.

Trust will become the primary differentiator. Not technology, not price, not speed. Trust. The ability to say "we will deliver this, and we stand behind it" and have that statement be credible. Every other competitive advantage, AI tools, methodology, domain expertise, will be in service of this one thing.


Epilogue: A Confession

I work in a small IT company. We build software for healthcare organizations, public administration, finance, manufacturing. We have, for most of our existence, billed by the hour. T&M. Staff augmentation. The works.

I'm not writing this from a position of superiority. I'm writing this from a position of reckoning.

When AI started accelerating our development, we had the same uncomfortable conversations every honest consultancy is having. Our developers were suddenly three, four, five times more productive. Our project timelines were collapsing. And our billing model, the model that paid our salaries and our rent, was predicated on those timelines being long.

We could have hidden the AI usage. We could have padded timesheets. We could have maintained the fiction. Some of our competitors are doing exactly that. We chose not to.

Not because we're morally superior (we're not). But because we could see where it was heading. The arbitrage is too large and growing too fast. Clients will figure it out. Some already have. And when they do, the firms that were honest about the shift will be the ones they trust. The ones that hid it will be the ones they never work with again.

So we're making the transition. Painfully, imperfectly, but deliberately. We're restructuring our proposals around outcomes. We're investing in compliance capabilities that justify value-based pricing. We're training our team to think in specifications rather than code. We're building the processes and methodologies that allow us to commit to outcomes with confidence.

It's terrifying. It's also the most honest thing we've ever done.

Because the truth is, T&M was never honest. It was never fair. It was just familiar. And in an industry that claims to be about innovation, clinging to a broken model because it's familiar is the most inexcusable thing of all.

AI didn't kill Time & Materials.

AI just made it impossible to pretend it was ever alive.