9 min read

Who Owns the Workbench

OpenAI buys Astral, three months after Anthropic bought Bun. The quiet colonization of the development stack has already begun, and it's not about open source.

There's a line on Hacker News that's been stuck in my head since last night, ever since I read the news about OpenAI acquiring Astral. It goes: "OpenAI and Anthropic are making plays to own the means of production in software." The means of production. It's one of those phrases that sounds a bit overblown the first time you read it. Then you sit with it. And you sit with it some more. And eventually you realize it might not be overblown enough.

Astral is the startup behind uv, Ruff, and ty—three tools that in just a few years have become load-bearing infrastructure for millions of Python developers. uv solved environment and dependency management with an elegance and speed that pip never had. Ruff made linting and formatting so fast they became invisible. ty is doing the same for type checking. They're written in Rust, all open source, all permissively licensed. And as of yesterday, all owned by OpenAI, which will fold the team into Codex, its coding agent that already counts over two million weekly active users.

Three months ago, in December, Anthropic made a similar move by acquiring Bun, the JavaScript runtime created by Jarred Sumner. Claude Code, their agentic coding tool that hit one billion dollars in annualized revenue in six months, ships as a Bun executable. Sumner put it with almost brutal clarity in his announcement post: "If Bun breaks, Claude Code breaks." So Anthropic bought the infrastructure that its billion-dollar product runs on.

Two acquisitions in three months, by the two companies fighting for dominance in AI-assisted coding. Look at them individually and each one makes sense. Look at them together and the pattern is unmistakable.

Here's what the pattern is: the companies building language models are buying, piece by piece, the toolchain those models operate on. Not the cloud. Not the chips. Not the data centers. The tools that sit between the model and the code. The package manager. The linter. The runtime. The type checker. Everything an AI agent needs to go beyond generating code and actually run it, test it, validate it, and push it back to production.

This isn't philanthropy toward open source. It's infrastructure strategy.

To understand what's happening, you have to start with something that's often underestimated: language models, on their own, generate code. But generating code is not building software. Anyone who's tried to have an agent write an entire project knows this—you end up with something that compiles but doesn't work, or works but doesn't follow the repo's conventions, or follows the conventions but breaks three tests nobody anticipated. Generation is five percent of the job. The other ninety-five percent is context: resolving dependencies, passing the linter, managing types, running tests, integrating into CI/CD, maintaining code over time as libraries update and requirements shift.
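To make that ninety-five percent concrete, here is a minimal sketch in Python of the kind of validation gauntlet it implies: generated code only counts as done once dependencies resolve, the linter and formatter pass, and the tests are green. The tool names (uv, Ruff, pytest) are the real CLIs discussed in this post; the loop itself is a hypothetical illustration, not any vendor's actual pipeline.

```python
import shutil
import subprocess

# The gates an agent might run after generating code, in order.
# uv, ruff, and pytest are real command-line tools; this particular
# sequence is an illustrative sketch, not OpenAI's or Anthropic's
# actual workflow.
GATES = [
    ["uv", "sync"],                      # resolve and install dependencies
    ["ruff", "check", "."],              # lint against the repo's rules
    ["ruff", "format", "--check", "."],  # verify formatting without rewriting
    ["pytest", "-q"],                    # run the test suite
]

def validate(repo_dir: str = ".") -> bool:
    """Run each gate in repo_dir; stop at the first failure, as an agent would."""
    for gate in GATES:
        if shutil.which(gate[0]) is None:
            # Skip tools that aren't installed in this environment.
            print(f"skipping {gate[0]!r}: not installed")
            continue
        if subprocess.run(gate, cwd=repo_dir).returncode != 0:
            print(f"gate failed: {' '.join(gate)}")
            return False
    return True
```

The strategic point of the acquisitions, seen through this lens: whoever owns the tools behind these gates can shape their output formats, error messages, and evolution around exactly this loop, rather than merely calling them from the outside.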

And here's the point. An AI agent that wants to handle that ninety-five percent needs to own, or at least control, the tools that compose it. It's not enough to know that uv exists. uv needs to respond to the agent's needs, be optimized for its workflows, evolve in the direction that makes the agent more effective. The same goes for Ruff, for ty, for Bun. When OpenAI writes in its announcement that the goal is to "move beyond AI that simply generates code and toward systems that can participate in the entire development workflow," that's not marketing. It's a precise description of why they bought Astral.

I see this every day in my work. I use Claude Code on Laravel projects, I manage CI/CD pipelines with GitHub Actions, and the difference between an agent that generates a file and an agent that understands the context that file has to live in is the difference between a useful tool and a toy. When the agent knows the rules of my linter, understands how my dependencies are structured, grasps my deployment flow—that's when it actually becomes a collaborator. And to be that kind of collaborator, it has to speak the language of the tools I use. If whoever builds the agent also owns those tools, the competitive advantage is enormous.

But this is exactly where things get interesting. And a little unsettling. Because what's emerging is a new kind of vendor lock-in, unlike anything we've seen before.

The old lock-in was explicit: you chose a cloud provider, you chose a proprietary database, and at some point migration became prohibitively expensive. You knew it, you did the math, you made the call. The new lock-in is different. It's implicit. You don't notice it happening because every single piece looks open source, looks free, looks neutral. uv is still MIT-licensed. Bun is still MIT-licensed. You can fork them, you can use them without Codex or Claude Code, you can do whatever you want. But the real question is different: two years from now, when eighty percent of uv's evolution is driven by Codex's needs, when the priority features are the ones that serve OpenAI's agent rather than you running uv from the terminal—will forking actually be a viable option? Simon Willison, who has one of the sharpest minds in the ecosystem, wrote yesterday that the worst-case scenario takes the shape of "fork and move on." But then he added, honestly, that OpenAI doesn't yet have a track record in maintaining acquired open source projects. And that an acquisition that starts as product-plus-talent can turn, over time, into a talent-only acquisition.

This is a point I keep coming back to. The code stays open, but the direction becomes closed. It's a form of control that doesn't violate any license, doesn't betray any explicit promise, and yet concentrates power in significant ways. Someone on Hacker News captured the dynamic well: "As they gobble up previously open software stacks, how viable is it that these stacks remain open?" Technical viability persists. Practical viability erodes.

I write this and I already hear the objections—the same ones I've made to myself. "But the incentives are aligned," many say. "Anthropic needs Bun to work well for everyone, not just Claude Code, because broad adoption creates network effects." And that's true, at least today. "The MIT license protects the community," others say. And that's true too, at least in theory. But I've seen enough acquisitions in my career to know that the promises made on announcement day come with an unwritten expiration date. Jarred Sumner's post was honest on this point: "No one can guarantee how motives, incentives, and decisions might change years down the line." And I'd add that the incentives of a startup with zero revenue and four years of runway ahead look very different from the incentives of a division inside a company that burns two and a half dollars for every dollar of revenue and needs to justify every acquisition to investors waiting for an IPO.

There's another dimension that strikes me, and it has to do with how these acquisitions are redefining competition itself. Until yesterday, the battle between OpenAI and Anthropic was fought on model quality. Who reasons better, who generates cleaner code, who has the longer context window. Today the battle is shifting to a different plane: who controls more surface area of the developer's workflow. It's no longer just "my model is better than yours." It's "my model works inside an ecosystem I own and that optimizes every step." Code generation becomes a commodity, and value migrates to orchestrating the entire cycle. Whoever owns the linter, the package manager, the runtime, and the coding agent has a structural advantage that no benchmark can capture.

The question I keep asking myself, and one that doesn't have a clear answer yet, is what all of this means for those working with stacks other than Python and JavaScript. Many teams haven't (yet) experienced this kind of concentration. But the pattern is clear, and it will expand. Today it's Python and JavaScript because those are the languages of AI and the web. Tomorrow it could be any ecosystem where coding agents need reliable tooling to operate autonomously. The race to acquire the building blocks of development infrastructure has only just begun.

An analogy comes to mind—maybe a bit strained, but it helps me think. For decades, oil was the fuel of the industrial economy, and whoever controlled the refineries and pipelines held more power than whoever extracted the crude. In software, language models are the crude. They produce output. But the real value lies in the refinery: the tools that take that output and turn it into working, tested, compliant, deployed software. The AI companies have figured this out. And they're buying the refineries.

And we're caught in the middle of this transition without having chosen it. We use uv because it's the best tool available. We use Bun because it's fast and solves real problems. Our choices are rational and individual. But the aggregate effect of millions of rational, individual choices is the concentration of infrastructural power in the hands of a few companies that didn't build those tools. They bought them.

I don't know where this leads. Maybe the incentives will stay aligned long enough to avoid real problems. Maybe the MIT license will prove to be sufficient protection. Maybe the market will remain competitive enough to prevent abuse. But maybe not. Maybe we're building a dependency we'll only recognize as such when it's too late to walk away gracefully.

What I do know is that choosing your package manager, your runtime, your linter has never been a purely technical decision. Today, whether you like it or not, it's also a political one. And like all political choices, it deserves to be made with open eyes.