Anthropic’s AI agent was the most prolific code contributor to Bun’s GitHub repository, submitting more merged pull requests than any human developer. Then Anthropic paid millions to acquire the human team anyway. The code was MIT-licensed; they could have forked it for free. Instead, they bought the people.
Everyone’s heard the line: “AI will write all the code; engineering as you know it is finished.”
Boards repeat it. CFOs love it. Some CTOs quietly use it to justify hiring freezes and stalled promotion paths.
The Bun acquisition blows a hole in that story.
Here’s a team whose project was open source, whose most active contributor was an AI agent, whose code Anthropic legally could have copied overnight. No negotiations. No equity. No retention packages.
Anthropic still fought competitors for the right to buy that team.
Publicly, AI companies talk like engineering is being automated away. Privately, they deploy millions of dollars to acquire engineers who already work with AI at full tilt. That contradiction is not a PR mistake. It is a signal.
The key constraint is obvious once you say it out loud. The bottleneck isn’t code production; it’s judgment.
Anthropic’s own announcement barely mentioned Bun’s existing codebase. It praised the team’s ability to rethink the JavaScript toolchain “from first principles.”
That’s investor-speak for: we’re paying for how these people think, what they choose not to build, which tradeoffs they make under pressure. They didn’t buy a pile of code. They bought a track record of correct calls in a complex, fast-moving domain.
AI drastically increases the volume of code you can generate. It does almost nothing to increase your supply of people who know which ten lines matter, which pull request should never ship, and which “clever” optimization will blow up your latency or quietly erode your reliability six months from now.
So when Anthropic’s own AI tops the contribution charts and they still decide the scarce asset is the human team, pay attention. That’s revealed preference.
Leaders don’t express their true beliefs in blog posts or conference quotes. They express them in hiring plans, acquisition targets, and compensation bands. If you want to understand what AI companies actually believe about engineering, follow the cap table, not the keynote.
So what do you do with this as a technical leader?
Stop using AI as an excuse to devalue your best knowledge workers. Use it to give them more leverage.
Treat AI as a force multiplier for your highest-judgment people: the ones who can design systems, navigate ambiguity, shape strategy, and smell risk before it hits. They’ll use AI to move faster, explore more options, and harden their decisions with better data.
Double down on developing judgment, not just syntax speed: architecture, performance modeling, incident response, security thinking, operational literacy. The skills Anthropic implicitly paid for when it bought a team famous for rethinking the stack, not just writing another bundler.
Be careful about starving your junior pipeline based on “coding is over” narratives. As AI pushes routine work down, the gap between senior and everyone else widens. Companies that maintain a healthy apprenticeship ladder will own the next generation of high-judgment engineers while everyone else hunts the same shrinking senior pool at auction.
Most important: calibrate your strategy to revealed preferences, not marketing copy. When someone’s AI writes more code than their engineers but they still pay millions for the engineers, believe the transaction, not the tweet.
Enjoyed this article? Subscribe to get weekly insights on AI, technology strategy, and leadership. Completely free.