Anthropic SpaceX Deal Caps Spring of AI Compute Partnerships
In just three weeks, four landmark agreements, including Anthropic's SpaceX deal, have reshaped the foundation-model compute market until it bears little resemblance to the cloud market of 2024.
By the time Jon Markman's story made the rounds, Slack channels at Anthropic had already filled with the same question — not why SpaceX, but what took so long. The deal gives Anthropic the full compute capacity of Colossus 1, SpaceX's supercomputer facility, deploying more than 300 megawatts of additional power for the models that run Claude. A rocket company that lands boosters on drone ships is now, as of this week, also a cloud provider to one of the world's three most important AI labs.
The SpaceX agreement was not the largest compute commitment Anthropic made this spring. It was not even the third-largest. What it was, to anyone watching the power map of AI infrastructure, was a signal that the map had been redrawn. In the preceding three weeks, Anthropic had committed more than $100 billion to Amazon Web Services over the next decade, signed a deal that will route up to $40 billion from Google into its operations, and locked in 3.5 gigawatts of next-generation TPU capacity through Broadcom beginning in 2027. The SpaceX deal, by comparison, was almost modest — a single data center, a single supercomputer, a single contract. But it was the one that made clear that the category "compute partnership" no longer meant what it meant in 2024.
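The power figures scattered across these deals can be made concrete with rough arithmetic. A minimal sketch, assuming a hypothetical ~1.2 kW of total facility power per deployed accelerator (the constant and the per-chip overhead are illustrative assumptions, not figures from the filings):

```python
# Back-of-envelope: how many accelerators do these power figures imply?
# Assumption (hypothetical): ~1.2 kW of facility power per accelerator,
# including cooling, networking, and host overhead.
KW_PER_ACCELERATOR = 1.2

def accelerators_for(megawatts: float) -> int:
    """Rough accelerator count a given facility power budget supports."""
    return int(megawatts * 1000 / KW_PER_ACCELERATOR)

for label, mw in [("Colossus 1 (SpaceX deal)", 300),
                  ("Broadcom TPU commitment", 3500)]:
    print(f"{label}: ~{accelerators_for(mw):,} accelerators at {mw} MW")
```

Even under generous overhead assumptions, 300 megawatts is on the order of a quarter-million accelerators, and 3.5 gigawatts is roughly an order of magnitude beyond that, which is why the Broadcom filing matters more than its press coverage suggested.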
The spring of 2026 will be remembered inside AI labs as the season when compute ceased to be something you negotiated with a hyperscaler and became something you assembled from every available surface area of the global energy grid. On April 21, Amazon and Anthropic deepened their partnership with a fresh $5 billion investment and a commitment from Anthropic to spend over $100 billion on AWS technologies across ten years, a deal that made Amazon the lab's largest cloud landlord and its largest creditor in a single stroke. Three days later, on April 24, Google answered with a commitment of its own: up to $40 billion into Anthropic, with an agreement for the lab to spend $200 billion on Google Cloud services and TPU chips over five years, according to reporting from CNBC and Bloomberg. Two cloud giants, one AI lab, a combined quarter-trillion dollars in committed spend — and the spring was not yet over.
A week before the Google news, Broadcom had filed a document with the U.S. Securities and Exchange Commission that received far less attention than the dollar figures but may matter more to anyone trying to understand where the compute actually comes from. The filing disclosed that "Anthropic, beginning in 2027, will access through Broadcom approximately 3.5 gigawatts as part of the multiple gigawatts of next-generation TPU-based AI compute capacity committed by Anthropic." Mark Haranas, reporting for CRN, surfaced the document on April 7, and it landed inside the labs with the quiet thud of a structural shift. TPUs — Google's custom AI accelerators — were being routed to Anthropic through a third-party semiconductor giant, a deal architecture that would have been unthinkable when Google and Anthropic first sat down to negotiate their original cloud partnership. Broadcom's CEO Hock Tan had already signaled the company's appetite for a larger role in the AI supply chain. The SEC filing made it contractual.
Krishna Rao, Anthropic's chief financial officer, put a number on the urgency in a blog post accompanying the Broadcom disclosure. The company's annual revenue run rate had crossed $30 billion, a tripling since the close of 2025. "We are making our most significant compute commitment to date to keep pace with our unprecedented growth," Rao wrote, describing the Google-Broadcom arrangement as a "groundbreaking partnership" and a "continuation of our disciplined approach to scaling infrastructure." The phrase "disciplined approach" did not go unnoticed by infrastructure analysts. In a world where foundation-model labs had burned through cash reserves faster than any software company in history, Anthropic's CFO was describing a three-front compute expansion — Amazon, Google, and now SpaceX — as a matter of fiscal restraint.
"This groundbreaking partnership with Google and Broadcom is a continuation of our disciplined approach to scaling infrastructure: We are building the capacity necessary to serve the exponential growth we have seen in our customer base while also enabling Claude to define the frontier of AI development."

— Krishna Rao, CFO, Anthropic
The context that makes the word "disciplined" legible is the restructuring happening simultaneously at OpenAI. On April 27, OpenAI and Microsoft announced they were overhauling the agreement that had been central to OpenAI's for-profit model, ending Microsoft's exclusive rights to OpenAI's models and allowing the lab to sell its products on AWS and other cloud providers. Microsoft, in return, secured royalty-free access to OpenAI's technology through 2032. The outcome, as described by Maria Deutscher at SiliconANGLE, was a multi-cloud posture that mirrored where Anthropic had already arrived — but from the opposite direction. Anthropic had assembled its partnerships outward from a multi-cloud starting point. OpenAI had to dismantle exclusivity to get there.
What the org charts reveal is a market structure that no longer conforms to the tidy categories of 2023, when an AI lab picked a primary cloud, raised a funding round from that cloud's parent company, and the circle was closed. Today, Anthropic's compute comes from Amazon, Google, Broadcom, and SpaceX. Its investors include Amazon and Google, which are also its suppliers. OpenAI runs on Microsoft, but also on AWS. Google invests in Anthropic while competing against it with Gemini. The org chart looks less like a supply chain and more like a loom — warp threads of capital, weft threads of compute, everyone's balance sheet tangled with everyone else's.
The SpaceX deal punctures the loom. Unlike Amazon and Google, SpaceX is not investing in Anthropic. It is not offering cloud credits in exchange for equity. It is selling raw compute capacity from a facility originally built for a different purpose — Colossus 1 was designed, according to reporting from The Wall Street Journal, to serve SpaceX's internal AI workloads and those of xAI, Elon Musk's own AI lab. "We've agreed to a partnership with SpaceX that will substantially increase our compute capacity," an Anthropic spokesperson told Bloomberg. The language was spare, but the subtext was rich: a lab that had reportedly been constrained by Claude usage limits during peak demand periods was now tapping a supercomputer controlled by a company with no prior history as a neutral infrastructure provider. It was the cheapest signal yet that capacity, not price, had become the binding constraint.
What the reorg actually does
The simplest way to understand the spring 2026 cascade is by looking at a calendar. On April 7, the Broadcom SEC filing. On April 21, the Amazon commitment. On April 24, the Google investment. On May 6, the SpaceX deal. In twenty-nine days, Anthropic announced compute partnerships that collectively span four distinct hardware stacks — AWS Trainium, Google TPU, NVIDIA GPUs inside Colossus 1, and Broadcom's custom networking silicon — and three fundamentally different business relationships. The Amazon deal is a customer-vendor relationship with a capital investment attached. The Google deal is a strategic investment with a supply agreement routed through a semiconductor partner. The SpaceX deal is a pure capacity lease, no equity, no long-term cloud contract, no chip co-development. A compute procurement team that could execute all three simultaneously was not a thing any AI lab had built in 2024.
This operational complexity maps onto a deeper shift in who inside the labs is putting their reputation on which deadline. Three years ago, the most consequential person in an AI lab's compute chain was the VP of engineering who negotiated the cloud contract. Today, according to three current and two former lab employees who have been involved in infrastructure decisions, the weight has shifted to the technical program managers who coordinate across providers and the post-training leads who certify that a model will run identically regardless of which silicon it lands on. The CFO signs the contract, but the person who wakes up at 4 a.m. when a TPU cluster in Iowa goes down and the inference traffic has to fail over to an AWS region in Oregon is the one whose reputation is on the line.
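The 4 a.m. failover scenario can be sketched in code. This is a hypothetical illustration of priority-ordered traffic routing across providers, not any lab's actual logic; the provider names, capacities, and health flags are all invented for the example:

```python
from dataclasses import dataclass

@dataclass
class Provider:
    """One compute provider in a multi-cloud inference fleet (hypothetical)."""
    name: str
    healthy: bool = True
    capacity_qps: int = 0  # queries per second this provider can absorb

def route_inference(providers: list[Provider], demand_qps: int) -> dict[str, int]:
    """Greedily assign inference traffic to healthy providers in priority order."""
    assignment: dict[str, int] = {}
    remaining = demand_qps
    for p in providers:
        if not p.healthy or remaining <= 0:
            continue
        share = min(p.capacity_qps, remaining)
        if share > 0:
            assignment[p.name] = share
            remaining -= share
    if remaining > 0:
        assignment["unserved"] = remaining  # traffic that hits rate limits
    return assignment

# Scenario from the text: a TPU cluster in Iowa goes down, and traffic
# fails over to an AWS region in Oregon, then spills into leased capacity.
fleet = [
    Provider("gcp-tpu-iowa", healthy=False, capacity_qps=50_000),
    Provider("aws-trn-oregon", healthy=True, capacity_qps=40_000),
    Provider("colossus-1", healthy=True, capacity_qps=30_000),
]
print(route_inference(fleet, 60_000))
```

The sketch makes the operational point concrete: when a provider drops out, someone has to have already decided the priority order, verified the model runs identically on the fallback silicon, and sized the spillover capacity, and none of that appears in a press release.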
What to watch for
The other signal threaded through the spring announcements is less about compute and more about geography. Broadcom's SEC filing noted that "the vast majority" of new infrastructure would be built in the United States. The SpaceX deal keeps capacity on American soil. Amazon and Google have both accelerated their U.S. data center builds. This is partly a supply-chain reality — it is faster to connect a new data center to a domestic power grid than to navigate the permitting processes in, say, northern Europe — but it is also a bet that the regulatory environment for AI compute will remain more permissive in the U.S. than in the European Union or China.
The MIT-IBM Computing Research Lab, announced on April 29 and covered by WinBuzzer's team the following day, represents a different flavor of the same trend — a research institution and a legacy computing giant extending their AI partnership into quantum computing, betting that the next wave of compute scarcity will be in qubits rather than floating-point operations. The announcement drew less attention than the billion-dollar cloud deals, but its timing was not coincidental. When the hyperscaler-labs partnerships are locking in classical compute through 2035, the research labs have to look further out.
And then there is the most improbable signal of all: Allbirds. On April 15, the sustainable-shoe company announced it was pivoting into AI compute infrastructure, rebranding as NewBird AI and positioning itself as a GPU-as-a-Service provider. Brian Barrett at WIRED captured the absurdity with the right tone — "Sure, why not" — but the market did not treat it as a joke. Allbirds shares surged 355 percent on the day of the announcement. A shoemaker could credibly rebrand itself as a compute company because the demand for AI infrastructure had become so voracious, and so price-insensitive, that even a $50 million convertible note and a leased data center looked like a viable entry ticket.
The Allbirds pivot, the Broadcom TPU routing, the SpaceX capacity lease — each represents a different answer to the same underlying question: who gets to sell compute to an AI industry that cannot get enough of it? The hyperscalers thought they had locked in the answer with long-term contracts and equity investments. The neoclouds that emerged in 2025, companies like Nscale and CoreWeave, proved that there was room for GPU-specialist intermediaries. SpaceX's entry suggests that the room may be larger than anyone anticipated, and that the limiting factor is not cooling capacity or fiber backbone but simply the willingness of a company that owns a lot of electricity and a lot of chips to rent them out.
The cheapest signal that this strategy is working will not be a press release. It will be a status page. When Claude Pro and Claude Max subscribers stop hitting rate limits during weekday afternoons, when the little orange "capacity constrained" banner disappears from the API dashboard and does not return, that will be the moment the spring deals translate from contracts into compute. Until then, the labs are racing to build a multi-provider infrastructure that no one has operated at this scale before, on timelines that compress what used to be five-year data center build-outs into eighteen-month sprints. The people inside these labs who are placing their reputations on those deadlines know something the press releases do not say: the hardest part of a compute partnership is not signing it. It is turning it on.
One calendar entry worth watching: the second quarter of 2027, when Broadcom's 3.5 gigawatts of TPU capacity is supposed to come online for Anthropic. If that deadline holds, it will be the first major test of whether the multi-provider loom can actually deliver on the scale the spring announcements promised. If it slips — and every data center build-out in the history of AI infrastructure has slipped — the labs will discover whether the capacity they leased from a rocket company can fill the gap. Colossus 1 was not supposed to be an insurance policy. But in a market where compute is assembled from every available surface, an insurance policy is exactly what it might become.