TechReaderDaily.com
AI · Economics

OpenAI's $122 billion round, by the inference math: where every dollar goes

Of the $122B raised, $84B is committed to compute through 2028. We worked the unit economics from public cloud invoices, H200 contract pricing, and the internal projections that have leaked.

In this article
  1. The compute commitment, by the unit cost
  2. What I do not know

$122 billion. The headline number from the OpenAI round that closed on March 30. The check: $84B of it is committed to compute through 2028. Of the remaining $38B, $14B is research payroll, $9B is product surface area, and the final $15B is a combination of partnerships, indemnities, and the working-capital cushion that has run thin since the GPT-5 launch quarter.
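The line items above reconcile against the headline number with simple subtraction. A minimal sketch, using the article's reported figures; the split of the residual $15B across partnerships, indemnities, and working capital is not public:

```python
# Reconcile the round's headline number against the reported line items.
# All figures are from the article; nothing here is a primary source.
BILLION = 1e9

round_total = 122 * BILLION
compute     = 84 * BILLION   # committed to compute through 2028
payroll     = 14 * BILLION   # research payroll
product     = 9 * BILLION    # product surface area

# What is left for partnerships, indemnities, and working capital.
residual = round_total - compute - payroll - product
print(f"residual: ${residual / BILLION:.0f}B")  # prints "residual: $15B"
```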

These are not my numbers. They are the numbers that have been moving around in three sets of investor materials I have now seen versions of, two of them on the record. They reconcile to within 3% across the three documents. Here is the math.

The compute commitment, by the unit cost

H200 contract pricing for tier-one customers: $1.81/GPU-hour on a three-year reservation, $2.40 on twelve-month, $3.60 spot. OpenAI pays the three-year tier on most of its committed capacity. $84B / $1.81/h = 46.4 billion GPU-hours, or 5.3 million GPU-years. Spread across 2026-2028, that works out to roughly 1.8 million GPU-equivalents running continuously, and more than that at the 2028 peak under any realistic ramp. The footprint is consistent with the buildouts disclosed in the Stargate filings.
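The conversion from dollars to deployed hardware is three divisions. A sketch under the article's assumptions (the $1.81/GPU-hour tier-one rate is its reported figure, not a published rate card):

```python
# Convert the compute commitment into GPU-hours, GPU-years, and a
# continuous-deployment equivalent over the 2026-2028 window.
HOURS_PER_YEAR = 24 * 365  # 8,760

commit_usd = 84e9    # compute commitment through 2028
rate_3yr   = 1.81    # $/GPU-hour, three-year reservation (article's figure)

gpu_hours = commit_usd / rate_3yr          # ~46.4 billion GPU-hours
gpu_years = gpu_hours / HOURS_PER_YEAR     # ~5.3 million GPU-years
continuous_gpus = gpu_years / 3            # ~1.8M GPUs running flat out, 3 years

print(f"{gpu_hours / 1e9:.1f}B GPU-hours, {gpu_years / 1e6:.1f}M GPU-years, "
      f"{continuous_gpus / 1e6:.1f}M GPUs continuous")
```

Note the gap between GPU-years and simultaneous GPUs: 5.3 million GPU-years spent over three years is 1.8 million machines running around the clock, before any allowance for a ramp or for utilization below 100%.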

The per-token economics that fall out of those numbers are tighter than the ChatGPT pricing page suggests. At three-year reserved $1.81/GPU-h and current GPT-5.5 batch-32 throughput of ~410 tokens/sec/GPU on transformer-only inference, a GPU produces roughly 1.48 million output tokens per hour, which puts the marginal cost of an output token at about $1.23 per million. The list price is $15 per million. The gross margin is the headline.
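The marginal-cost figure is one division once the throughput is converted to tokens per GPU-hour. A sketch under the article's stated inputs; real serving adds prefill, KV-cache, and utilization overheads that this does not model:

```python
# Marginal output-token cost implied by the article's throughput figure.
# 410 tok/s/GPU is treated as aggregate batch-32 decode throughput.
rate_3yr    = 1.81   # $/GPU-hour, three-year reserved
tok_per_sec = 410    # output tokens/sec/GPU (article's figure)

tok_per_gpu_hour = tok_per_sec * 3600            # ~1.48M tokens per GPU-hour
cost_per_million = rate_3yr / (tok_per_gpu_hour / 1e6)

list_price = 15.0    # $/M output tokens
print(f"marginal cost ~${cost_per_million:.2f}/M vs ${list_price:.0f}/M list")
```

Even with the marginal cost at a dollar and change per million tokens, the spread against the $15 list price leaves the gross margin above 90% before the overheads kicked out of this sketch.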

$84B at three-year H200 contract pricing: 46.4 billion GPU-hours. The compute commitment is not a vibe; it is a spreadsheet.

What I do not know

Three numbers I cannot reconcile from public materials: the actual mix of H200 vs. Blackwell-Ultra inference (reportedly 60/40 by EOY 2026, but not from a primary source); the depreciation schedule the company is using for the chips it owns vs. the ones it leases (the difference is meaningful for the next earnings discussion); and the exact share of the $84B that flows through Microsoft's Azure billing rail vs. CoreWeave and Oracle. Everything else above is sourced or modelled with explicit assumptions. Watch for the EOY 10-K equivalent disclosure if the company moves to file.
