Anthropic Locks 3.5 GW of Google TPU Capacity as Commercial AI Pre-Purchases Infrastructure Scientific Computing Will Need

Broadcom will supply Anthropic with 3.5 GW of Google TPU capacity through 2031, roughly 23-35x the power of DOE's largest planned science supercomputer.

The bottleneck that governs delivery: grid interconnection capacity that commercial AI and scientific computing now compete to access.

A Broadcom supply agreement commits multi-gigawatt chip capacity backed by $30 billion revenue run rate. The question is whether DOE supercomputer programs can access advanced packaging and grid infrastructure when commercial AI has already purchased it at scale.

Broadcom will supply Anthropic with approximately 3.5 gigawatts of Google Tensor Processing Unit capacity beginning in 2027, the chip company disclosed in an SEC filing April 6. The commitment is roughly 23 to 35 times the power of Solstice, the DOE Genesis Mission's largest planned science-oriented supercomputer, and surfaces the question of whether the U.S. government's scientific computing programs can access the same constrained resources when commercial AI is locking multi-year capacity commitments at unprecedented scale.

The Broadcom 8-K filing includes conditional language signaling infrastructure arrangements remain incomplete: "The consumption of such expanded AI compute capacity by Anthropic is dependent on Anthropic's continued commercial success. In connection with this deployment, the parties are in discussions with certain operational and financial partners." That contingency matters because the 3.5 GW commitment competes directly with Department of Energy Genesis Mission deployments and National Nuclear Security Administration stockpile stewardship systems for the same TSMC advanced packaging substrate, the same utility-scale grid connections, and the same multi-year transformer lead times.

Anthropic's self-reported annual revenue run rate now exceeds $30 billion (up from $14 billion in February and $1 billion in January 2025, all figures self-reported). That commercial revenue trajectory underwrites the commitment and positions Anthropic above OpenAI's estimated $25 billion run rate (per The Information and Sacra), though the comparison depends on matching methodologies across companies that disclose revenue differently.

Scale of the commitment relative to scientific computing infrastructure

Anthropic's 3.5 GW commitment represents approximately 23 to 35 times the power of Solstice, the Genesis Mission's largest planned science-oriented cluster. Solstice, scheduled for deployment at Argonne National Laboratory in 2027, will contain 100,000 Blackwell GPUs and draw approximately 100 to 150 megawatts, based on 1,000 to 1,200 watts per Blackwell GPU plus roughly 50 percent cooling and infrastructure overhead. It is expected to be the world's largest science AI supercomputer when operational.
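The arithmetic behind the 23x to 35x multiple can be checked in a few lines. This is a back-of-envelope sketch using the figures cited above; the 1,000 W per-GPU figure and 50 percent overhead are the low end of the stated assumptions, not disclosed measurements.

```python
MW = 1e6
GW = 1e9

# Solstice: 100,000 Blackwell GPUs at ~1,000 W each (low end of the
# cited 1,000-1,200 W range), plus ~50% cooling/infrastructure overhead
gpus = 100_000
chip_w = 1_000
overhead = 1.5
solstice_mw = gpus * chip_w * overhead / MW
print(f"Solstice estimate: {solstice_mw:.0f} MW")  # 150 MW, top of the cited 100-150 MW range

# Anthropic's commitment relative to the cited 100-150 MW Solstice range
anthropic_w = 3.5 * GW
multiple_low = anthropic_w / (150 * MW)
multiple_high = anthropic_w / (100 * MW)
print(f"Commitment is ~{multiple_low:.0f}x to {multiple_high:.0f}x Solstice")  # ~23x to 35x
```

The low end of the multiple uses Solstice's full 150 MW estimate; the high end uses the chips-only 100 MW figure, which is where the article's 23-to-35 range comes from.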

The entire DOE Genesis Mission and NNSA advanced computing pipeline together represents a fraction of the compute capacity Anthropic is now locking through 2031:

| System | Location | GPU/Chip Count | Power Draw (MW) | Timeline |
| --- | --- | --- | --- | --- |
| Equinox | Argonne National Laboratory | ~10,000 Blackwell GPUs | ~15 | 2026 |
| Lux | Oak Ridge National Laboratory | ~10,000 AMD GPUs | ~15 | 2026 |
| Solstice | Argonne National Laboratory | 100,000 Blackwell GPUs | 100-150 | 2027 |
| El Capitan | Lawrence Livermore National Laboratory | AMD (1.742 exaflops) | >20 | Deployed 2024 |
| ATS-5 | Los Alamos National Laboratory | TBD | <20 | Delivery late 2026/early 2027, deployment Aug-Sep 2027 |
| ATS-6 | Lawrence Livermore National Laboratory | TBD | TBD | ~2030 |

If DOE Genesis Mission or NNSA supercomputer procurement timelines slip beyond stated 2026-2027 deployment targets while Anthropic's TPU deployment proceeds on schedule, the evidence will confirm that commercial AI has captured advanced packaging and power infrastructure priority over scientific computing. Both compete for the same bottlenecks: TSMC CoWoS advanced packaging capacity, utility-scale grid interconnection slots with multi-year lead times, and HBM3e memory supply. The U.S. government's science and national security computing priorities now compete with commercial AI companies that pre-purchase capacity at venture speed, backed by commercial revenue, while government programs operate on appropriations cycles under subject-to-available-appropriations constraints.

What Google has not disclosed

Unlike AWS's Project Rainier, where Amazon disclosed the multi-state facility locations (Pennsylvania, Indiana, Mississippi) and $11 billion Indiana investment for its Trainium2 deployment, Google has not disclosed where the 3.5 GW of TPU capacity will be physically located or what grid interconnection arrangements have been secured. AWS's Indiana site spans 1,200 acres and will ultimately draw 2.2 GW. Amazon deployed nearly 500,000 Trainium2 chips for Anthropic in October 2025 as part of Project Rainier and separately committed 2 GW of Trainium capacity to OpenAI. Those commitments came with named locations, disclosed investment figures, and operational grid connections.

Google's disclosure gap is not limited to facility locations. The Broadcom filing does not state which TPU generation (v6, v7, or future architecture) the 3.5 GW commitment covers, though analyst coverage assumes TPU v7 based on timeline. The announcement does not specify the fabrication node, while NVIDIA publicly discloses that H100 uses TSMC 4N and H200 uses the same process. No named customer deployments or benchmarks are provided. The 8-K language "the parties are in discussions with certain operational and financial partners" suggests infrastructure arrangements for grid interconnection, transformer capacity, and substation upgrades remain incomplete.

Utility-scale grid interconnection at 3.5 GW requires power purchase agreements or direct utility contracts typically executed two to three years before load comes online. Google has general power purchase agreements totaling several gigawatts across various projects, including 1.17 GW with Clearway in Missouri, Texas, and West Virginia (disclosed in 2024), 1 GW with TotalEnergies in Texas (disclosed in 2024), and 1.5 TWh in Ohio (disclosed in 2024), but none have been specifically identified for the Anthropic TPU capacity deployment.

The physical constraints that govern delivery

Based on Fubon Securities estimates of 850 to 1,000 watts per chip and 64 chips per rack, each TPU v7 rack draws 80 to 100 kilowatts once memory, interconnects, and distribution losses are included. At that rate, 3.5 GW represents 35,000 to 43,750 racks. XPU.pub independently calculated approximately 1 kilowatt per chip from a Google statement that 9,216 chips need roughly 10 MW; the two estimates corroborate the 80 to 100 kW per-rack figure.
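The rack-count range follows directly from those per-rack estimates. A quick sketch, using only the figures cited above:

```python
total_w = 3.5e9  # Anthropic's 3.5 GW commitment in watts

# Fubon-derived per-rack draw: 64 chips at 850-1,000 W each, plus memory,
# interconnect, and distribution losses -> ~80-100 kW per rack
rack_w_low, rack_w_high = 80e3, 100e3

racks_high = total_w / rack_w_low    # 43,750 racks at 80 kW each
racks_low = total_w / rack_w_high    # 35,000 racks at 100 kW each
print(f"{racks_low:,.0f} to {racks_high:,.0f} racks")

# Cross-check against the Google figure XPU.pub used:
# 9,216 chips drawing roughly 10 MW
per_chip_w = 10e6 / 9_216
print(f"~{per_chip_w:.0f} W per chip")  # ~1,085 W, consistent with the ~1 kW/chip estimate
```

Note the inversion: the lower per-rack power bound yields the higher rack count, since more racks are needed to reach 3.5 GW at lower density.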

Google is deploying +/-400 VDC power distribution with battery backup moved outside the IT rack to support up to 1 MW per rack, infrastructure that requires multi-year lead times for transformer capacity and grid interconnection. JLL's January 2026 Global Data Center Outlook identifies structural constraints including limited grid capacity, long interconnection queues, and multi-year lead times for heavy electrical equipment as the primary factors delaying projects. As of April 2026, some projects that secured land and financing in 2024-2025 remain stalled waiting for grid connections. Gartner's October 2024 report "Emerging Tech: Power Shortages Will Restrict GenAI Growth and Implementation" predicts 40 percent of existing AI data centers will be operationally constrained by power availability by 2027.

If Google does not announce grid interconnection agreements or named data center locations for the 3.5 GW Anthropic TPU deployment by September 2026, the 2027 online date will be at risk given typical 18 to 24 month interconnection timelines. Broadcom may deliver chips that cannot be powered.

The second binding constraint is TSMC's advanced packaging capacity. Google TPU, NVIDIA H100 and H200, AMD MI300, and AWS Graviton all compete for the same CoWoS substrate capacity. Fubon estimates total TPU production at 3.1 to 3.2 million units in 2026, constrained primarily by advanced packaging. If TSMC's CoWoS capacity does not expand beyond current levels by mid-2027, either Google TPU v7 shipments, NVIDIA H200 shipments, or AMD MI300 shipments will miss volume targets. The substrate capacity cannot serve all three at projected scale simultaneously. TSMC earnings calls in Q2-Q3 2027 with CoWoS capacity disclosures will test whether chip vendors' public commitments are physically achievable or whether they are demand-signaling announcements that assume competitor delays.

Broadcom CEO Hock Tan noted on the company's Q1 FY2026 earnings call (March 5, 2026) that Anthropic was consuming approximately 1 GW of compute in 2026. The 3.5 GW commitment more than triples that capacity.

What this means for research computing procurement and grid access

The consequence for research computing directors and procurement officers at national labs is immediate. If hyperscalers secure multi-gigawatt commitments backed by commercial revenue while national labs face three-to-five-year interconnection queues and appropriations-constrained budgets, then procurement timelines and siting decisions for scientific computing infrastructure must account for the possibility that advanced packaging capacity and grid infrastructure have already been allocated to commercial AI through 2031.

The competition is not over chip architecture. NVIDIA maintains dominance in shipped AI accelerators with H100 and H200 deployed at 700W TDP per GPU. The differentiator is vertical integration of chip design (Google TPU architecture), ASIC implementation (Broadcom design services and SerDes), fabrication (TSMC), and cloud infrastructure (Google Cloud). Broadcom's role as Google's exclusive TPU implementation partner creates a single-source dependency that is both a strategic advantage (tight integration, IP control) and a potential bottleneck (no second source for surge capacity).

Anthropic's self-reported $30 billion run rate has overtaken OpenAI's estimated $25 billion, subject to the methodology caveats noted above. AWS already deployed Project Rainier with nearly 500,000 Trainium2 chips for Anthropic's Claude training in October 2025. The new TPU commitment adds Google infrastructure to Anthropic's multi-cloud strategy while AWS separately committed 2 GW of Trainium capacity to OpenAI. Meta's RSC infrastructure remains largely undisclosed in competitive intelligence.

The question the Broadcom filing surfaces is not whether commercial AI can afford this infrastructure. Anthropic's revenue trajectory demonstrates commercial AI can pay for it. The question is whether the physical infrastructure can be built fast enough to meet both commercial AI commitments and scientific computing mission requirements, or whether commercial AI has already purchased the capacity scientific computing programs assumed would be available when procurement cycles closed.

Bottom line

Anthropic's 3.5 GW TPU commitment through 2031, backed by a $30 billion revenue run rate, tests whether grid infrastructure and advanced packaging capacity can serve both commercial AI and scientific computing at the scale both sectors now require. If Google announces grid interconnection agreements and facility locations by Q3 2026, the commitment becomes operationally credible. If those disclosures do not materialize, the supply agreement timeline is at risk regardless of chip delivery. If DOE Genesis Mission or NNSA procurement timelines slip while commercial AI deployments proceed on schedule, the resource allocation question is answered with evidence: commercial revenue commands infrastructure priority over public science mission when both compete for the same constrained supply chain. Either outcome will clarify whether the AI infrastructure buildout is creating capacity scientific computing can access or whether commercial AI has pre-purchased the resource entirely.

What to watch

Grid interconnection and facility disclosure by Q3 2026. If Google does not announce named data center locations or utility agreements for the 3.5 GW Anthropic TPU deployment by September 30, 2026, the 2027 online date becomes questionable. Grid interconnection at that scale requires contracts and construction executed well in advance of chip deployment.

TSMC CoWoS capacity expansion by mid-2027. TSMC earnings calls in Q2-Q3 2027 will disclose whether advanced packaging substrate capacity has expanded enough to serve Google TPU v7, NVIDIA H200, and AMD MI300 volume shipments simultaneously. If capacity remains at current Fubon-estimated levels, at least one vendor will miss volume targets.

DOE Genesis Mission and NNSA procurement timelines through end of 2027. If government supercomputer deployments slip beyond stated 2026-2027 targets while Anthropic's TPU capacity comes online on schedule, the evidence will confirm that commercial AI captured advanced packaging and power infrastructure priority over scientific computing programs competing for the same supply chain bottlenecks.

🤖 AI Disclosure

AI-assisted research and first draft. This article has been verified by a human editor.