Power has replaced transistor density as the binding constraint on computing growth. The consequences are reshaping everything from real estate markets to geopolitics.

JLL’s 2026 Global Data Center Outlook contains a number that should stop anyone in the computing industry cold: 100 gigawatts of new data center capacity is projected to come online between 2026 and 2030. That’s $1.2 trillion in real estate value alone, before you account for the $1–2 trillion in IT equipment going inside those buildings.
One hundred gigawatts. For reference, the entire US nuclear fleet generates about 95 gigawatts. We’re building the equivalent of the country’s nuclear power infrastructure, from scratch, in five years, to run AI workloads.
This isn’t a forecast from a consulting firm chasing fees. It’s a count of facilities already in planning, permitting, or construction. JLL tracks 770 future hyperscale facilities in the pipeline. Wells Fargo’s infrastructure team projects hyperscaler capacity doubling from 49 gigawatts in 2025 to 98 gigawatts by 2027. The top five hyperscalers alone have committed $710 billion in capex for 2026, supporting 35 gigawatts of new or refreshed capacity.
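To put those numbers on a common footing, here’s a quick back-of-envelope calculation using only the figures quoted above: the implied all-in cost per gigawatt, and the growth rate behind the projected doubling.

```python
# Back-of-envelope math using only the figures quoted in this article.

capex_2026_usd = 710e9   # top-five hyperscaler capex commitment for 2026
capacity_added_gw = 35   # new or refreshed capacity that capex supports

cost_per_gw = capex_2026_usd / capacity_added_gw
print(f"Implied all-in cost: ${cost_per_gw / 1e9:.1f}B per gigawatt")
# -> roughly $20B per gigawatt of new or refreshed capacity

# Wells Fargo projection: 49 GW (2025) doubling to 98 GW (2027)
start_gw, end_gw, years = 49, 98, 2.0
annual_growth = (end_gw / start_gw) ** (1 / years) - 1
print(f"Implied growth rate: {annual_growth:.0%} per year")
# -> about 41% per year, i.e., capacity doubling roughly every two years
```

At roughly $20 billion per gigawatt all-in, a 100-gigawatt pipeline implies on the order of $2 trillion in total spend, the same order of magnitude as JLL’s $1.2 trillion in real estate plus $1–2 trillion in IT equipment.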
The constraint on computing has shifted. It’s no longer about how many transistors you can fit on a chip. It’s about how many watts you can deliver to a building.
For fifty years, the supercomputing community lived by Moore’s Law: transistor density doubles every two years, performance follows. That cadence shaped processor design, system architecture, facility planning, and career trajectories. When your performance roadmap is driven by lithography improvements, you plan around foundry timelines.
The new cadence is driven by power. How fast can you get grid interconnection approval? How quickly can a natural gas plant be built? Can you get a nuclear site permit in less than a decade? These are bureaucratic, regulatory, and geological questions, and they now determine computing growth more directly than any semiconductor roadmap.
NVIDIA’s Vera Rubin platform delivers roughly 5x the inference performance of Blackwell. But each NVL72 rack still draws on the order of 100 kilowatts or more. Faster chips don’t reduce the power problem; they make it worse, because the performance gains drive demand for more racks, not fewer. Every generation of AI silicon that delivers better performance per watt gets deployed at higher total wattage, because the workloads expand to consume whatever capacity exists.
That’s the paradox at the center of the data center supercycle: efficiency improvements don’t reduce power consumption. They increase it. Jevons paradox, applied to AI infrastructure. Peer-reviewed research has confirmed this dynamic holds for AI specifically: as AI becomes cheaper and more accessible, total usage explodes, overwhelming individual efficiency gains.
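To make the arithmetic concrete, here is a minimal sketch of the dynamic with illustrative numbers; the 5x efficiency gain echoes the Rubin-over-Blackwell figure above, while the demand multiplier is a pure assumption standing in for the elasticity the research describes.

```python
# A minimal sketch of Jevons paradox applied to AI inference.
# The demand multiplier below is an illustrative assumption, not measured data.

efficiency_gain = 5.0    # new silicon: 5x performance per watt
demand_growth = 12.0     # assumed: tokens served grows 12x as costs fall

energy_per_token = 1.0 / efficiency_gain
total_energy = demand_growth / efficiency_gain   # relative to baseline

print(f"Energy per token: {energy_per_token:.2f}x baseline")  # 0.20x
print(f"Total energy:     {total_energy:.2f}x baseline")      # 2.40x

# Per-unit efficiency improved 5x, yet aggregate consumption rose 2.4x.
# Aggregate power falls only if demand grows more slowly than efficiency
# improves -- the opposite of what the research cited above observes.
```

The crossover condition is simple: total power rises whenever demand growth outpaces efficiency gains, and so far in AI it always has.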
The data center geography is shifting, and power availability is the primary reason. Virginia’s Loudoun County has been the global epicenter of data center construction for two decades, anchored by the MAE-East internet exchange and proximity to federal customers. But Virginia’s power grid is hitting limits.
JLL’s North America report puts it directly: Texas is preparing to dethrone Virginia as the global data center leader. The reason is power. Texas has abundant natural gas, a deregulated electricity market, and available land. Critically, it also has a grid operator (ERCOT) that, despite its troubles during Winter Storm Uri in 2021, moves faster on interconnection approvals than PJM, the regional transmission organization serving Virginia and the mid-Atlantic.
PJM is already developing “connect-and-manage” rules that would curtail data centers that don’t bring their own power supply. The grid operator serving the world’s current data center capital is telling new facilities they may face power curtailment unless they arrange their own generation. That’s not a market signal. It’s an alarm.
Other emerging markets: central Ohio (cheap power, available land, and an accommodating utility in AEP), the Nordics (renewable power, cool climate), and parts of the Middle East and Southeast Asia, where sovereign wealth funds are bankrolling data center construction with attached power generation.
Building 100 gigawatts of data center capacity requires physical materials at a scale that collides with mining economics and geopolitics.
Copper. Each megawatt of data center capacity requires roughly 20–40 tons of copper for power distribution, grounding, and networking. A hundred gigawatts means 2–4 million metric tons of copper. Global copper production is approximately 22 million metric tons per year, and data centers are competing with electric vehicle manufacturing, grid upgrades, and construction for that supply (the arithmetic is sketched after this list).
Silver. Power electronics, solar panels (for on-site generation), and some networking components use silver. Data center demand is a growing fraction of industrial silver consumption.
Rare earth elements. Permanent magnets in cooling system motors, UPS flywheels, and generator sets require neodymium and dysprosium. The supply chain for these materials runs predominantly through China.
Palladium and platinum. Used in hydrogen fuel cells (a growing backup power technology for data centers) and certain electronic components.
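Copper is the one line item above with enough public numbers to check. A minimal sketch, using only the per-megawatt intensity and production figures already quoted:

```python
# Copper arithmetic from the figures quoted above.

capacity_mw = 100_000             # 100 GW of new data center capacity
copper_t_per_mw = (20, 40)        # metric tons per megawatt, quoted range
global_production_t = 22_000_000  # approximate annual global copper output

low = capacity_mw * copper_t_per_mw[0]
high = capacity_mw * copper_t_per_mw[1]
print(f"Copper required: {low / 1e6:.0f}-{high / 1e6:.0f} million metric tons")
print(f"Share of one year's output: {low / global_production_t:.0%}"
      f"-{high / global_production_t:.0%}")
# -> 2-4 million tons, roughly 9-18% of a single year's world production,
#    spread over the five-year buildout and competing with EVs, grid
#    upgrades, and construction for the same supply.
```

Even at the low end, that is a multi-point claim on annual global copper production for a single industry’s buildout.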
The materials supply chain for the data center supercycle isn’t a future problem. It’s a present constraint that’s already showing up in procurement timelines. Lead times for high-voltage switchgear have stretched from months to over a year. Transformer procurement is running 18–24 months in many markets.
In February, hyperscalers signed a White House pledge to fund power grid upgrades associated with their data center construction. Amazon, Google, Meta, Microsoft, OpenAI, Oracle, and xAI signed the Ratepayer Protection Pledge, committing to build, bring, or buy their own power generation rather than pass infrastructure costs to ratepayers. This is a meaningful shift. Historically, the cost of grid infrastructure—transmission lines, substations, transformers—has been socialized across all ratepayers. The pledge acknowledges that data center load growth is concentrated enough, and growing fast enough, that the traditional cost-sharing model doesn’t work.
What’s emerging is a hybrid model where hyperscalers function as quasi-utilities. They’re signing power purchase agreements directly with generators. They’re building on-site generation (natural gas, nuclear microreactors, large-scale solar). They’re funding transmission infrastructure that benefits the broader grid but is motivated by their own load requirements.
Microsoft’s deal with Constellation Energy to restart Three Mile Island’s Unit 1 reactor was the first high-profile example. Amazon’s partnership with Talen Energy at the Susquehanna nuclear plant followed. Google has signed agreements with multiple nuclear developers, including Kairos Power for small modular reactors.
These deals make economic sense for the hyperscalers: reliable, carbon-free baseload power at predictable prices. But they also create a two-tier power market. Hyperscalers with deep pockets get priority access to the best power sources. Everyone else competes for what’s left.
The data center supercycle has specific implications for the traditional HPC community: national labs, universities, and research institutions that have historically operated the world’s most powerful computers.
Power allocation. If grid operators implement connect-and-manage rules that prioritize facilities with their own generation, publicly funded HPC centers, which typically draw from the grid without dedicated generation, could face curtailment risk. Oak Ridge draws from the Tennessee Valley Authority grid. Argonne draws from the PJM grid. Neither has dedicated generation at the scale the hyperscalers are building.
Talent competition. The data center supercycle is creating massive demand for electrical engineers, cooling system specialists, facility operators, and construction trades. These are the same skills needed to build and operate HPC facilities. National labs and universities already struggle to compete with private-sector compensation; the supercycle widens the gap.
Supply chain priority. When a hyperscaler orders 50,000 GPUs, that order gets priority over a national lab ordering 5,000. NVIDIA has been relatively good about maintaining allocation for public-sector customers, but the pressure to prioritize revenue from commercial buyers is real and growing.
The data center supercycle isn’t bad for HPC. The innovations it drives (better cooling, more efficient power distribution, advanced packaging) will benefit supercomputing. But the scale differential is accelerating. The gap between the largest private AI installation and the largest public supercomputer is growing, not shrinking.
A hundred gigawatts of new capacity gets built only if the economics hold. If AI inference demand grows as projected, if agentic AI workloads consume compute at the rates analysts predict, if enterprise adoption accelerates, then the buildout looks prescient. The power gets consumed. The facilities fill up. The materials get used.
If demand disappoints, we’re left with a lot of empty buildings connected to a lot of power that could have been used for other things. And a lot of grid infrastructure investment that ratepayers will be paying for regardless.
The HPC community has seen infrastructure bubbles before. The fiber optic buildout of the late 1990s created massive overcapacity that took a decade to absorb. But it also created the physical infrastructure that made cloud computing possible. Even overcapacity can be productive, given enough time.
The difference this time: the power consumption is permanent. A dark fiber cable doesn’t consume energy. A powered-down data center still has property taxes, maintenance costs, and grid interconnection fees. The downside case for the 100-gigawatt supercycle isn’t wasted fiber. It’s wasted watts and stranded capital at a scale the industry has never contemplated.
Twelve quarters to double hyperscale capacity. One hundred gigawatts of new construction. The equivalent of a nation’s nuclear fleet, built in five years. That’s the bet the computing industry is making. The power bill comes due regardless of whether AI delivers.