Supercomputing News
Artificial Intelligence · Analysis

Nebius's $50 Billion Backlog Sells Out. Public Science Gets None of It.

Nebius's $50B backlog: 94% to Meta and Microsoft, zero to NAIRR, CloudBank, or DOE Genesis. The largest neocloud sells out before science can access it.

Gigawatt-scale AI infrastructure is being built for commercial customers at unprecedented pace. Nebius alone holds more than 3.5 gigawatts of contracted power and is targeting more than 4 gigawatts by year-end 2026. AI-generated illustration; not a depiction of a specific Nebius facility. (AI-generated / SCN)
SCN Staff, Staff Editor
Published May 13, 2026

Nebius Group N.V. has contracted approximately $50 billion in customer backlog. Meta Platforms accounts for $27 billion over five years. Microsoft accounts for $19.4 billion. NVIDIA took a $2 billion equity stake in March 2026. Together, Meta and Microsoft represent 94 percent of disclosed backlog, with the remaining $4 billion in contracted revenue undisclosed by customer. Nebius provided no disclosure regarding allocations to the NSF NAIRR Pilot, NSF ACCESS research computing federation, NSF CloudBank 2.0, DOE Genesis Mission, university research programs, or any non-commercial access channel.

The company reported record Q1 2026 results on May 13, with revenue of $399 million up 684 percent year-over-year, adjusted EBITDA swinging from a $53.7 million loss to a $129.5 million profit, and annualized run-rate revenue reaching $1.92 billion as of March 31. The company simultaneously raised full-year 2026 capital expenditure guidance to $20 to $25 billion from a prior range of $16 to $20 billion and announced it had secured up to 1.2 gigawatts of power and land for a second owned United States AI factory in Pennsylvania, with delivery starting 2027. A 1.2 gigawatt facility in Independence, Missouri, broke ground on May 12.

For research computing directors evaluating commercial cloud alternatives to on-premises supercomputing systems, the Nebius capacity allocation pattern is a procurement signal. The largest disclosed neocloud backlog is sold out to two hyperscaler customers before academic computational scientists can access it. That is the data point this article documents.

What Nebius is and where it sits

Nebius Group N.V. is a Dutch public limited liability company with corporate seat in Amsterdam, registered in the Dutch trade register under number 27265167, and listed on Nasdaq Global Select Market under ticker NBIS. It is the post-divestiture successor to Yandex N.V., retaining approximately 1,000 former Yandex employees, with research and development operations across Europe, North America, and Israel. Founder Arkady Volozh remains chief executive officer. The company relisted on Nasdaq in October 2024 following the Yandex corporate restructuring.

Nebius pivoted fully to AI infrastructure in August 2024 following the Yandex spin. Revenue has scaled from $11.4 million in Q1 2024 to $399 million in Q1 2026, a 35-times increase in eight quarters. AI cloud business segment revenue grew 841 percent year-over-year in Q1 2026. Adjusted EBITDA margin in the AI cloud segment expanded to 45 percent in Q1 2026 from 24 percent in Q4 2025, indicating strong unit economics at current pricing. Chief Financial Officer Dado Alonso stated on the May 13 earnings call that more than 90 percent of the prior $16 to $20 billion CapEx range was already covered by existing liquidity and contractual commitments. The incremental $4 to $5 billion will be financed through asset-backed debt secured against the Microsoft and Meta contracts, corporate bonds, and customer prepayments. Chief Product and Infrastructure Officer Andrey Korolenko characterized the CapEx raise as reflecting 2027 demand visibility rather than cost inflation.
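As a sanity check on the figures above, the eight-quarter revenue multiple and the revenue level implied by the 684 percent year-over-year growth rate can be reproduced with a few lines of arithmetic (an illustrative sketch using the article's rounded disclosures, not Nebius's reporting methodology):

```python
# Quarterly revenue figures disclosed by Nebius (USD millions).
q1_2024_revenue = 11.4
q1_2026_revenue = 399.0

# Revenue multiple over eight quarters.
multiple = q1_2026_revenue / q1_2024_revenue
print(f"Revenue multiple: {multiple:.0f}x")  # 35x, matching the article

# Q1 2025 revenue implied by the disclosed 684% year-over-year growth.
# The growth rate is rounded, so this back-calculation is approximate.
q1_2025_implied = q1_2026_revenue / (1 + 6.84)
print(f"Implied Q1 2025 revenue: ${q1_2025_implied:.1f}M")  # ≈ $50.9M
```

The 35x multiple and the roughly $51 million implied base quarter line up with the article's "35-times increase in eight quarters" characterization.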

Nebius operates in the neocloud tier between specialized GPU providers and hyperscaler-adjacent infrastructure. CoreWeave retains the premium position with $5.13 billion in 2025 revenue and 2026 guidance above $12 billion per its SEC filings, and is the sole member of SemiAnalysis's ClusterMAX Platinum tier. Lambda Labs targets researchers with 1-Click Clusters and a developer-focused user experience. Crusoe Energy differentiates via flare-gas powered sustainability claims. Nebius's $50 billion contracted backlog and 3.5 gigawatts of contracted power position it as a meaningful participant in the broader AI infrastructure buildout. Hyperscalers Microsoft, Alphabet, Amazon, Meta, and Oracle collectively plan $660 to $690 billion in 2026 capital expenditure, with approximately 75 percent, or roughly $450 billion, AI-infrastructure specific, per Futurum Group projections. Nebius's $20 to $25 billion CapEx sits at 3 to 4 percent of total hyperscaler AI spending.

What is contracted versus what is operational

Contracted capacity and operational capacity are not the same. As of Q1 2026, Nebius has connected power of approximately 170 megawatts across five facility locations. Contracted power exceeds 3.5 gigawatts, surpassing the prior 2026 year-end target of 3 gigawatts. The year-end contracted power target was raised to more than 4 gigawatts. Connected power is expected to reach 800 megawatts to 1 gigawatt by year-end 2026, with owned capacity representing more than 75 percent of the total.
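The gap between contracted and connected power is easiest to see as ratios. A short sketch using only the megawatt figures above (rounded disclosures, so the percentages are approximate):

```python
# Connected vs contracted power, per Nebius Q1 2026 disclosures (megawatts).
connected_mw = 170
contracted_mw = 3_500

share_live = connected_mw / contracted_mw
print(f"Share of contracted power currently connected: {share_live:.1%}")  # ≈ 4.9%

# Year-end 2026 targets: 800 MW–1 GW connected against the raised
# 4 GW contracted-power target.
target_low_mw, target_high_mw = 800, 1_000
target_contracted_mw = 4_000
print(f"Year-end connected share: {target_low_mw / target_contracted_mw:.0%}"
      f"–{target_high_mw / target_contracted_mw:.0%}")  # 20%–25%
```

In other words, under 5 percent of contracted power is connected today, and even the year-end targets leave three quarters or more of contracted capacity still to be delivered.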

Operational facilities include the United Kingdom Surrey Ark Longcross site, where 4,000 NVIDIA Blackwell Ultra GPUs were deployed in Q4 2025 at 16 megawatts active capacity expanding to 54 megawatts. Ark Data Centres, the facility operator, stated in its announcement that the site serves NHS England, UK startups, and unnamed research institutes. Neither Nebius nor Ark has identified which research institutes, what allocation scale they receive, or under what access terms. A 300-megawatt facility in Vineland, New Jersey, built in partnership with DataOne, began first-phase operations in mid-2025 and is delivering initial capacity to Microsoft as part of the $19.4 billion contract for access to more than 100,000 NVIDIA GB300 chips. Seven more capacity tranches are scheduled for delivery through the end of 2026 per management commentary. A 5-megawatt Kansas City facility with Patmos is live and expandable to 40 megawatts. A 10-megawatt Keflavik, Iceland cluster is operational. A 310-megawatt site in Finland is characterized by Nebius as one of Europe's largest dedicated AI factories. Israel and France colocation footprints also exist.

The Meta $27 billion commitment is structured as $12 billion in dedicated capacity for one of the first large-scale NVIDIA Vera Rubin platform deployments, with delivery commencing early 2027, plus up to $15 billion in additional compute capacity that Nebius can sell to Meta or to its own AI cloud customers. The Vera Rubin capacity is a forward commitment, not operational infrastructure. NVIDIA announced Vera Rubin entered full production in Q1 2026, nearly two quarters ahead of schedule, which mitigates timeline risk but does not eliminate large-scale deployment risk specific to a 100,000-plus GPU rollout to a single customer. Management commentary on the May 13 call described all current-generation Blackwell and H200 capacity as sold out, pricing as strengthening, and pipeline as up 3.5 times quarter-over-quarter.

The new United States owned facilities announced May 12 to 13 are forward capacity. The Independence, Missouri site spans approximately 400 acres at 1.2 gigawatt scale and is expected to create 1,200 construction jobs and 130 permanent positions. The Pennsylvania site secures up to 1.2 gigawatts of power with delivery starting 2027. A Birmingham, Alabama AI factory was announced in February 2026. All three sites are positioned as commercial infrastructure with no disclosed public science capacity. Nebius stated capacity plans but provided no independent performance benchmarks, no MLPerf results, no customer-published training benchmarks, and no third-party validated power usage effectiveness figures for operational facilities.

The public science access landscape and where Nebius is absent

Four established channels exist for academic and research AI compute access in the United States. The NSF NAIRR Pilot, led by the National Science Foundation in partnership with 13 federal agencies and 28 nongovernmental partners, has supported approximately 600 research projects and 6,000 students across all 50 states since 2024. Industry partners contributing compute resources include Microsoft, which initially contributed $20 million in Azure credits, NVIDIA, which initially contributed $30 million in technology contributions including DGX Cloud, plus AWS, Google, IBM, Intel, HPE, OpenAI, Anthropic, SambaNova, Groq, Hugging Face, Vocareum, and AI2. Typical NVIDIA DGX Cloud NAIRR allocations are 32-node clusters with 256 NVIDIA A100 GPUs for three-month durations, as documented at Stanford Das Lab for RNA folding research and Harvard Medical School for 1.7 million protein-protein interaction predictions.

NSF ACCESS, the post-XSEDE program, federates academic supercomputing centers including NCSA, PSC, SDSC, TACC, and peers. Commercial neoclouds are not ACCESS resource providers. NSF CloudBank 2.0, the NSF commercial cloud channel funded for approximately 500 research projects annually over five years, partners with AWS, Google Cloud, IBM Cloud, Microsoft Azure, and NVIDIA DGX Cloud. The program is led by the San Diego Supercomputer Center and Information Technology Services Division at UC San Diego, in partnership with UC Berkeley's College of Computing, Data Science, and Society and the University of Washington's eScience Institute.

DOE Genesis Mission, announced in late 2025 to advance discovery science and national security AI workloads, includes CoreWeave as a participant, which joined in December 2025. CoreWeave also hosts the Chan Zuckerberg Initiative's 1,024 NVIDIA H100 DGX SuperPod biomedical research cluster, which awards minimum allocations of 96 GPUs to academic biomedical researchers through a competitive request-for-applications process. Lambda Labs operates a Research Grant program offering up to $5,000 in compute credits per academic researcher. Crusoe and Nebius are absent from all four channels.

The structural pattern is that hyperscalers and AI model companies participate in NAIRR and CloudBank 2.0 at material program scale. The neocloud category as a whole is structurally thin in public research access. CoreWeave makes the strongest neocloud gesture via Genesis Mission and the CZI cluster hosting. Lambda's grant program operates at marketing scale. Nebius makes no gesture and is the largest neocloud by disclosed contracted backlog.

Typical NAIRR allocations of 256 A100 GPUs for three months are material for individual research projects but a rounding error against Nebius's 4,000-GPU UK deployment or the Meta $12 billion forward commitment. The structural contrast is not that Nebius excludes research while peers serve research at parity scale with their commercial operations. The contrast is that public research compute is structurally thin across the entire AI factory buildout, and Nebius is not making the gesture that peers are making.
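The scale gap can be made concrete in raw device-time. The sketch below compares a typical NAIRR award with one year of the UK deployment in GPU-days, deliberately ignoring per-GPU performance differences between A100 and Blackwell Ultra silicon, which would widen the gap further:

```python
# Raw device-time comparison in GPU-days, using figures cited in this article.
# Per-GPU performance differences (Blackwell Ultra vs A100) are ignored here.
nairr_gpus, nairr_days = 256, 90      # typical NAIRR DGX Cloud award, 3 months
uk_gpus, year_days = 4_000, 365       # Nebius UK Ark Longcross deployment

nairr_gpu_days = nairr_gpus * nairr_days        # 23,040 GPU-days
uk_gpu_days_per_year = uk_gpus * year_days      # 1,460,000 GPU-days

ratio = uk_gpu_days_per_year / nairr_gpu_days
print(f"One year of the UK deployment ≈ {ratio:.0f} NAIRR awards")  # ≈ 63
```

One mid-sized Nebius site, a fraction of the Meta commitment, equates to dozens of NAIRR-scale research awards per year even before accounting for the newer silicon.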

Strategic logic, sovereignty, and supply chain consequences

Customer concentration is an execution risk. Meta and Microsoft account for approximately $46.4 billion of approximately $50 billion in disclosed backlog. Nebius provided no customer names beyond the two hyperscalers and no breakdown of the remaining $4 billion. For a company guiding $3.0 to $3.4 billion in 2026 revenue and a $7 to $9 billion annualized run rate by year-end, customer diversification beyond two accounts is material. The contracted versus speculative capacity split within the raised $20 to $25 billion CapEx is partially disclosed. The CFO stated more than 90 percent of the prior $16 to $20 billion range was covered by existing liquidity and contractual commitments. The incremental $4 to $5 billion is to be financed against the Microsoft and Meta contracts. The implication is that roughly 90 percent of the lower bound is pre-sold, but the marginal capacity decisions for late 2026 and 2027 are commercial-only and positioned for additional hyperscaler or large enterprise customers, not research access.

The corporate domicile is material context. NAIRR, ACCESS, and CloudBank 2.0 industry partnerships have been structured around US-incorporated providers. Nebius Group N.V. is a Dutch public limited liability company incorporated under Dutch law. The Pennsylvania, Missouri, and Birmingham facilities are US-owned gigawatt-scale infrastructure sitting in US jurisdictions and eligible in principle for NSF and DOE partnerships. The corporate structure removes one potential jurisdictional barrier. The absence of public science allocations after the US facility announcements is a willingness signal, not a structural barrier.

The sovereignty dimension makes the pattern relevant to UK, EU, and Israeli research computing leaders evaluating whether Amsterdam-domiciled neocloud capacity can serve as a sovereign alternative to US hyperscaler dependency. Nebius's operational European footprint in the UK, Finland, and other jurisdictions could in principle serve as accessible capacity for European research programs. The UK Ark Longcross facility's unnamed research institute users are the only disclosed non-commercial allocation in the entire Nebius portfolio. The scale, terms, and institutional identities of those allocations are undisclosed.

Process node, fabrication partners, and silicon supply chain details remain undisclosed. Nebius identified GPU models including Blackwell, Blackwell Ultra, H200, GB300, and Vera Rubin but not TSMC process nodes, packaging technology, or supply chain risk mitigation strategy. CoWoS allocation constraints were a 2025 industry-wide bottleneck. For a company committing to deliver 100,000-plus GB300 chips to Microsoft and a Vera Rubin deployment to Meta starting early 2027, silicon supply risk is material and was not addressed in May 13 materials.

Bottom line

Nebius's $50 billion contracted backlog with zero disclosed public science allocations is not an accident of market segmentation. It is a data point in a structural pattern. AI factory capacity is being contracted to commercial customers before academic computational scientists can access it. Among major neoclouds, CoreWeave participates in DOE Genesis Mission and hosts the CZI biomedical research cluster. Lambda offers a grant program. Nebius, with the largest disclosed neocloud backlog, offers nothing. The Pennsylvania, Missouri, and Birmingham gigawatt-scale US facilities remove the jurisdictional barrier to NSF and DOE partnerships. The absence of such partnerships after those announcements is a choice, not a constraint.

For research computing directors, the Nebius pattern clarifies the procurement landscape. Neocloud capacity at gigawatt scale is being built for hyperscaler customers, not for NSF CloudBank allocations or competitive research access. The $50 billion sell-out to two customers with no research allocations is the empirical case.

AI Infrastructure · Data Center Infrastructure · Research Computing · Hyperscaler Strategy · Genesis Mission
AI disclosure
AI-assisted research and first draft. This article has been verified by a human editor.
Related reading
AI · Analysis: Apple's Mac Shortage Signals Memory Supply Chain Has Reorganized Around Data Center AI
AI · News: MRC Gives Open Ethernet Its First 75,000-GPU Production Proof Point
Emerging · Analysis: Orbital Compute in 2026: What Has Flown, What Is Slideware, and What the Physics Allows