Hardware has reached orbit and SpaceX has filed for a million-satellite constellation. Thermal physics, launch cadence, and bandwidth still push gigawatt orbital AI to post-2030, at best.

When Anthropic and SpaceX announced on May 6 that Anthropic would consume the entire 300 MW, 220,000-GPU output of the Memphis Colossus 1 site, one sentence in the release pointed somewhere else entirely. Buried beneath the terrestrial capacity headline: "As part of this agreement, we have also expressed interest in partnering with SpaceX to develop multiple gigawatts of orbital AI compute capacity."
That clause is not a procurement event. But it is one of the clearest customer-side signals yet that a frontier model developer is treating orbital compute as a possible future supply lever rather than science fiction. Google had already gone there with Project Suncatcher, publishing radiation-test results on Trillium TPU v6e and announcing a planned early-2027 prototype launch with Planet. The Anthropic line lands on top of six months of FCC filings, prototype launches, and capital commitments that merit scrutiny on their own. The case for orbital data centers now rests on hardware, money, and physics. So does the case against. Both deserve to be read as engineering positions.
The question is not whether orbital compute is real. Lonestar Data Holdings flew a physical data center payload to cislunar space in February 2025 and reports it met its technical and commercial milestones. Starcloud put an NVIDIA H100 in low Earth orbit in November 2025 and trained a small language model on it the following month. Those are not simulations. They are company-reported demonstrations on physical hardware in space.
The harder question is whether anything that has flown can scale to the gigawatt-class systems that Elon Musk, Sundar Pichai, and Starcloud's Philip Johnston now describe in public, and whether the timelines they offer survive contact with the Stefan-Boltzmann law and the Falcon 9 manifest. The orbital pitch lives or dies in comparison to a terrestrial environment in which the data center supercycle is now a 100-gigawatt-class problem rewriting energy policy. That comparison is what the physics and economics actually have to address.
Anthropic's deal is the latest move in a pattern that has been building since late 2025. The company has spent the past six months pre-purchasing terrestrial compute on a scale that supercomputing professionals at national labs are not in a position to match. In the same week as the SpaceX announcement, Anthropic's existing 3.5 GW Google TPU commitment through 2031, itself roughly twenty to thirty times the power envelope of the Department of Energy's largest planned science supercomputer, established the ceiling for what commercial AI is willing to spend on infrastructure that scientific computing will eventually need to share or compete for. The orbital line in the SpaceX release belongs in that context. It is one more data point in the commercial pre-purchasing of capacity at a scale that makes terrestrial siting an open question.
SpaceX filed plans with the FCC on January 30 for an Orbital Data Center System sized at up to one million satellites, with no published deployment schedule and a milestone-waiver request attached. The FCC has accepted the application for filing; it has not approved it. Amazon filed objections, and the docket has accumulated nearly 1,500 public comments. Musk has separately floated vertically integrated chip production for future orbital compute, including a $20 billion-class Tesla/SpaceX semiconductor facility branded Terafab in Texas, but the public record there is still closer to executive claim and media reporting than to a fully disclosed factory, chip roadmap, or flight-qualified processor program.
Starcloud, formerly Lumen Orbit, NVIDIA Inception–backed, with capital from In-Q-Tel, A16z and Sequoia scout funds, and NFX, closed a $170 million round in March 2026 and reportedly filed for an 88,000-satellite constellation in February. Its second flight, planned to carry Blackwell-class GPUs, is targeted for October. Lonestar is taking capacity reservations for its first StarVault launch, marketed as the world's first commercial space-based data storage service, the same month. Google's Project Suncatcher will fly two TPU prototype satellites with Planet in early 2027. Aetherflux, founded by Robinhood co-founder Baiju Bhatt and reportedly raising a Series B at a $2 billion valuation led by Index Ventures, has expanded from space-based power-beaming into orbital compute. Its data-center node, called Galactic Brain, builds on the company's earlier laser-power-transmission work rather than replacing it, and targets a first commercial node in Q1 2027.
That is the field as of May 2026: at least two physical orbital/cislunar compute or storage demonstrations on the books, two near-term prototype launches, one EU sovereignty consortium, one Japanese telecom-led JV that has been planning since 2021, and a SpaceX filing whose scale exceeds the global active satellite population by orders of magnitude.
| Program | Status | Architecture | Milestone |
|---|---|---|---|
| Lonestar StarVault | Demonstrated | Cislunar data storage | Flew Feb 2025; reservations open for first commercial StarVault launch |
| Starcloud SC-1 | Demonstrated | NVIDIA H100 in LEO | Flew Nov 2025; trained a small LLM in orbit; Blackwell-class second flight targeted Oct 2026 |
| Google Suncatcher | Prototype announced | Trillium TPU v6e on shared bus with Planet | Early 2027 prototype launch (two TPU satellites) |
| Aetherflux Galactic Brain | Prototype announced | LEO compute + laser power beaming | Q1 2027 first commercial node |
| SpaceX ODCS | FCC-filed | LEO constellation, up to 1M satellites | Filed Jan 2026; accepted for filing; ~1,500 public comments; Amazon petition to deny |
| ASCEND (EU) | Feasibility study | Sovereign EU orbital data center | Pre-2030 study; Thales Alenia–led with Airbus, HPE, Orange, DLR, ArianeGroup, CloudFerro |
| Space Compass (NTT × SKY Perfect JSAT) | Long-term planning | Space Integrated Computing Network with photonics-electronics convergence | In motion since 2021; original 2025 commercial target slipped |
The case for orbital compute begins with power. A satellite in a dawn-dusk sun-synchronous orbit at 500–650 km sees nearly continuous sunlight, against a 20–40 percent capacity factor for a comparable terrestrial solar installation. Google's own Suncatcher analysis estimates solar panels deliver up to eight times the productivity in orbit that they do on the ground. There is no weather, no night cycle, no grid interconnect, no permitting fight. Power delivery is the constraint that has driven every gigawatt-scale AI factory siting decision of the past two years, including the cases where national grids have begun to refuse new interconnects outright; for an architecture bound by that constraint, near-continuous sunlight is a real advantage rather than a marketing claim.
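That multiplier is reproducible with round numbers. A rough sanity check, using an above-atmosphere irradiance of about 1,361 W/m², a 1,000 W/m² terrestrial rating condition, and the 20–40 percent capacity-factor range quoted above; these are generic textbook figures, not Google's Suncatcher model, and the sketch ignores spectrum, degradation, and pointing losses:

```python
# Rough check on orbital vs. terrestrial solar productivity per square meter
# of panel. Round-number inputs; not Google's Suncatcher analysis.
AM0_IRRADIANCE = 1361.0    # W/m^2 above the atmosphere (solar constant)
GROUND_PEAK = 1000.0       # W/m^2 standard terrestrial rating condition
ORBIT_ILLUMINATION = 0.99  # dawn-dusk SSO: near-continuous sunlight

orbital_avg = AM0_IRRADIANCE * ORBIT_ILLUMINATION  # time-averaged W/m^2 in orbit
for capacity_factor in (0.40, 0.25, 0.17):
    ground_avg = GROUND_PEAK * capacity_factor      # time-averaged W/m^2 on the ground
    print(f"terrestrial CF {capacity_factor:.2f}: orbit ~{orbital_avg / ground_avg:.1f}x more productive per m^2")
```

On these assumptions, the "up to eight times" figure appears only at the weak end of terrestrial capacity factors; against a good desert solar site, the ratio is closer to three to four.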
The case against begins with the second law of thermodynamics in vacuum. There is no atmospheric convection, and no useful conduction path to the environment. Nearly every joule delivered to the compute stack eventually becomes heat that must leave through radiation, governed by Stefan-Boltzmann: Q = εσA(T⁴ − T_env⁴). The International Space Station's external Active Thermal Control System is a useful reality check: roughly 70 kW of heat rejection over about 422 m² of radiator area, or about 166 W/m² in practice, well below the theoretical maximum once solar loading and system inefficiencies are accounted for. Scaled naïvely from that empirical figure, a 350 W H100 PCIe card needs about 2.1 m² of radiator area. A full DGX H100-class node at NVIDIA's 10.2 kW maximum input power needs about 61 m² before accounting for spacecraft structure, power conversion, coolant loops, radiator orientation, solar loading, or redundancy. Even counting only the eight 350 W PCIe cards in the chassis gives roughly 17 m² before the surrounding system mass enters the calculation.
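The arithmetic behind those numbers is short enough to show. A minimal sketch using the ISS-derived ~166 W/m² as the practical rejection flux and the power figures already quoted in this paragraph; nothing here reflects a flown radiator design:

```python
# Radiator area implied by a heat load, at the ISS Active Thermal Control
# System's empirical rejection flux of roughly 166 W per square meter.
ISS_PRACTICAL_FLUX = 70_000 / 422   # W/m^2, ~166 in practice

def radiator_area_m2(heat_load_w: float, flux_w_per_m2: float = ISS_PRACTICAL_FLUX) -> float:
    """Area needed if essentially all input power must leave as radiated heat."""
    return heat_load_w / flux_w_per_m2

print(f"H100 PCIe card, 350 W:      {radiator_area_m2(350):5.1f} m^2")
print(f"Eight 350 W cards, 2.8 kW:  {radiator_area_m2(8 * 350):5.1f} m^2")
print(f"DGX-class node, 10.2 kW:    {radiator_area_m2(10_200):5.1f} m^2")
```

Those are the 2.1, 17, and 61 m² figures above, before structure, power conversion, coolant loops, or redundancy enter the mass budget.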
How that scales is contested, and the disagreement is the interesting part. The arXiv paper most often cited as the first-principles reference (Turyshev, April 2026) concludes that "thermal closure is a radiative-area problem because vacuum provides no convective heat sink… MW-class orbital compute tends to reside in the tens-of-kg/kW regime even under optimistic assumptions." His 1 MW base case requires on the order of 2,500 m² of radiator area before the rest of the spacecraft mass budget is allocated. Better radiator materials, higher coolant temperatures, two-sided deployables, heat pipes, pumped loops, and careful orbital orientation can improve those numbers substantially: a March 2026 Mach33 Space Intelligence analysis argues this is "not a fundamental physics barrier at the 100 kW class," noting that the T⁴ relationship makes operating temperature a powerful lever (doubling absolute temperature reduces required area by 16x), and a two-sided deployable radiator facing both zenith and nadir effectively doubles area for the same mass. But those levers run into electronics reliability, packaging, pump efficiency, fluid selection, and hard upper bounds on what conventional GPU coolant loops can tolerate. Some secondary analyses have circulated much smaller radiator-area estimates for gigawatt-class systems, but those numbers either assume radiator temperatures well above conventional electronics-cooling regimes or conflate megawatt and gigawatt cases. Scaling the Turyshev assumptions to 1 GW points toward millions of square meters of radiator area, not thousands.
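The T⁴ lever and the gigawatt problem can be seen side by side in the ideal limit. A sketch of the Stefan-Boltzmann area requirement at two radiator temperatures, with an assumed emissivity of 0.9 and the cold-sky background term neglected; these are illustrative inputs, not Turyshev's or Mach33's models:

```python
# Ideal single-sided radiator area for a fixed heat load, per
# Q = eps * sigma * A * T^4 (background and solar loading neglected).
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W / (m^2 K^4)
EPSILON = 0.9      # assumed emissivity

def ideal_area_m2(heat_load_w: float, radiator_temp_k: float) -> float:
    return heat_load_w / (EPSILON * SIGMA * radiator_temp_k ** 4)

for heat_load_w, label in ((1e6, "1 MW"), (1e9, "1 GW")):
    for temp_k in (300.0, 600.0):
        print(f"{label} at {temp_k:.0f} K: {ideal_area_m2(heat_load_w, temp_k):>12,.0f} m^2")
```

The 300 K megawatt row lands near Turyshev's ~2,500 m², the 300 K gigawatt row lands in the millions of square meters, and the 600 K rows show the 16x lever at a radiator temperature of roughly 327 °C, well beyond what a conventional GPU coolant loop will tolerate.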
Read together, the picture is coherent. Thermal is an engineering problem at 100 kW. It is a system-architecture problem at the megawatt class. At gigawatt scale it becomes a materials and manufacturing problem that has not been solved. Liquid droplet radiators, which shed heat through streams of microscopic droplets radiating in transit, are the most credible step-change technology under discussion, but they have no flight heritage in this application. None of this is theoretically blocked. None of it is engineered today.
Radiation tolerance is closer to a solved problem than the headlines suggest. Starcloud-1 has now operated a commercial H100 in orbit for months. Google has tested Trillium TPU v6e against the ~750 rad mission budget for a five-year sun-synchronous orbit, with HBM memory showing irregularities at roughly 2 krad and other components testing higher. The margin is adequate overall, with caveats by component. The harder question is failure rate at multi-GPU coherence: independent space engineer Milo Knowles estimates GPU failure at roughly 19 percent over three years in orbit, which compounds to an ~81 percent probability that at least one GPU in an eight-GPU training node will fail before the node's nominal lifetime. Inference on a single accelerator is robust to that arithmetic. Distributed training across thousands of GPUs is not.
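The compounding is plain binomial arithmetic. A sketch using Knowles' 19 percent as the per-GPU input; the only added assumption is that failures are independent:

```python
# Probability that at least one GPU in a node fails over the mission,
# assuming independent failures at a fixed per-GPU probability.
def prob_any_failure(per_gpu_failure_prob: float, gpus_per_node: int) -> float:
    return 1.0 - (1.0 - per_gpu_failure_prob) ** gpus_per_node

P_GPU = 0.19  # Knowles' estimate over ~3 years in orbit

print(f"single GPU (inference):  {prob_any_failure(P_GPU, 1):.0%}")
print(f"8-GPU training node:     {prob_any_failure(P_GPU, 8):.0%}")
print(f"64-GPU training pod:     {prob_any_failure(P_GPU, 64):.4%}")
```

That is the arithmetic behind the last two sentences above: a single accelerator inherits only the 19 percent, while anything that needs every GPU alive at once inherits the compounded number.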
Latency, which has dominated the public-facing skepticism, is largely a misread. Starlink-class LEO round-trip times measure 25–45 ms today, with Starlink reporting median peak-hour US latency around 33 ms and a goal of 20 ms. That is competitive with intercontinental fiber on specific routes; Hibernia Express advertises a 58.95 ms New York–London round trip. The 700 ms figure that surfaces in critical commentary is geostationary orbit, not LEO. The real bandwidth constraint is inter-satellite optical links, which Google notes are typically in the 1–100 Gbps range commercially, against the hundreds of Gbps to multiple Tbps of fabric inside a modern AI training cluster. Starcloud's CEO is direct about the implication: "Training is not the ideal thing to do in space. I think almost all inference workloads will be done in space."
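The geometry alone settles the GEO-versus-LEO confusion. A propagation-only sketch, counting the four legs of a bent-pipe hop (user to satellite to gateway and back) at nadir and ignoring processing, queuing, and slant range; the altitudes are illustrative:

```python
# Round-trip propagation delay for a bent-pipe satellite hop:
# user -> satellite -> gateway -> satellite -> user, four legs at nadir.
C_KM_PER_S = 299_792.458

def bent_pipe_rtt_ms(altitude_km: float) -> float:
    return 4 * altitude_km / C_KM_PER_S * 1000

print(f"LEO at 550 km:      {bent_pipe_rtt_ms(550):6.1f} ms propagation")
print(f"GEO at 35,786 km:   {bent_pipe_rtt_ms(35_786):6.1f} ms propagation")
```

Slant geometry, processing, and terrestrial routing push the measured figures from there to Starlink's 25–45 ms and the 500–700 ms commonly reported for geostationary links; the 700 ms number simply belongs to the wrong orbit.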
The economics are not close today. A back-of-envelope calculation by Andrew McCalip puts a 1 GW orbital data center at approximately $42.4 billion, roughly three times terrestrial. The dominant variable is launch cost. Falcon 9's historically advertised LEO cost is about $2,720/kg, and the gigawatt-class case requires that figure to drop by an order of magnitude. Google's Suncatcher preprint, the most rigorous published timeline, argues that if Starship-class launch prices fall to roughly $200/kg by the mid-2030s (under aggressive learning-rate assumptions and a modeled cadence near 180 Starship launches per year), launch amortization could become comparable to terrestrial energy costs for some architectures. That is a launch-cost crossover, not a proof that full orbital AI factories beat terrestrial data centers on all-in capex/opex accounting.
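A minimal amortization sketch shows why the order-of-magnitude launch-cost drop is the hinge. It assumes a system-level specific mass in the tens of kilograms per kilowatt (the regime Turyshev's analysis lands in) and compares launch cost per delivered watt at the advertised Falcon 9 price against the ~$200/kg Suncatcher crossover figure; the inputs are illustrative, not McCalip's or Google's models:

```python
# Launch cost per watt of IT load, given a system-level specific mass
# (kg of spacecraft per kW delivered) and a launch price per kg to LEO.
def launch_cost_per_watt(kg_per_kw: float, usd_per_kg: float) -> float:
    return kg_per_kw * usd_per_kg / 1000.0   # $/W

for kg_per_kw in (10, 30):                 # "tens of kg/kW" regime
    for usd_per_kg in (2720, 200):         # Falcon 9 advertised vs. Suncatcher crossover
        per_watt = launch_cost_per_watt(kg_per_kw, usd_per_kg)
        per_gw_usd = per_watt * 1e9        # dollars to launch 1 GW of capacity
        print(f"{kg_per_kw:2d} kg/kW at ${usd_per_kg:>5}/kg: "
              f"${per_watt:5.2f}/W launched, ~${per_gw_usd / 1e9:5.1f}B per GW for launch alone")
```

On these assumptions, launch alone at today's prices ranges from a large fraction of McCalip's $42.4 billion to roughly double it, and falls to single-digit billions at the crossover price, which is the concrete sense in which launch cost dominates the estimate.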
That is why launch cadence is the load-bearing variable in every serious orbital-compute thesis. Musk's public claim that "by far the cheapest place to put AI will be space in 36 months or less" requires Starship to fly at high cadence, Terafab to enter production, and deployable two-sided radiator architectures to mature, all simultaneously, on a three-year horizon. None of those programs is currently tracking against that schedule. Independent analyst Anna Jacobi places the realistic crossover at "somewhere around 2035, contingent on launch economics improving by an order of magnitude," noting bluntly that "the terrestrial power crisis is a 2026 problem." Prediction-market sentiment is currently skeptical of rapid Starship cadence, but that is sentiment, not evidence; the harder constraint is the engineering one, and it is not improving on a three-year clock.
A regulatory wildcard sits underneath the economics. State-level pauses on large AI data center developments have already begun to surface in the US, and the political appetite for federal-level moratoria is rising as grid impact becomes a constituent issue. If terrestrial gigawatt-class construction is restricted at the state or federal level, the relative attractiveness of orbital compute changes regardless of its raw cost curve. That is not a forecast. It is a reminder that the comparison is not stationary.
The story that most coverage of SpaceX's million-satellite filing has missed is vertical integration. SpaceX runs the launch manifest that any orbital deployment must use, has applied for orbital shells (500–2000 km, both sun-synchronous and 30° inclinations) at unprecedented scale, and is publicly developing its own captive chip supply through Terafab. No competitor today combines comparable launch cadence, constellation operations experience, and captive AI demand under one corporate roof. The closest parallel on launch cadence is no one. The closest parallel on rad-hard AI silicon at commercial volume is no one.
The European answer is ASCEND, an 11-partner consortium coordinated by Thales Alenia Space and including Airbus, HPE, Orange, DLR, ArianeGroup, and CloudFerro. The published feasibility study examines the environmental case for orbital data centers and frames the program around European digital sovereignty, with public materials describing a long-term path toward gigawatt-class capacity before 2050, not a near-term commercial deployment. The Japanese answer is NTT and SKY Perfect JSAT's Space Compass joint venture, planning a Space Integrated Computing Network with photonics-electronics convergence for radiation tolerance. It has been in motion since 2021 and has slipped past its original 2025 commercial target.
The US export-control regime that already governs terrestrial AI chips will extend to orbital systems whose physical location does not change their logical jurisdiction. The FCC docket on the SpaceX filing is the first place that conversation has been forced into the open, and the nearly 1,500-comment volume is a leading indicator of how contested the orbital shells will become before they are populated.
The use cases that work today are narrow and high-margin. Edge inference on satellite-borne sensor data (synthetic-aperture radar imagery, wildfire detection, maritime tracking) cuts a large fraction of the downlink bandwidth that would otherwise carry raw frames to the ground, with operators reporting reductions in the high-eighties to mid-nineties percent depending on workload. That is the operational model behind Starcloud's contract with Capella Space and the rationale for several of the smaller filings. Sovereign data storage and disaster recovery is Lonestar's actual book of business; customers include the State of Florida, the Isle of Man government, the AI firm Valkyrie, and Imagine Dragons, which is a sentence that previously did not appear in any credible compute-economics document. Latency arbitrage on intercontinental routing is a real if specialized advantage where the LEO path beats fiber by tens of milliseconds.
What does not work today, and is not architecturally close to working, is large-scale AI training that requires tens of thousands of GPUs coherent at datacenter-class fabric bandwidth. It is not blocked by orbital launch economics. It is blocked by inter-satellite bandwidth and by the failure-rate compounding that leaves multi-GPU node MTBF a manageable number on Earth and an unworkable one in orbit. Classical HPC workloads dominated by tight MPI coupling fall in the same category for the same reasons.
For supercomputing centers (national labs, university research computing, the institutional buyers who decide procurement on five-year horizons), orbital compute is not a near-term competitor. It is also not a near-term complement. The center of gravity for that audience remains terrestrial AI factory infrastructure of the kind ORNL is now organizing institutional expertise around. The relevant orbital question is second-order: whether the orbital buildout, if it accelerates, drains GPU and rad-hard chip supply that scientific computing will need, raises launch costs for science satellites by absorbing Falcon 9 and Starship manifest, or attracts regulatory attention that reshapes terrestrial AI siting rules. Those second-order effects are real, and they are 2027–2030 questions. The first-order question (will my next allocation decision be affected by orbital capacity?) is not.
The Anthropic line in the May 6 release should be read in that light. It is interesting because it signals that a frontier lab has begun to count orbital compute as a possible supply lever, not because it is an actual procurement. The fair read of the field as of May 2026 is that orbital compute is past proof-of-concept, generating real revenue in narrow applications, and well short of the production scale that would justify any of the headlines now attached to it. Thermal physics is tractable at 100 kW and gets ugly at the megawatt class. At gigawatt scale, the physics, the economics, and the launch manifest collectively do not yet support what is being claimed, not in 36 months, and on the most rigorous numbers, not in this decade.
That gap is the story to watch. Whether Anthropic is going to space is not.