Supercomputing News

High-Performance Computing · Analysis

Japan's Next Flagship Machine Abandons the Top500 Chase

FugakuNEXT pairs Fujitsu MONAKA-X CPUs with NVIDIA GPUs, ending Japan's all-Arm sovereign architecture and betting on throughput over benchmarks.

A next-generation supercomputing facility interior representing the hybrid CPU-GPU architecture at the core of Japan's FugakuNEXT program, pairing Fujitsu MONAKA-X processors with NVIDIA GPU accelerators. (Image: AI-generated / SCN)
SCN Staff, Staff Editor · Published Apr 27, 2026

When RIKEN, Fujitsu, and NVIDIA completed the basic design for FugakuNEXT in March 2026, they published a design that does not chase the Top500 list. The system targets more than 2.6 exaflops of FP64 aggregate peak performance and approximately 600 exaflops FP8 sparse (the "zettascale" figure appearing in marketing materials), within the same roughly 40-megawatt power envelope that Fugaku operates under today. The precision distinction matters: FugakuNEXT's FP64 figure represents approximately 6x the hardware performance of Fugaku's 442-petaflop HPL score, delivered in the same power budget. The 600-exaflop number requires FP8 sparse precision and cannot be compared directly to Fugaku's double-precision baseline. Satoshi Matsuoka, director of RIKEN's Center for Computational Science since 2018 and leader of the original Fugaku program, told Nikkei xTECH in April 2026 that achieving a world-class ranking was not the goal. FugakuNEXT would be a "used machine": one optimized for the scientists and AI workloads that will actually run on it.
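The precision comparison can be sanity-checked with two lines of arithmetic. The figures below are the published targets as reported in this article, and the FP64 gain is computed against Fugaku's HPL score rather than its theoretical peak:

```python
# Sanity-check of the precision comparison using the article's published figures.
fugaku_hpl_fp64 = 442e15    # Fugaku's HPL score: 442 PFlop/s (FP64)
next_fp64_peak = 2.6e18     # FugakuNEXT target: >2.6 EFlop/s aggregate FP64 peak
next_fp8_sparse = 600e18    # "zettascale" headline: ~600 EFlop/s at FP8 sparse

# The "approximately 6x" hardware claim:
print(f"FP64 gain over Fugaku: ~{next_fp64_peak / fugaku_hpl_fp64:.1f}x")  # ~5.9x

# The gap between FugakuNEXT's own two figures shows why the 600 EFlop/s
# number cannot stand in for a double-precision comparison:
print(f"FP8 sparse vs FP64 within FugakuNEXT: ~{next_fp8_sparse / next_fp64_peak:.0f}x")  # ~231x
```

The roughly two-orders-of-magnitude spread between the two FugakuNEXT figures is entirely a precision and sparsity artifact, which is why the article treats the 442-petaflop FP64 baseline as the only like-for-like comparison.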

The architectural choice is now public. Fujitsu MONAKA-X CPUs (an Arm-based evolution of the MONAKA processor, built on 2nm technology with 144 cores and extended SIMD functionality) will be paired with NVIDIA GPUs via NVLink Fusion. NVLink Fusion embeds NVLink chiplets into the MONAKA-X CPU die, giving the CPU NVLink ports that connect to NVSwitch switches and allowing the CPU and GPUs to share a unified memory fabric, much as NVIDIA's own Grace CPUs do. This is the first time Japan's flagship supercomputer will use GPUs as accelerators. NVIDIA will lead GPU infrastructure design; Fujitsu leads overall system and CPU design. Based on RIKEN planning documents analyzed by NextPlatform, the node architecture is estimated at approximately 2 MONAKA-X CPUs and 4 GPU accelerators per node, with more than 3,400 nodes, implying approximately 13,600 GPU sockets in total. These figures are derived from translated spec tables and remain subject to revision during the detailed design phase beginning in fiscal year 2026.
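The estimated socket counts follow directly from the per-node figures. A quick check of the NextPlatform-derived arithmetic (all inputs are estimates and subject to the detailed design phase):

```python
# Back-of-envelope check on the estimated FugakuNEXT node architecture.
# All inputs are estimates from translated RIKEN planning documents.
nodes = 3400          # >3,400 nodes (estimated)
cpus_per_node = 2     # MONAKA-X CPUs per node (estimated)
gpus_per_node = 4     # GPU accelerators per node (estimated)

print(f"GPU sockets: ~{nodes * gpus_per_node:,}")   # ~13,600, the article's figure
print(f"CPU sockets: ~{nodes * cpus_per_node:,}")   # ~6,800 MONAKA-X parts
```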

Fugaku debuted at number one on the June 2020 TOP500 list with an all-Arm architecture based on Fujitsu's A64FX processor, holding the top position for four consecutive editions through the November 2021 list. The A64FX was domestically fabricated and represented Japan's bid for an all-domestic flagship architecture with no GPU accelerators. FugakuNEXT keeps Fujitsu as CPU vendor and system integrator, and the system remains a Japanese national asset operated by RIKEN, but the GPU dependency on NVIDIA silicon marks a structural departure from the sovereignty model Fugaku represented. RIKEN's own framing is "Made with Japan" (deliberately international, not "Made in Japan"). Matsuoka called the NVIDIA partnership "a major strategic move" that will "enhance Japan's capabilities and encourage wider adoption of Japanese CPU technologies globally" in RIKEN's August 2025 announcement.

The sovereignty question is whether owning the CPU design, the system integration, and the operational footprint is sufficient, or whether GPU dependency on a U.S. vendor creates a structural vulnerability that the all-domestic A64FX template avoided. The Rapidus dimension complicates that question in ways the official announcement did not surface. Fujitsu is considering manufacturing MONAKA-X at Rapidus, Japan's domestic 2nm foundry in Chiyoda, Tokyo -- which would make both the CPU design and its fabrication domestic even as the GPU layer depends on NVIDIA silicon. Fujitsu Vice President and CTO Vivek Mahajan stated the commercial stakes plainly in Nikkei xTECH's April 2026 reporting: MONAKA-X "must be a commercial success. Otherwise, we cannot continue CPU development." FugakuNEXT is not only a science infrastructure decision -- it is the vehicle through which Fujitsu is attempting to make its next-generation CPU viable in AI data centers globally, with domestic fabrication as the economic security argument underneath.

Why FugakuNEXT's Design Choices Land Now

FugakuNEXT's basic-design completion arrives in a November 2025 TOP500 landscape where CPU+GPU hybrid architectures already dominate the top of the list. El Capitan at Lawrence Livermore, Frontier at Oak Ridge, and Aurora at Argonne (the three U.S. Department of Energy exascale systems) all chose hybrid designs. JUPITER Booster at Jülich, Europe's first exascale system, runs on NVIDIA Grace Hopper Superchips, and Switzerland's Alps is also Grace Hopper-based. Aurora, built with Intel Xeon Max CPUs paired with Intel Data Center GPU Max accelerators, is the only one of the three DOE systems that bet on neither of the dominant GPU vendors, and it had documented commissioning difficulties. FugakuNEXT aligns with the hybrid CPU+GPU pattern that every other current exascale system already chose.

The performance target RIKEN published is 100x application performance over Fugaku. That figure combines the approximately 6x FP64 hardware improvement with algorithmic and software gains from physics-informed neural networks, mixed-precision computing, and surrogate modeling. RIKEN's August 2025 announcement states the system will achieve "within the same approximately 40MW power constraint... up to a hundredfold overall increase in application performance." The 40-megawatt ceiling is the most underreported constraint in the program and the one practitioners designing next-generation procurements will find most significant. Delivering more than 2.6 exaflops FP64 in the same power budget Fugaku operates under today requires gains in every layer of the stack: silicon efficiency, memory subsystem bandwidth, interconnect topology, and software optimization.
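Combining the published numbers makes the structure of the 100x target explicit: the hardware supplies roughly 6x, so algorithms and software must supply the remaining factor, and holding the 40 MW envelope fixes the required FP64 efficiency. A rough decomposition (illustrative arithmetic on the article's figures; Fugaku's actual draw under load sits somewhat below the 40 MW ceiling, so the efficiency figures are approximate):

```python
# Decompose the 100x application-performance target and the 40 MW constraint.
app_target = 100.0
hw_gain = 2.6e18 / 442e15    # ~5.9x FP64 hardware gain over Fugaku's HPL score

# Remaining factor that mixed precision, surrogate models, and PINNs must deliver:
print(f"Implied software/algorithm gain: ~{app_target / hw_gain:.0f}x")  # ~17x

# FP64 efficiency implied by holding the ~40 MW envelope at both performance levels:
power_w = 40e6
print(f"Fugaku-era efficiency: ~{442e15 / power_w / 1e9:.1f} GFlop/s/W")
print(f"FugakuNEXT target:     ~{2.6e18 / power_w / 1e9:.1f} GFlop/s/W")
```

The roughly 17x software multiplier is the part of the target no silicon roadmap delivers, which is why the codesign proxy machines and the Benchpark methodology discussed below matter as much as the hardware specification.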

Japan's Ministry of Education, Culture, Sports, Science and Technology approved approximately 4.2 billion yen for first-year development, with an indicative total project budget of approximately 110 billion yen (roughly $740-761 million USD at 2025 exchange rates). The system targets operational readiness around 2030, with RIKEN planning a two- to three-year overlap between current Fugaku and FugakuNEXT during system bring-up and commissioning. Existing Fugaku users will not face an abrupt transition.
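The dollar range quoted above corresponds to a plausible band of 2025 yen/USD rates; the check below is illustrative arithmetic only, not a quoted exchange rate, and also shows how small the first-year tranche is relative to the indicative total:

```python
# Check the exchange-rate range implied by the article's dollar figures.
budget_yen = 110e9                 # indicative total: ~110 billion yen
usd_low, usd_high = 740e6, 761e6   # reported ~$740-761M USD range

print(f"Implied yen/USD rate: {budget_yen / usd_high:.0f}-{budget_yen / usd_low:.0f}")  # ~145-149

# First-year development tranche as a share of the indicative total:
print(f"FY2026 tranche share: {4.2e9 / budget_yen:.1%}")  # ~3.8% of the program budget
```

That small first-tranche share is why the "What to Watch" section below treats annual MEXT budget execution, rather than the headline figure, as the hard test of political commitment.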

FugakuNEXT Specifications (Estimated)

| Specification | Value | Notes |
| --- | --- | --- |
| FP64 performance | >2.6 EFlop/s | Approximately 6x Fugaku's 442 PFlop/s HPL |
| FP8 sparse performance | ~600 EFlop/s | "Zettascale" marketing figure; not comparable to FP64 baseline |
| Power envelope | ~40 MW | Same constraint as Fugaku |
| Node architecture | 2 MONAKA-X CPUs + 4 GPUs (estimated) | Based on NextPlatform analysis of RIKEN spec tables |
| Total nodes | >3,400 (estimated) | Subject to revision in detailed design phase |
| Total GPU sockets | ~13,600 (estimated) | Derived from node count estimate |
| CPU | Fujitsu MONAKA-X | 144 cores, 2nm, Arm-based with extended SIMD |
| Budget (total) | ~110 billion yen (~$740-761M USD) | Indicative total; FY2026 tranche: 4.2 billion yen |
| Target operational date | ~2030 | With 2-3 year Fugaku overlap during commissioning |

What RIKEN's Proxy Machines Test Before FugakuNEXT Arrives

Two RIKEN proxy machines became operational in spring 2026, both serving as codesign platforms for FugakuNEXT. The first system deploys 1,600 NVIDIA Blackwell GB200 NVL4 GPUs on NVIDIA Quantum-X800 InfiniBand for AI-for-science workloads. The second deploys 540 NVIDIA Blackwell GB200 NVL4 GPUs on the same network for quantum computing research. In NVIDIA's November 2025 announcement, Matsuoka said the proxy machines would allow RIKEN to "create one of the world's leading unified platforms for AI, quantum and high-performance computing." The systems are live. The question now is what workloads RIKEN actually runs first and whether the named science domains (life sciences, materials, climate, quantum algorithms) produce results or remain aspirational.

The GPU generation for FugakuNEXT itself is not yet named. The spring-2026 proxy machines use GB200 NVL4 Blackwell. NextPlatform's analysis of the RIKEN timeline points to a "Feynman Ultra" or successor generation for the 2029-2030 deployment window based on NVIDIA's published roadmap, but no commitment has been disclosed. The gap between the proxy-machine GPU generation and the production-system GPU generation is a standard procurement pattern in national flagship programs. The codesign software stack must be stable before the final silicon choice is locked.

On January 27, 2026, RIKEN, Argonne National Laboratory, Fujitsu, and NVIDIA signed a memorandum of understanding covering AI and HPC collaboration. The MoU names application porting and joint codesign as objectives. Its substance remains unproven until joint output appears. The key question is whether the collaboration produces shared software stack contributions (schedulers, compilers, middleware, application benchmarks) or remains a procurement-coordination agreement. Japan's MEXT is coordinating with the U.S. Department of Energy's HPC centers on an application performance suite called Benchpark to test supercomputers including FugakuNEXT. Benchpark addresses the right question (application performance on real workloads rather than synthetic benchmarks), but its methodology and workload list are not yet published. The 100x application performance target is currently not falsifiable without a defined baseline measurement and workload suite.

What FugakuNEXT Means for National HPC Procurement Strategy

Matsuoka's application-first philosophy predates FugakuNEXT. The FugakuNEXT announcement is not a retreat from that position. It is a continuation of it, applied to an AI-era architectural landscape where GPU acceleration dominates the training and inference workloads scientific users now demand. The TOP500 benchmark captures Linpack performance, a measure of dense linear algebra throughput. TOP500 now captures only 10-20 percent of global GPU cluster performance, a figure consistent with the growing share of AI training and inference workloads that do not register on Linpack-based rankings. Kathy Yelick, vice chancellor for research at UC Berkeley and keynote speaker at ISC 2024, drew the same boundary explicitly: Top500 is a valuable historical record, but "it is not the thing we should use to drive machine acquisitions -- that's where I think we start running into problems, and certainly, we should not use it to design machines."

For systems architects at national labs and research institutions benchmarking their procurements against Japan's flagship choice, the practitioner-level consequences are concrete. NVLink Fusion chiplets embedded in MONAKA-X CPUs become a published codesign template against which competing platforms such as HPE Cray EX and Eviden BullSequana will be measured. The two proxy machines now live at RIKEN define the software stack (including codesign tools, schedulers, and AI supercomputing convergence layers) that practitioners targeting FugakuNEXT compatibility will need to support by 2030. For research computing directors and procurement officers managing allocations on flagship-class systems, the tradeoff FugakuNEXT makes is the tradeoff their own procurements will face: whether sovereign HPC in 2030 means owning the silicon or owning the integration, the operational footprint, and the science workloads.

China's withdrawn-from-TOP500 systems remain the only published counter-example of an all-domestic flagship stack at the leading edge. Every other exascale system now operational or in procurement (El Capitan, Frontier, Aurora, JUPITER, Alps) made architectural choices that FugakuNEXT now aligns with. The question is whether FugakuNEXT's MONAKA-X + NVLink Fusion pattern becomes a reference architecture that other national procurements explicitly cite or reject. The next major national flagship RFP or award announcement, expected between 2026 and 2028, is where that question gets answered.

What Code Portability from Fugaku to FugakuNEXT Will Cost

RIKEN has not published guidance on code portability from A64FX Fugaku applications to FugakuNEXT's MONAKA-X + GPU stack. A64FX codes optimized for vector CPU execution do not automatically gain GPU acceleration. The porting burden on the existing Fugaku scientific user community is the hidden cost of the architectural transition and the practical test of whether the "used machine" philosophy extends to existing users or applies only to new AI workloads. Fujitsu MONAKA retains Arm architecture and extended SIMD support, which preserves some compatibility with A64FX-optimized code at the CPU level, but the GPU acceleration layer is new. Applications that rely on A64FX vector intrinsics will require refactoring to take advantage of NVIDIA GPUs. RIKEN's handling of this portability question (whether the center provides automated porting tools, manual application engineering support, or expects user teams to manage the transition independently) will define what "access" means in practice when the flagship architecture changes underneath an active user community.

Bottom Line

FugakuNEXT is not a retreat from sovereignty. It is a bet that Fujitsu MONAKA-X entering the global NVIDIA ecosystem is worth more strategically than an isolated all-domestic stack, and that owning the CPU design, the system integration, and the science workloads is sufficient even when the GPU layer depends on U.S. silicon. That bet is contestable. The architectural choice Japan made is the same choice Europe made with JUPITER and Switzerland made with Alps. It is the choice every exascale program except China's domestic stack and Intel's Aurora made. Whether that convergence represents the only viable path to exascale-class application performance in an AI-dominated workload environment, or whether it represents a structural dependency that sovereign programs will come to regret, is the question FugakuNEXT's 2030 operational performance will answer. The falsifiable test is not the TOP500 ranking. It is whether the 100x application performance target is met on the workloads RIKEN's scientists actually run, and whether those workloads include the existing Fugaku user community or leave them behind.

What to Watch

RIKEN proxy machine science results (Q2-Q3 2026). The two proxy systems (1,600 GB200 NVL4 GPUs for AI for science, 540 for quantum computing) are confirmed operational as of spring 2026. Watch for named science workloads beyond LLM training appearing in RIKEN publications. The workload list defines what "used machine" means in practice.

FugakuNEXT detailed design specifications (ISC 2026 or RIKEN announcement, June 2026 - March 2027). The basic design is complete. The detailed design is where MONAKA-X core count, NVLink Fusion bandwidth specifications, per-node GPU count, and memory subsystem architecture are fixed. These parameters are the benchmarks by which other national procurements will measure themselves against FugakuNEXT.

Benchpark application performance suite publication (ISC 2026 or SC 2026). MEXT is coordinating with DOE HPC centers on the Benchpark application performance framework as the evaluation methodology for FugakuNEXT. Watch for whether the benchmark suite's workload list and baseline measurements are published or remain internal. A published Benchpark suite makes the 100x application performance target falsifiable.

MEXT annual budget execution through FY2030. Budget execution is the hard test of political commitment. Watch each year's MEXT budget request and Diet appropriation against the indicative 110-billion-yen path. Any reduction or schedule stretch is a leading indicator that affects 2030 operational readiness.

Argonne-RIKEN codesign deliverables (ISC 2026 or SC 2026). The January 27, 2026 MoU's substance is unproven until joint output appears. Watch for whether the collaboration produces named software stack contributions, joint application porting results, or jointly authored architecture papers. The key question is whether the partnership produces shared codesign tools or remains a coordination agreement.

Next national flagship procurement cites or rejects FugakuNEXT pattern (2026-2028). FugakuNEXT's design choices will either become a template or stand as a Japan-specific path. Watch for whether the next major national HPC RFP or award announcement explicitly references the MONAKA-X + NVLink Fusion architecture as a model or articulates a different approach.

RIKEN code portability guidance for A64FX-to-FugakuNEXT migration (FY2026-FY2027). RIKEN has not published guidance on porting A64FX Fugaku applications to FugakuNEXT's MONAKA-X + GPU stack. A64FX codes optimized for vector CPU execution do not automatically gain GPU acceleration. Watch for any RIKEN user-community announcement, application readiness guide, or porting tool release. The handling of this portability question is the practical test of whether the "used machine" philosophy extends to the existing user base.

Exascale Computing · Top500 · NVIDIA · National Labs & Government · AI-HPC Convergence
AI disclosure
AI-assisted research and first draft. This article has been verified by a human editor.
Related reading
  • HPC · News: Argonne Turns a Plain-English Prompt Into 11,182 GCMC Runs on Aurora
  • HPC · Analysis: Slingshot Held Performance Under AI Traffic Patterns That Collapsed InfiniBand by 5x on Production Exascale
  • HPC · News: Sweden's Mimer Buys an AI Services Layer, Not a New Flagship