
Supercomputing Is the Category

HPC built this community. It's time to reclaim the word that names what the field has become.

When the U.S. Department of Energy wanted to describe the most ambitious computing initiative in a generation, the Genesis Mission, it didn't say "HPC systems." It said "supercomputers." Every press release, every executive statement: "an integrated platform connecting the world's leading supercomputers, experimental facilities, AI systems, and unique scientific data sets." Michael Kratsios, speaking at CES 2026, talked about "bringing together the unmatched power of national labs, supercomputers, and innovative minds."

When the Bureau of Industry and Security restricted exports of advanced chips to China, the Federal Register category was "Advanced Computing/Supercomputing." Not "HPC end-uses." When congresspeople argue for investment in computing, "we need a supercomputer" lands. "We need HPC resources" gets blank stares.

The field's most powerful institutions, its government sponsors, and the general public all use "supercomputing" as the category name. So why does the industry itself insist on "HPC"?

This isn't an attack on HPC. HPC built this field. But the field has grown, and a discipline name can no longer do the work of a category name. Supercomputing is the category. HPC is the founding discipline within it, and it deserves credit for building what the field has become.

Three Decades of "Supercomputing" (1964–1994)

Seymour Cray didn't build "high performance computers." He built supercomputers. The CDC 6600 in 1964 was so dominant that IBM's Thomas Watson Jr. reportedly asked how "only 34 people including the janitor" could build a machine faster than anything IBM had. The Cray-1 in 1976 became the image of supercomputing for a generation. The Cray-2 broke the GFLOPS barrier in 1985. For thirty years, nobody debated terminology. "Supercomputing" was the category and the identity.

The word worked because it was intuitive and instantly understood by everyone who encountered it. Scientists, policymakers, journalists, students. It conveyed ambition and national purpose. Rob Knoth of Cadence put it well: "The word 'supercomputer' is a more important word for science communication than it is for science. It carries power with it... Supercomputers are what help inspire the next generation of engineers."

The institutions founded during this era understood this instinctively. The National Center for Supercomputing Applications. The Barcelona Supercomputing Center. The Leibniz Supercomputing Centre. The conference was called Supercomputing. The TOP500 ranked supercomputers. Nobody called it the "HPC list." The word carried weight because it named the thing itself. Not a technique, not a methodology, but the ambition of solving humanity's hardest problems with the most powerful computing available.

How a Discipline Name Replaced a Category Name (1994–2005)

In 1994, Thomas Sterling and Donald Becker at NASA Goddard built the first Beowulf cluster: 16 commodity Linux PCs networked together for parallel processing. It was a watershed. You didn't need a multi-million-dollar Cray to do serious computation. Commodity hardware plus clever software could get you there for certain workloads. Within a decade, commodity clusters came to dominate the TOP500 list, displacing the proprietary vector machines and MPPs that preceded them.

The architecture shifted from monolithic vector processors to massively parallel processing and then to clusters of commodity hardware. "Supercomputer" implied a single, exotic machine. "HPC" described the new reality: high performance achieved through networked commodity hardware. Costs dropped, more organizations could participate, vendors wanted enterprise sales. "HPC" sounded like a market. "Supercomputing" sounded like a club.

The conferences followed. SC97 was the first year without "Supercomputing" in the formal title, rebranded to "SC '97: High Performance Networking and Computing." The organizers explained: "This change reflects our growing attention to networking, distributed computing, data-intensive applications, and other emerging technologies that push the frontiers of communications and computing." By 2006, the full name was "International Conference for High Performance Computing, Networking, Storage and Analysis." ISC followed in 2015, rebranding from the "International Supercomputing Conference" to "ISC High Performance."

The shift made sense. When the discipline of parallel simulation on commodity clusters was the whole category, a discipline name could do the work of a category name. The problem is that it stopped working when the category grew new disciplines.

Norris Parker Smith, writing in HPCwire in February 1997, saw it happening. "Like Lewis Carroll's Cheshire Cat," he wrote, "'supercomputing' has faded steadily away until only the smile, nose, and whiskers remain." His warning: "An enormous range of ordinary people had some idea, however vague, what 'supercomputing' meant. No-caf, lo-cal alternatives like 'SC' and 'HPC' lack this authority."

Smith identified the hierarchy problem at the moment it happened. A category name was being replaced by a discipline name. The shift was right for its era. But some would argue the field never really overcame the confusion that came with the replacement, and the tension he named has only grown as the field added disciplines under the supercomputing umbrella.

The Tent Got Bigger. The Label Didn't.

AI training at scale walked in the door. Then quantum computing. Then scientific foundation models. Then large-scale data analytics. None of these are HPC. All of them are supercomputing.

Consider what the field actually contains now. HPC handles classical simulation: CFD, molecular dynamics, weather modeling, nuclear stockpile stewardship. AI/ML at scale covers LLM training, scientific foundation models, inference at industrial scale. Quantum computing addresses electron-structure problems, combinatorial optimization, cryptography. Large-scale data analytics runs genomics pipelines, particle physics data, intelligence workloads. And hybrid AI+simulation work, from surrogate models to AI-guided mesh refinement and digital twins, is growing fast.

All supercomputing. Only some of it is HPC.

The field's most senior voices have said this explicitly. Jack Dongarra, Daniel Reed, and Dennis Gannon published "Ride the Wave, Build the Future" in February 2026. Their first maxim: "HPC is now synonymous with integrated numerical modeling and generative AI." They describe AI and simulation as "peer processes" within scientific computing. Their verdict: "The old model of HPC as a dominant, self-directed driver of advanced hardware and software has ended."

That's the hierarchy argument, made by the people who built the field. HPC is a discipline within a larger enterprise.

The implications go deeper than scope. The "P" in HPC has been anchored to one thing for over three decades: LINPACK FLOPS, the floating-point throughput a system sustains on a dense linear solve. That's how the TOP500 has ranked supercomputers since 1993. That's what "high performance" has meant in practice.

In the same paper, Dongarra, Reed, and Gannon propose replacing that framework entirely. Their second maxim: "Energy and data movement, not floating point operations, are the scarce resources." They argue that peak FLOPS are "no longer sufficient" and propose "joules per trusted solution" as the primary measure of system value: the total energy cost of producing a scientifically meaningful, validated answer. "Performance metrics that ignore power and communication costs," they write, "encourage architectures that look impressive on paper but are increasingly impractical to operate at scale."

When the creators of HPC's defining benchmark say that benchmark is insufficient, the word "performance" in HPC no longer means what it used to. The field isn't just outgrowing HPC's scope. It's outgrowing the definition of its core word. "Supercomputing" describes ambition and outcomes rather than a specific measure of speed. It absorbs this redefinition naturally. "HPC" cannot, because the metric shift hollows out the very word it's built on.

Simon Rance of Keysight Technologies made the point casually: "There's the evolution of supercomputing, in general. But then you have quantum, as well, and quantum is starting to really gain momentum." He uses "supercomputing" as the category that contains both classical HPC evolution and quantum computing. Unselfconsciously, because the hierarchy is obvious when you stop overthinking the labels.

The Trillion Parameter Consortium makes the same point through its structure. Founded by institutions that carry "Supercomputing" in their names — NCSA, BSC, Argonne — TPC treats AI and HPC as sibling disciplines: "TPC brings together researchers working across the fields of Artificial Intelligence and High Performance Computing." It grew from 150 to over 1,400 participants in eighteen months. Its work, training scientific foundation models on exascale platforms, is neither pure AI nor pure HPC. It's supercomputing.

AI investment is also pouring resources into exactly the hardware that all supercomputing disciplines need: faster interconnects, higher bandwidth memory, advanced packaging, better thermal solutions. As Paul Hylander, chief architect at Eliyan, noted, the massive expenditures going toward AI computing have created "a renewed emphasis on higher-bandwidth memories, higher-bandwidth networking, and better thermal solutions." The technology roadmap is converging. That's what happens when multiple disciplines share a category. They lift each other.

The Field Already Knows This

The evidence isn't scattered. It's everywhere, and it points in one direction.

In October 2025, insideHPC merged its sister publication insideAI News back into the flagship. Publisher Stephanie Correra explained: "Increasingly, those who want to stay on top of AI need to stay on top of developments in HPC-class technologies, and likewise, the HPC community is increasingly interested in how AI is helping drive technical computing forward. The two are joined at the hip." The publication described its mission as covering "all the elements that make supercomputing, broadly defined, happen and evolve."

An HPC publication, with "HPC" right there in the name, defining its scope as "supercomputing, broadly defined." The hierarchy is right there.

HPCwire tells the same story from a different angle. The publication started as Supercomputing Review, serving as the conference guide for the first Supercomputing Conference in 1988. It was renamed to HPCwire as the field's terminology shifted. Its parent, Tabor Communications, expanded to launch AIwire, BigDatawire, and QCwire — separate outlets tracking disciplines that had grown big enough for dedicated coverage.

Then in October 2025, Tabor consolidated all four publications under hpcwire.com. The separate sites collapsed into one. The editorial logic: these disciplines had converged enough that separate outlets no longer made sense. Tabor's trajectory mirrors the field's. HPC started as the whole story, then AI, data analytics, and quantum grew into peer disciplines, and now they've reconverged — under a domain that still carries the HPC name but covers the full supercomputing category.

SC25 programmed AI, Machine Learning & Deep Learning as a major session track. Quantum & Post-Moore Computing had its own dedicated track. A plenary session asked "Why Should I Care About Quantum Computing?" The 2026 theme, "HPC Unites," is about uniting communities that extend across the full supercomputing category. And the conference website? Still supercomputing.org. Twenty-nine years after the rebrand. The URL tells the truth.

Deloitte frames the hierarchy explicitly: "Supercomputing is the use of a computer to solve a problem that requires a lot of computational power... high-performance computing is the integration of all the knowledge and tools necessary to construct parallel supercomputers." HPC as the engineering discipline that builds supercomputers — a subset of the larger supercomputing category. Their use cases span AI/ML, cybersecurity, financial modeling, healthcare, and space exploration, all under the "supercomputing" umbrella.

The naming tensions show up globally. The EuroHPC Joint Undertaking uses "HPC" in its name but funds supercomputers: LUMI, Leonardo, MareNostrum. Japan's Fugaku program uses "supercomputer." China frames its ambitions around supercomputers — Tianhe, Sunway. The discipline name is in the institutional branding. The category name is what they actually build and fund.

Steven Woo of Rambus asked the convergence question directly: "Does there need to be a separate class of machines that only serve the supercomputing market? And, at the same time, does AI become so fundamental that these two are merging together?" The answer, increasingly, is that they were never truly separate. They're different disciplines within the same category.

Naming the Hierarchy

Let me state it plainly. Supercomputing is the category. HPC is a discipline within it.

So is AI at scale. So is quantum computing. So is large-scale data analytics. The category holds them all. It always has.

This isn't a demotion of HPC. It's a promotion of the whole field. HPC is the founding discipline — the original pillar of the supercomputing cathedral. Naming the cathedral doesn't diminish the pillar. It contextualizes it.

Andrew Jones, formerly VP of HPC Consulting at NAG and a regular voice in the TOP500 community, draws a useful distinction. HPC describes a "super" step change in capability for that specific user — moving from a desktop to a GPU cluster, say. Supercomputing describes a step change recognizable for most users in the discipline — the field-level concept. One is the personal experience. The other is the category. One sits inside the other.

The hierarchy already exists in practice. Government policy uses "supercomputing" as the category. Publications define their scope as "supercomputing, broadly defined." Conferences program the full category under titles that still say "HPC." Institutions chose "supercomputing" for their names. The field's most senior researchers describe HPC and AI as "peer processes" within a larger enterprise.

The only thing missing is the explicit naming.

What This Means

For vendors: "supercomputing" communicates the full scope of what you sell to every audience that matters — investors, customers, policymakers, press. "HPC" communicates to practitioners who already know. Both have value. But when you need a category name that opens doors, "supercomputing" is the one that works in every room.

For advocates: when you go to Congress, say "supercomputer." They already do. When the Genesis Mission describes its vision, when the White House issues executive orders, when the BIS writes regulations, the word is always "supercomputer." The policy world chose the category name. The industry should meet them there.

For conferences: SC's 2026 theme is "HPC Unites." It works. But "Supercomputing Unites" would work better, because it names what the conference actually unites: AI researchers, quantum physicists, data scientists, simulation engineers, and the infrastructure builders who serve them all. That's not an HPC tent. That's a supercomputing tent. Maybe it's time to reconsider the framing for SC27?

For the field: using "supercomputing" as the category name gives HPC its proper place. Not diminished, but contextualized as the founding discipline of a field that now includes multiple ways of attacking the hardest problems. HPC built the cathedral. The cathedral now has more than one pillar. The name on the door should describe the whole building.

Whatever comes next — quantum-classical hybrids, neuromorphic architectures, things we haven't imagined — "supercomputing" will still be the right word. It describes the ambition, not the architecture. It names the mission, not the method. It has always been the category name.

It's time we used it that way.

Matt Walters is CEO of OmniScale Media, a strategic communications firm serving the advanced computing ecosystem. He has covered supercomputing and HPC for over 15 years, including tenure at Tabor Communications, the parent company of HPCwire.
