
VAST Data's $30B mark is a bet on the middle layer of AI, not storage
VAST Data closed a $1B Series F at a $30B post-money, 3.3x its 2023 mark and roughly 1.7x Everpure's public cap. Here's what the math and the customer list actually signal.
Tracking the infrastructure behind foundation models, inference, and agents.

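As a sanity check on the lead item's figures: the stated multiples imply a 2023 valuation and a public-comparable market cap. A quick back-of-the-envelope sketch, using only the numbers in the headline (the implied values are derived, not disclosed terms):

```python
# Back-of-the-envelope check of the VAST Data Series F headline numbers.
# Inputs come straight from the teaser; implied values are inferences.
post_money_b = 30.0       # $30B post-money valuation
raise_b = 1.0             # $1B Series F
multiple_vs_2023 = 3.3    # "3.3x its 2023 mark"
multiple_vs_public = 1.7  # "roughly 1.7x" the public comparable's cap

implied_2023_mark_b = post_money_b / multiple_vs_2023    # ~$9.1B
implied_public_cap_b = post_money_b / multiple_vs_public # ~$17.6B
dilution_pct = 100 * raise_b / post_money_b              # ~3.3% of the company sold

print(f"Implied 2023 mark:  ${implied_2023_mark_b:.1f}B")
print(f"Implied public cap: ${implied_public_cap_b:.1f}B")
print(f"Round dilution:     {dilution_pct:.1f}%")
```

The dilution figure is the notable one: at a $30B post-money, a $1B raise sells barely 3% of the company.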

Samsung, SK hynix, and Micron converged on SOCAMM2 mass production within six weeks for NVIDIA's Vera Rubin. Korean suppliers now control both memory tiers.

HBM scarcity has moved beyond semiconductor supply into system planning. Accelerator availability, server bill-of-materials, cluster economics, and 2026 data center buildouts are all being rewritten around memory, not compute.

Broadcom will supply Anthropic with 3.5 GW of Google TPU capacity through 2031; ~23-35x the power of DOE's largest planned science supercomputer.
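The "~23-35x" multiple implies a power figure for the DOE machine; a quick derivation from the headline numbers alone (the implied wattage is an inference, not a stated spec):

```python
# Derive the implied power draw of DOE's largest planned science
# supercomputer from the teaser's "3.5 GW is ~23-35x" comparison.
anthropic_gw = 3.5
low_mult, high_mult = 23, 35

# Dividing by the larger multiple gives the low end of the implied range.
implied_doe_mw_low = anthropic_gw * 1000 / high_mult   # 100 MW
implied_doe_mw_high = anthropic_gw * 1000 / low_mult   # ~152 MW

print(f"Implied DOE machine power: "
      f"{implied_doe_mw_low:.0f}-{implied_doe_mw_high:.0f} MW")
```

In other words, the comparison pegs the DOE flagship at roughly 100-150 MW, while a single private AI contract books gigawatts.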

Two new expert-parallel efforts point to different futures for MoE systems: one built for heterogeneous fleets, the other folded into NVIDIA’s stack.
NVIDIA's $4B investment in Lumentum and Coherent signals that indium phosphide scarcity and power-equipment lead times are gating a $2.52T AI spending forecast.

IBM's Arm collaboration introduces Telum II and Spyre for enterprise AI, but lacks benchmarks, named customers, and CUDA compatibility disclosure.

NVIDIA projects $1T in Blackwell orders through 2027, but 72% of operators cite grid capacity as the primary constraint. Power now limits AI.

AI inference costs have fallen 1,000x, yet agentic workloads still cost hundreds of dollars a day. Anthropic blocking OpenClaw from subscription plans shows that consumer pricing cannot absorb real infrastructure economics.

FlatAttention claims 4× speedup over FlashAttention-3 on unnamed tile-based accelerators. No code, no hardware vendor, no deployment path yet.

d-Matrix acquired GigaIO's data center business, gaining FabreX PCIe memory fabric and SuperNODE rack-scale technology to build a vertically integrated AI inference platform around its Corsair accelerator.