
Abilene, Texas – Stargate, the monumental AI infrastructure initiative led by OpenAI, Oracle, and SoftBank, is set to deploy computing power at an unprecedented scale at its flagship facility in Abilene, Texas. The development positions Stargate well beyond today's top-tier supercomputers, a gap highlighted by a recent social media post from "Lisan al Gaib" that called a leading European system a "joke" by comparison. The Abilene site is projected to reach up to 16 zettaFLOPS of peak AI performance, driven by hundreds of thousands of next-generation Nvidia superchips.

The Abilene data center, a cornerstone of the $500 billion Stargate venture, is slated to house more than 450,000 Nvidia GB200 Grace Blackwell Superchips. Each GB200 pairs a Grace CPU with two B200 GPUs, representing a substantial leap in AI processing capability. Oracle, a key partner, confirmed that its "Zettascale10" platform underpinning the Abilene site will deliver the full 16 zettaFLOPS, positioning it among the world's most powerful AI supercomputers.

By contrast, Europe's JUPITER Booster supercomputer, located at Forschungszentrum Jülich in Germany, uses approximately 24,000 Nvidia GH200 Grace Hopper Superchips. JUPITER, operational since June 2025 and recognized as Europe's fastest exascale system, delivers 1 exaFLOP/s in FP64 precision and up to 80 exaFLOP/s for AI workloads in FP8 precision. The tweet referenced this directly, stating that "the JUPITER Booster has 24k GH200 GPUs" and claiming that "the Abilene Stargate will have 20x as many next-gen GPUs," a figure closely in line with the reported 450,000 GB200 Superchips.

The performance metrics underscore a new era in AI computing. While JUPITER achieves 1 exaFLOP/s (one quintillion floating-point operations per second) in FP64, the Abilene Stargate is expected to exceed "8 ExaFlops FP64" and reach "like 2 ZettaFlops FP8," as noted in the tweet.
A zettaFLOP is a thousand exaFLOPS, illustrating the immense scale of Stargate's projected capabilities, particularly in lower-precision formats such as FP8 and FP4, which are central to training large language models.

The deployment of such massive AI infrastructure reflects the escalating global demand for computational power to develop and train increasingly complex artificial intelligence models. This strategic investment by OpenAI, Oracle, and SoftBank aims to secure leadership in the rapidly evolving AI landscape, with the Abilene facility serving as a critical hub for future AI breakthroughs.
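For readers checking the numbers, the comparison can be reduced to simple arithmetic. The short Python sketch below uses only the figures reported above (reported and projected values, not independent measurements) to compute the chip-count and peak-throughput ratios between the two systems:

```python
# Back-of-the-envelope comparison using the figures cited in this article.
# All inputs are reported/projected values, not benchmarked results.

EXA = 10**18      # 1 exaFLOP/s
ZETTA = 10**21    # 1 zettaFLOP/s = 1,000 exaFLOP/s

jupiter_chips = 24_000        # GH200 Grace Hopper Superchips
jupiter_fp8 = 80 * EXA        # up to 80 exaFLOP/s for AI tasks (FP8)

abilene_chips = 450_000       # projected GB200 Grace Blackwell Superchips
abilene_peak = 16 * ZETTA     # Oracle's quoted 16 zettaFLOPS peak AI performance

chip_ratio = abilene_chips / jupiter_chips
peak_ratio = abilene_peak / jupiter_fp8

print(f"Chip-count ratio: {chip_ratio:.2f}x")   # ~18.75x, roughly the tweet's "20x"
print(f"Peak FP8 throughput ratio: {peak_ratio:.0f}x")  # 16 ZF / 80 EF = 200x
```

The 18.75x chip ratio matches the tweet's rounded "20x" claim, while the larger 200x throughput gap reflects both the chip count and the per-chip generational jump from GH200 to GB200.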