Guillaume Verdon, founder of Extropic and a former quantum computing engineer at Google, recently highlighted the escalating energy demands of artificial intelligence, saying that his past warnings to venture capitalists about the "nearly infinite" market pull for AI and impending power constraints are now being validated. "3 years ago I would warn VCs that the market pull for AI is nearly infinite and we would run out of power. They would laugh. No one is laughing anymore. The need for more power-efficient compute has now become abundantly clear," Verdon wrote on social media. The statement underscores a growing concern within the tech industry about the sustainability and infrastructure requirements of advanced AI.
Verdon's company, Extropic, is developing thermodynamic computing hardware, a novel approach to AI chips designed to be far more energy-efficient. His work aims to create a processor more energy-efficient than the human brain, addressing the critical need for sustainable compute power as AI models continue to scale. The approach embeds AI algorithms directly into the physics of computing, pushing the limits of density, spatial efficiency, and speed.
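As a rough illustration of the kind of workload such hardware generally targets, the short Python sketch below samples from a small energy-based (Ising-style) model using Gibbs updates in ordinary software. The model size, couplings, and parameters are all hypothetical, and a thermodynamic processor would perform this sort of sampling physically rather than in code; the sketch only shows the computation being accelerated, not Extropic's actual method.

```python
import numpy as np

# Purely illustrative software sketch of a probabilistic sampling workload:
# drawing samples from a small energy-based (Ising-style) model with Gibbs
# updates. All sizes and parameters below are hypothetical.

rng = np.random.default_rng(0)
n = 16                                   # number of binary units (spins)
J = rng.normal(scale=0.5, size=(n, n))   # random couplings
J = (J + J.T) / 2                        # make couplings symmetric
np.fill_diagonal(J, 0.0)                 # no self-coupling
h = rng.normal(scale=0.1, size=n)        # per-unit biases

s = rng.choice([-1.0, 1.0], size=n)      # random initial state


def gibbs_sweep(state):
    """One full Gibbs sweep: resample each spin from its conditional."""
    for i in range(len(state)):
        local_field = J[i] @ state + h[i]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * local_field))  # P(s_i = +1 | rest)
        state[i] = 1.0 if rng.random() < p_up else -1.0
    return state


for _ in range(1000):                    # mixing / burn-in sweeps
    s = gibbs_sweep(s)

energy = -0.5 * s @ J @ s - h @ s        # energy of the final sample
print(f"Sampled state energy: {energy:.3f}")
```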
The concerns raised by Verdon are echoed by recent industry projections. The International Energy Agency (IEA) forecasts that global electricity demand from data centers, driven heavily by AI, will more than double by 2030, reaching approximately 945 terawatt-hours (TWh). This surge, which could account for nearly half of all data center electricity consumption worldwide, places significant strain on existing power grids and raises environmental concerns. In the United States alone, data center electricity usage is projected to rise to between 325 and 580 TWh by 2028, or 6.7% to 12% of the nation's total electricity generation.
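Taken at face value, the cited figures allow a quick back-of-the-envelope check, sketched below in Python using only the article's numbers: the US shares imply a total generation base of roughly 4,800 TWh, and the 2028 US range would amount to roughly a third to two thirds of the 2030 global data-center total. The years differ, so the comparison is indicative only.

```python
# Sanity check of the cited projections; all inputs are the article's figures.
global_dc_2030_twh = 945                     # IEA global data-center projection for 2030
us_dc_low_twh, us_dc_high_twh = 325, 580     # projected US data-center demand by 2028
us_share_low, us_share_high = 0.067, 0.12    # cited share of US total generation

# Total US generation implied by each (demand, share) pair
implied_total_low = us_dc_low_twh / us_share_low      # ~4,851 TWh
implied_total_high = us_dc_high_twh / us_share_high   # ~4,833 TWh
print(f"Implied US total generation: "
      f"{implied_total_low:,.0f}-{implied_total_high:,.0f} TWh")

# US 2028 range compared with the global 2030 projection (different years,
# so this is only a rough scale comparison)
print(f"US range vs. global 2030 total: "
      f"{us_dc_low_twh / global_dc_2030_twh:.0%}-"
      f"{us_dc_high_twh / global_dc_2030_twh:.0%}")
```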
In response to these escalating demands, major technology companies are intensifying their efforts to develop more energy-efficient AI hardware. Nvidia, a leading GPU manufacturer, claims its new 'superchip' can deliver a 30-fold performance improvement for generative AI services while using 25 times less energy. Intel's Gaudi 3 AI chip is likewise marketed on superior power efficiency over competing accelerators. The industry-wide push for innovations such as thermodynamic computing, analog in-memory computing, and advanced chip designs reflects a collective recognition that the future of AI hinges on overcoming these computational and energy hurdles.
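Vendor multipliers like these depend on what is held fixed. The brief sketch below works through two possible readings of the quoted figures, purely to illustrate how widely the implied energy-per-inference improvement can vary; the inputs are the quoted marketing numbers, not measurements, and neither reading is attributed to the vendor.

```python
# Two readings of the quoted vendor figures: 30x performance, "25x less energy".
perf_gain = 30      # throughput multiplier vs. the prior generation
energy_factor = 25  # quoted energy reduction factor

# Reading A: the 25x already refers to energy per unit of work (per inference).
energy_per_work_a = 1 / energy_factor                  # 4% of baseline per unit of work

# Reading B: 25x less power per chip while delivering 30x the throughput,
# so energy per unit of work falls by the product of the two multipliers.
energy_per_work_b = 1 / (perf_gain * energy_factor)    # ~0.13% of baseline

print(f"Reading A: {energy_per_work_a:.1%} of baseline energy per unit of work")
print(f"Reading B: {energy_per_work_b:.2%} of baseline energy per unit of work")
```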