London, UK – Prominent AI researcher and philosopher Richard Ngo has sparked discussion with a recent social media post asserting that contemporary fields, particularly artificial intelligence, are still recovering from what he terms John von Neumann's "biggest mistakes." Ngo, known for his work at OpenAI and DeepMind, highlighted three areas: von Neumann's foundational work in game theory, his utility theory, and his Cold War-era political advocacy.
In a tweet that has garnered attention, Ngo stated:
"I’d even go further—I think we’re still recovering from Von Neumann’s biggest mistakes:
- Implicitly basing game theory on causal decision theory
- Founding utility theory on the independence axiom
- Advocating for nuking the USSR as soon as possible"
Ngo’s critique concerns the theoretical underpinnings of decision-making and rationality, areas highly pertinent to the development of advanced AI systems. His first point challenges game theory's implicit reliance on causal decision theory, a framework in which an agent evaluates each action solely by the outcomes it would causally bring about, disregarding what the choice itself reveals about the world. This has implications for AI alignment, where the decision theory an agent embodies shapes how it reasons about predictors, precommitments, and other agents that model its behavior.
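Newcomb's problem is the classic case where causal decision theory diverges from its main rival, evidential decision theory. The Python sketch below illustrates the standard textbook setup (it is not drawn from Ngo's post, and the 99% predictor accuracy is an illustrative assumption): a predictor fills an opaque box with $1M only if it predicted the agent would take that box alone, while a transparent box always holds $1,000.

```python
BONUS = 1_000       # transparent box, always present
PRIZE = 1_000_000   # opaque box, filled iff one-boxing was predicted
ACCURACY = 0.99     # assumed predictor accuracy (illustrative)

def cdt_expected_payoffs(p_predicted_one_box):
    """CDT: the boxes are already filled, so the prediction is causally
    independent of the act; two-boxing dominates for every belief p."""
    one_box = p_predicted_one_box * PRIZE
    two_box = p_predicted_one_box * PRIZE + BONUS
    return one_box, two_box

def edt_expected_payoffs():
    """EDT: condition on the act itself; choosing one box is strong
    evidence that the predictor foresaw one-boxing."""
    one_box = ACCURACY * PRIZE
    two_box = (1 - ACCURACY) * PRIZE + BONUS
    return one_box, two_box

print(cdt_expected_payoffs(0.5))  # (500000.0, 501000.0): CDT two-boxes
print(edt_expected_payoffs())     # (990000.0, 11000.0):  EDT one-boxes
```

Because CDT holds the box contents fixed, two-boxing dominates under any belief about the prediction, yet agents who one-box systematically walk away richer; this kind of divergence is one reason decision theory remains a live topic in AI alignment.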
The second point targets the independence axiom, a cornerstone of von Neumann and Oskar Morgenstern's expected utility theory. The axiom holds that if an agent prefers lottery A to lottery B, it must also prefer a probabilistic mixture of A with any third lottery C to the same mixture of B with C: mixing a common component into both options should never reverse the preference. Critics argue the axiom fails to capture observed human choices, most famously in the Allais paradox, and that AI systems designed to satisfy it strictly could exhibit behaviors misaligned with the humans they serve.
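The Allais paradox is the standard illustration. In the sketch below (a minimal Python rendering of the textbook setup, not code from Ngo or the original theory), the two choice pairs differ only by an 89% common component, so the independence axiom forces any expected-utility maximizer to rank both pairs the same way, whereas many people report preferring 1A and 2B:

```python
import math

# Lotteries are lists of (probability, payoff) pairs.
def expected_utility(lottery, u):
    return sum(p * u(x) for p, x in lottery)

# First choice pair.
g1a = [(1.00, 1_000_000)]                                # $1M for sure
g1b = [(0.10, 5_000_000), (0.89, 1_000_000), (0.01, 0)]

# Second pair: the same gambles with the common 89% stake of $1M
# replaced by an 89% chance of $0.
g2a = [(0.11, 1_000_000), (0.89, 0)]
g2b = [(0.10, 5_000_000), (0.90, 0)]

# The 89% common component cancels in both comparisons:
#   EU(g1a) - EU(g1b) = 0.11*u($1M) - 0.10*u($5M) - 0.01*u($0)
#                     = EU(g2a) - EU(g2b)
# so independence forces the same ranking in both pairs, whatever u is.
def prefers_first(a, b, u):
    return expected_utility(a, u) > expected_utility(b, u)

for name, u in [("linear", lambda x: x),
                ("log",    lambda x: math.log(1 + x)),
                ("sqrt",   lambda x: x ** 0.5)]:
    print(name, prefers_first(g1a, g1b, u), prefers_first(g2a, g2b, u))
# Each row prints two identical booleans; the commonly reported human
# pattern (prefer g1a AND g2b) is impossible for any expected-utility agent.
```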
Finally, Ngo pointed to von Neumann's controversial Cold War advocacy for a preemptive nuclear strike on the Soviet Union. This historical stance, while seemingly unrelated to technical theory, reflects a decision-making philosophy that some modern thinkers, including those in AI safety, view with concern. It underscores the broader ethical and strategic considerations that arise when powerful agents, whether human or artificial, are entrusted with high-stakes decisions.
Richard Ngo, an independent AI researcher and philosopher, previously contributed to the governance team at OpenAI and the AGI safety team at DeepMind. His work often bridges high-level philosophical arguments with concrete machine learning concepts, aiming to clarify the complexities of AI alignment. His recent comments on von Neumann resonate within the AI safety community, which frequently grapples with the philosophical and practical challenges of designing intelligent systems that align with human values and intentions. The discussion initiated by Ngo highlights the ongoing re-evaluation of foundational theories in the context of rapidly advancing AI capabilities.