Prominent scientist Stephen Wolfram has articulated a compelling argument against the prevailing fears of artificial intelligence (AI) leading to humanity's demise, drawing parallels between advanced AI systems and the inherent complexities of the natural world. His perspective offers a counter-narrative to common AI doom scenarios, emphasizing humanity's historical ability to coexist with and leverage systems far more complex than itself.
Wolfram's argument centers on the concept of "computational irreducibility," a principle he developed that suggests the behavior of certain complex systems cannot be predicted more efficiently than by running the system itself. This implies that even superintelligent AI, while capable of advanced computation, will encounter inherent limits to predictability and control. According to Wolfram, this is not a novel challenge for humanity.
"what does it mean for us if AI becomes smarter than humans, if we are no longer the apex intelligence? if we live in a world where there are lots of things taking place that are smarter than we are -- in some definition of smartness. at one point you realize the natural world is already an example of this. the natural world is full of computations that go far beyond what our brains are capable of, and yet we find a way to coexist with it contently."
He elaborates on this coexistence: "It doesn’t matter that it rains, because we build houses that shelter us. It doesn’t matter we can’t go to the bottom of the ocean, because we build special technology that lets us go there. These are the pockets of computational reducibility that allow us to find shortcuts to live." The analogy suggests that humanity can similarly navigate and utilize advanced AI by identifying "pockets of computational reducibility": areas where predictable and useful outcomes can be extracted despite overall unpredictability.
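Wolfram's own go-to illustration of computational irreducibility is the Rule 30 cellular automaton: a trivially simple update rule whose long-run pattern, as far as anyone knows, can only be determined by running every step. The short Python sketch below is not from Wolfram's remarks; it is just a minimal illustration of the concept, evolving Rule 30 from a single live cell. Predicting row n without computing rows 1 through n-1 is exactly the kind of shortcut that computational irreducibility rules out.

```python
# Minimal sketch (not Wolfram's code): Rule 30, the one-dimensional cellular
# automaton he often cites as the canonical computationally irreducible system.
# No known shortcut predicts the pattern at step n; you have to run all n steps.

RULE_30 = {  # new cell value for each (left, center, right) neighborhood
    (1, 1, 1): 0, (1, 1, 0): 0, (1, 0, 1): 0, (1, 0, 0): 1,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(cells):
    """Apply Rule 30 once; cells beyond the current edges are treated as 0."""
    padded = [0, 0] + cells + [0, 0]
    return [RULE_30[(padded[i - 1], padded[i], padded[i + 1])]
            for i in range(1, len(padded) - 1)]

def run(n_steps):
    """Evolve a single live cell, printing each generation as # (1) and . (0)."""
    cells = [1]
    width = 2 * n_steps + 1
    for _ in range(n_steps):
        print("".join("#" if c else "." for c in cells).center(width))
        cells = step(cells)

if __name__ == "__main__":
    run(16)
```

In Wolfram's framing, the "pockets of computational reducibility" are the places where shortcuts do exist: regularities we can describe and exploit without rerunning the whole computation, which is how he suggests humans will carve out a workable relationship with AI.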
This viewpoint contrasts sharply with the concerns raised by AI safety advocates such as Eliezer Yudkowsky, who warn of existential risks from unaligned superintelligence. Yudkowsky's "rocket alignment problem" analogy, for instance, likens aligning AI with human values to aiming a rocket at the Moon without an adequate theory of rocketry: even small errors in a sufficiently powerful system could lead to catastrophic outcomes. Wolfram, by contrast, maintains that the inherent unpredictability of complex systems, whether natural or artificial, makes absolute control or perfect alignment an unrealistic expectation.
Wolfram believes that while AI will automate many tasks, humans will retain the crucial role of defining purposes and goals. AI systems left to their own devices, he notes, might explore directions that are irrelevant to, or even at odds with, human values. Despite these challenges, Wolfram remains optimistic about humanity's future, arguing that our species has consistently adapted to and thrived amid complex, unpredictable environments, and will continue to do so in an increasingly AI-driven world.