
Steven Adler, a prominent voice in the artificial intelligence community, has publicly challenged OpenAI to commit to a policy of not building superintelligent AI until it can be "robustly aligned and controlled." Adler's call, made via a tweet, highlights a critical distinction between developing and deploying advanced AI systems, urging a more proactive approach to safety.
"It’s genuinely good that OpenAI considers this obvious, but I worry that the word “deploy” is doing a ton of work. @OpenAI, I’d love if you’d commit to not building superintelligence before you can “robustly align and control” it," Adler stated in his tweet.
OpenAI has previously acknowledged the immense risks associated with superintelligence, which it defines as AI vastly smarter than humans, and has dedicated significant resources to its "Superalignment" initiative. The company predicts that superintelligence could arrive this decade and has pledged 20% of the compute it has secured to date toward solving the core alignment problem within four years. The effort, co-led by Chief Scientist Ilya Sutskever and Jan Leike, aims to develop methods to steer and control AI systems far more capable than their creators.
The distinction between "building" and "deploying" is central to Adler's concern: if alignment is only required before public release, a misaligned superintelligent system could already exist inside a lab. Adler's position is that the alignment challenge should be addressed at the earliest stages of development, not just before release. While OpenAI's Superalignment team focuses on ensuring AI systems act in accordance with human values and goals, the precise timing of such commitments remains a point of debate among AI safety researchers. The company's official stance emphasizes the need to "robustly align and control" superintelligent systems before deployment.
The broader AI community continues to grapple with the ethical and safety implications of increasingly powerful AI. Adler's tweet underscores the growing demand for clear, specific commitments from leading AI developers, particularly as model capabilities advance rapidly. The call for alignment before the construction of superintelligence, rather than before its deployment, reflects a push for greater caution and foresight in the pursuit of advanced AI.