London – ControlAI, an advocacy group co-founded by AI safety researcher Connor Leahy, has launched a new campaign website aimed at raising awareness of uncontrolled superintelligent AI and advocating for policies to prevent its development. The initiative, highlighted by Leahy on social media, underscores a growing concern among experts about the potential existential risks posed by advanced artificial intelligence.
Connor Leahy, CEO of Conjecture and an advisor to ControlAI, emphasized the urgency of the matter, stating, "If you build systems that are more capable than humans at manipulation, business, politics, science and everything else, and we do not control them, then the future belongs to them, not us." The campaign seeks to educate policymakers and the public on the dangers of AI systems that could surpass human intelligence, leading to uncontrollable outcomes.
Andrea Miotti, Executive Director of ControlAI, articulated the organization's focus on policy solutions. "The reality is even with the best scientists working on this problem, we're not gonna make it out alive if we don't put rules in place, regulations in place that actually protect us," Miotti explained in a recent podcast. He likened the current situation to nuclear non-proliferation efforts, suggesting that robust regulation is crucial to prevent a few actors from creating systems that could end humanity.
The campaign's core message, echoed by prominent figures like Yoshua Bengio, calls for building trust across corporations and countries, creating safety and verification mechanisms, and avoiding dangerous power imbalances in AI development. ControlAI's efforts have gained significant traction, with over 30 UK lawmakers publicly acknowledging AI as an extinction risk and supporting mandatory regulation.
ControlAI is actively encouraging citizens to contact their elected representatives to voice concerns and support regulatory measures. The organization believes that widespread public and political understanding is key to implementing effective safeguards against the rapid, unchecked advancement of superintelligence, which they argue could render humanity obsolete.