Washington, D.C. – Sami Kassab, a prominent figure in the crypto-AI space, is currently in Washington, D.C., advocating for an immediate and comprehensive pause on all artificial intelligence research and development. Kassab announced his stance on social media, stating, "Currently in DC advocating for an immediate pause to all AI research and development." This call for an outright cessation of AI R&D marks a significant escalation from previous appeals for more limited moratoriums.
Kassab's advocacy comes amidst ongoing debates about the rapid advancement and potential risks of AI technologies. His call for an immediate and total pause differs in scope from the "Pause Giant AI Experiments: An Open Letter" published by the Future of Life Institute in March 2023. That letter, signed by notable figures including Elon Musk and Steve Wozniak, sought a six-month moratorium on the training of AI systems more powerful than GPT-4, citing concerns over propaganda, job displacement, and societal control. Despite widespread attention, that proposed pause was not implemented, and AI development has continued apace.
Sami Kassab brings a unique perspective to the AI safety discussion, rooted in his extensive background at the intersection of cryptocurrency and artificial intelligence. He serves as the Managing Partner at Unsupervised Capital, a liquid token fund exclusively focused on the Bittensor ecosystem, a decentralized network for AI. Kassab is also recognized for pioneering the "DePIN" (Decentralized Physical Infrastructure Networks) concept during his tenure at Messari and has held key roles at OSS Capital and Crucible Labs, specializing in crypto-AI investments and research.
His deep involvement in advanced AI and decentralized technologies gives his current advocacy a distinctive context. While the tech industry broadly acknowledges the need for responsible AI development and regulation, Kassab's demand for a complete halt underscores a growing concern among some experts about the trajectory and potential unforeseen consequences of unchecked AI progress. The move highlights the increasing pressure on policymakers to address the complex challenges posed by rapidly evolving AI capabilities.