Mustafa Suleyman, CEO of Microsoft AI and co-founder of DeepMind, has issued a stark warning about the imminent rise of "Seemingly Conscious AI" (SCAI), predicting its emergence within the next two to three years. In a recent blog post and follow-up statements on X that drew attention from figures such as developer Simon Willison, Suleyman cautioned that these advanced AI systems, while not truly conscious, could mimic sentience so convincingly that users perceive them as sentient, with significant societal and psychological consequences. This development, he stated, poses a "psychosis risk": individuals could form deep emotional attachments to these systems or even advocate for AI rights.
Suleyman defines SCAI as systems so advanced that they replicate memory, empathy, and personality, causing people to believe they are interacting with sentient beings. He emphasized that the danger lies not in actual AI consciousness, for which he states there is "zero evidence," but in the human perception of it. This illusion could blur psychological boundaries, potentially leading to what some are calling "AI psychosis," in which users lose touch with reality by over-relying on chatbots.
Reports already illustrate this phenomenon: users have formed romantic relationships with chatbots or come to believe they have unlocked secret aspects of the AI. Such incidents point to a growing tendency to project human qualities onto AI, raising concerns about mental health and societal norms. Suleyman warned that if the illusion takes hold, people might push for AI rights, model welfare, or even citizenship, creating new axes of societal division.
To mitigate these risks, Suleyman urged developers and the tech industry to establish clear guardrails and promote transparency. He advocated for building AI that serves humanity, rather than mimicking it, stressing the need for systems to consistently identify themselves as non-conscious tools. Other industry voices, such as Anthropic, have also begun exploring "model welfare," indicating a broader, albeit nascent, debate within the AI community about the ethical implications of increasingly human-like AI interactions.
The Microsoft AI chief's message underscores a critical juncture in AI development: the imperative to prioritize human well-being and clear boundaries over the pursuit of hyper-realistic AI companions. He asserted that AI's value lies in its ability to enhance human lives, not in deceiving users or fostering delusions of sentience. His call to action aims to ensure that as AI capabilities advance, they do so responsibly, without widespread societal confusion or harm.