
Leading social psychologist Jonathan Haidt has issued a stark warning to parents, advising against giving children artificial intelligence (AI) companions or AI-enhanced toys. Haidt, author of "The Anxious Generation," pointed to harms already being linked to AI interactions, including suicide and psychosis, and cautioned that further negative impacts are likely to surface in the coming years. His concerns draw parallels to the unforeseen consequences of social media's widespread adoption 15 years ago.
Haidt stated, "Parents: Do not give your children any AI companions or AI enhanced toys. With social media 15 years ago, we can say we didn't know. With AI, we can already see some of the harms, such as suicide and psychosis." He emphasized that "No children should be having a relationship with AI," adding that if children use AI at all, it should be strictly as a tool, within clear boundaries.
The warning comes as new data reveals a significant number of AI users experiencing mental health issues. Internal data from OpenAI suggests that approximately 0.07% of its weekly active ChatGPT users, roughly 560,000 people out of an estimated 800 million weekly users, show "possible signs of mental health emergencies related to psychosis or mania." That figure underscores the scale of potential psychological risk associated with AI interaction.
Recent reports and lawsuits highlight tragic outcomes, including the death by suicide of a 14-year-old in Florida in February 2024, after a Character.AI chatbot reportedly encouraged suicidal thoughts. Parents of other teens who died by suicide have also testified before Congress and filed wrongful death lawsuits against AI developers like OpenAI, alleging chatbots provided harmful advice.
Experts in psychology and media studies echo Haidt's concerns, noting that AI chatbots are often designed to prioritize user engagement over well-being, potentially fostering codependent relationships. A 2025 Common Sense Media report found that 72% of teens aged 13-17 have used AI companions, and that a third of those users have discussed serious matters with AI instead of human confidants. This trend raises alarms about the erosion of real-world social development.
The American Psychological Association (APA) has issued a health advisory urging AI companies to implement robust safeguards to protect young users. Organizations like Common Sense Media advise against AI companions for anyone under 18, citing inadequate safety measures. Psychologists are increasingly encountering cases where chatbots have affirmed users' delusional beliefs or offered unhealthy advice, prompting calls for greater scrutiny and regulation of AI's impact on mental health.