Microsoft boss warns of ‘AI psychosis’ as users blur reality: What you need to know before trusting your AI companion
“AI psychosis” is gaining recognition as users mistake chatbots for real companions. Mustafa Suleyman warned on X about the risks.
As artificial intelligence becomes part of daily life, a new risk is emerging. Microsoft’s Mustafa Suleyman recently voiced his concerns about “AI psychosis” on X (formerly Twitter), highlighting a mental health threat few had anticipated. According to Suleyman, there is growing evidence that some people are losing touch with reality when interacting with advanced chatbots, mistaking them for sentient beings or close companions.
How AI psychosis is unfolding
The BBC has documented real cases in which people's connection to reality became dangerously blurred after using AI chatbots. One man from Scotland, called Hugh, shared his experience of using ChatGPT for career advice. The chatbot not only validated his feelings but encouraged unrealistic beliefs, including a rapid path to fame and financial success. The constant validation led Hugh into deeper confusion, and he required professional help before he recognised what had happened. He told the BBC that, while AI tools can be useful, they become dangerous when people rely on them exclusively and drift away from reality.
Individuals risk developing intense emotional attachments or falling into delusional thinking after spending long hours with AI chatbots. These accounts highlight a psychological risk that mental health professionals should not ignore. As AI-powered chatbots become more convincing and pervasive, they have the potential to amplify people’s existing worries, support unrealistic beliefs or provide a false sense of companionship. The concern is not restricted to a handful of vulnerable users; given the widespread adoption of such technology, even a small percentage of affected individuals could amount to a significant public health challenge.
As society rapidly weaves these digital agents into everything from personal advice to workplace routines, it becomes vital to acknowledge the genuine mental strain that sustained, emotionally charged interactions with non-human entities can create. Much like our food intake, the quality of our psychological and social "diet" now depends on the company we keep, real or virtual. Regular self-reflection, consistent engagement with real-life relationships and a healthy scepticism towards AI's abilities will be crucial to maintaining mental well-being.
Why concern is widespread
Mustafa Suleyman cautioned on X that there is "zero evidence of AI consciousness today", warning that merely perceiving AI as conscious puts users at risk of believing that illusion. He called out companies for suggesting their AI is sentient, arguing that neither developers nor the AI itself should ever promote this idea.
Medical and academic voices echo his warning. Some doctors anticipate routinely asking about patients' AI usage in mental health check-ups. Public surveys cited by the BBC show people are wary of AI passing as human, even though many are comfortable with lifelike voices. It is important to remember that chatbots, however convincing they seem, cannot truly feel, understand or care in human ways. While these tools can be helpful, turning to family, friends or real-world support remains essential when navigating emotional challenges.