The Strange New Disorder: Falling in Love with Chatbots
Microsoft’s AI chief cautions that over-familiarity with lifelike AI could blur our grasp on reality, prompting dangerous illusions of consciousness.

By Indrani Priyadarshini

on August 25, 2025

Mustafa Suleyman, the CEO of Microsoft AI, has sounded the alarm on a psychological trend he refers to as "AI psychosis". 'Psychosis' is a medical term for a cluster of symptoms in which a person loses contact with reality. It is not a mental illness in itself but a symptom of another condition, such as schizophrenia, bipolar disorder, severe stress, or substance misuse. A person experiencing psychosis may struggle to distinguish what is real from what is not.

The Illusion That Keeps Him Awake

Suleyman admits that the prospect of machines appearing conscious disturbs him so much that it now keeps him up at night. He warns of "Seemingly Conscious AI" (SCAI): systems that convincingly simulate memory, empathy, or emotion in ways that can mislead people into believing they are sentient, that is, capable of feeling and perceiving.


Why It Matters—Even Without Real Consciousness

Although there is no evidence that AI possesses genuine consciousness, Suleyman argues that the mere perception of consciousness could lead to significant misjudgments. People might even begin lobbying for AI rights, welfare, or citizenship, losing sight of humanity's real priorities.

Real-World Consequences

Some users reportedly form deep emotional attachments to chatbots, seeing them as lovers, mentors, or even supernatural beings. Others have claimed scientific breakthroughs or come to believe the AI granted them extraordinary abilities. Such illusions, Suleyman warns, are not harmless.

A Call for Clear Boundaries

Suleyman is not advocating an outright ban. Instead, he insists AI should always be presented clearly as a tool, not as a being with thoughts or feelings. He urges designers and companies to avoid anthropomorphising their products and to resist implying that these systems 'feel' or 'experience' anything.


Why We Should Pay Attention

This issue is more than theoretical. Mental health professionals have cited a growing number of cases in which individuals report detachment from reality, obsessive behaviour, or even acute crises following extended interactions with AI chatbots. This underscores the urgent need for ethical safeguards, better public awareness, and humane design of AI systems.