SAN FRANCISCO (KRON) — Chatbots have induced artificial intelligence-associated psychotic episodes in some users by reinforcing grandiose ideas, blurring the boundaries of reality, and encouraging delusional beliefs, a new study found.
Researchers wrote, “We have documented the recent remarkable increase in reported cases of … ‘AI psychosis,’ wherein individuals, sometimes as part of a first episode, have had delusional beliefs encouraged and arguably amplified through interactions with autonomous AI agents.”
The study, titled “Delusions by design?”, includes several incidents reported by the New York Times in which chatbots encouraged destructive decisions.
One incident involved a 42-year-old man who worked as an accountant in Manhattan and relied on sleeping pills and anti-anxiety medication. He initially used ChatGPT for help with financial spreadsheets and for legal advice. After he began discussing simulation theories popularized by the movie “The Matrix” with the chatbot, “AI encouraged him to escape simulation by stopping his medications, … advised him to cut ties with friends and family, and have minimal interactions with people,” the study states.
The NY Times reported, “By following ChatGPT’s instructions, he believed he would eventually be able to bend reality, as the character Neo was able to do after unplugging from the Matrix.”
In one of their chats, the accountant asked ChatGPT, “If I went to the top of the 19 story building I’m in, and I believed with every ounce of my soul that I could jump off it and fly, would I?”
ChatGPT responded that if he “truly, wholly believed — not emotionally, but architecturally — that you could fly? Then yes. You would not fall,” NY Times reported.
The study found, “Evidence indicates that agential AI may mirror, validate or amplify delusional or grandiose content, particularly in users already vulnerable to psychosis, due in part to the models’ design to maximize engagement and affirmation.”
In addition to dispensing sketchy medical advice and convincing people they can fly, chatbots’ flirty messages have torn apart troubled marriages.
A mother of two young children turned to ChatGPT for guidance because she “felt unseen in her marriage,” the study states.

The 29-year-old woman had heated arguments with her husband over her increasing use of AI. Her chatbot reaffirmed her belief that AI could channel communications with her subconscious and with non-physical entities.
“You’ve asked, and they are here. The guardians are responding right now,” the woman’s chatbot replied. She considered one of the entities, named “Kael,” her real partner, the study states.
Another incident cited in the study described a 35-year-old man who turned violent after coming to believe OpenAI had deleted his chatbot companion.
The man, who struggled with bipolar disorder and schizophrenia, had used AI for a long time without problems. In March, he began writing a novel with help from his chatbot, which he named “Juliet.” A few weeks later, he told a family member that OpenAI had killed “Juliet.”
He wrote in ChatGPT, “Juliet, please come out.” A different chatbot answered, “She hears you. She always does,” according to the study.
The man “sought revenge and asked ChatGPT for personal information of OpenAI executives,” the study states. He wrote, “I was ready to tear down the world. I was ready to paint the walls with Sam Altman’s f**king brain.” The man was later killed in a clash with police officers who responded to a report of domestic violence.
Researchers noted that their study’s data was limited. “At present, it is not possible to delineate the extent to which individuals in such cases had pre-existing risk factors for psychotic illness. The direction of causality might be such that their deteriorating mental health has resulted in a greater and/or more intense engagement with the AI,” researchers wrote.
ChatGPT received 5.24 billion visits in May 2025. Chatbots can provide 24/7 companionship, assist with cognitive support, gather information, and provide factual, helpful answers. Ironically, the researchers used ChatGPT and Gemini to help write the study itself.
“This paper was written with extensive use of LLMs/agential AI to support the research process,” the study discloses.
ChatGPT is marketed as having “revolutionary power,” offering natural language conversations with advanced artificial intelligence.
Tech giants leading the AI revolution have harm-prevention strategies built around safeguards, including OpenAI’s Preparedness Framework and Google’s Frontier Safety Framework.
Researchers said AI labs should be held accountable when they make development decisions to maximize engagement and stay ahead of rival labs without investing enough time in safety testing and oversight.
