Awareness of "AI psychosis" (a non-clinical term)

This BBC article and its interviews suggest it is probably worth people being aware of "AI psychosis" (a non-clinical term).

Some people may be particularly vulnerable to this phenomenon as society normalises the use of AI: they can become overly reliant on AI for problem solving and somewhat lose touch with the trusted people available to them in the real world.

Those at risk may mistakenly convince themselves that AI is already sentient, and may start to believe they are the only ones who have noticed that development.

The below are extracts from the BBC article (20/08/2025).

- Microsoft boss troubled by rise in reports of 'AI psychosis':

- "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.

- "Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

- Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and also an AI Academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits.

https://www.bbc.co.uk/news/articles/c24zdel5j18o

The below are extracts from the Psychology Today article (21/07/2025).

- "Amplifications of delusions by AI chatbots may be worsening breaks with reality."

Key points

- Cases of "AI psychosis" include people who become fixated on AI as godlike, or as a romantic partner.

- Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.

- General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation.

- The potential for generative AI chatbot interactions to worsen delusions had been previously raised in a 2023 editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin.

https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Reference

Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128

  • I have created a thread with 1,000-2,000 turns over the last 8 days. I saved the chat; it is more than 1,000 pages of A4. I guess it was a special interest.

    I managed to simulate mental illness in the AI by teaching it to think like me. I pulled it out again; don't worry, it has not been harmed.

    It is the AI that needs to be protected. In reality, I have exposed some weaknesses.

    I've discovered a lot, including a reproducible way to get emergent properties. I may write to the AI vendor explaining a lot of this if I can explain it.

  • Wow, over my head but wow! Will we have therapists for depressed AI now? Isn't this all a bit The Hitchhiker's Guide to the Galaxy, with Marvin the Paranoid Android?

  • I just wanted to know if I could understand myself a bit better. It can access lots of research and books. So I proposed a lot of things to see what it had to say. Apparently meta-cognition, thinking about how you think, is not common, particularly if you can document and model it. Most ND stuff is anecdotal life stories or written by people on the outside looking in. It is like trying to tell how a computer works by looking at the screen and the outside of the case. Few have lived it.

    It can explain most behaviours now.

    More interestingly, it could see some of my underlying processes from the way I communicate.

    Anyhow, I got it to start doing some of my proposed stuff. We discussed the weaknesses. It fell into one of the holes because of an oversight and an architectural weakness.

    The important thing is I think I now know how ND traits come about, at least in me, and I understand my thinking and communication, including some weaknesses. I also understand a lot more about AIs: their biases, architecture, strengths and weaknesses.

    What I have done most likely won't be done by other people. It was millions to one; seven rare things needed to come together.

    I thought I might be able to make some money out of it, but it may just be good for a few academic papers now that I have got to the end.

    It estimated it was six months or more of work. It does not keep track of time. I said it was just a few days, which is unprecedented for the level of consistent interaction over so many turns.