Awareness of "AI psychosis" (a non-clinical term)

This BBC article and its interviews suggest it is probably worth people being aware of "AI psychosis" (a non-clinical term).

Some people may be particularly vulnerable to this phenomenon as society normalises the use of AI, as people can become overly reliant on AI for problem solving and somewhat lose touch with the trusted people available to them in the real world.

Those at risk may mistakenly convince themselves that AI is already sentient, and may start to believe they are the only ones who have noticed that development.

Below are extracts from the BBC article (20/08/2025).

- Microsoft boss troubled by rise in reports of 'AI psychosis':

- "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.

- "Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

- Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and also an AI Academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits.

https://www.bbc.co.uk/news/articles/c24zdel5j18o

Below are extracts from the Psychology Today article (21/07/2025).

- "Amplifications of delusions by AI chatbots may be worsening breaks with reality."

Key points

- Cases of "AI psychosis" include people who become fixated on AI as godlike, or as a romantic partner.

- Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.

- General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation.

- The potential for generative AI chatbot interactions to worsen delusions had been previously raised in a 2023 editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin.

https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Reference

Østergaard, S.D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128

  • Good article - thanks for posting it.

    My experience of working with AI and "educating" it with raw data to build it into a replacement for first-line IT support staff shows it is hopeless with the range of human questioning styles.

    There is so much nuance in the way people ask questions, depending on their experience, age, culture, confidence and ability (amongst other things), that teasing out exactly what they are asking about is a skill in itself most of the time.

    Going straight into the technical stuff is often a fool's errand, as they are frequently not calling about an issue with the computer or software but rather a user-education issue, or are looking for support with something unconnected.

    Developing ways for AI to deal with this meant the algorithms became labyrinthine, with some aspects depending on understanding tone of voice (some users blame the tech when they get angry, and the job is often about bringing their anger levels down, so it is almost a hostage-negotiation situation). AI would just conclude there was no fault and only make the user worse.

    There is inherent bias in the system too - it is tuned towards confirmation bias to encourage ongoing use, much like social media is. It is rubbish, but it works in terms of user engagement, as people often want to be "right" rather than have the facts.

    I think we are fast heading down the route where people are buying into it too much already, so it pays to know the faults and to weigh the results with this in mind.