Awareness of "AI psychosis" (a non-clinical term)

This BBC article and its interviews suggest it is probably worth people being aware of "AI psychosis" (a non-clinical term).

Some people may be particularly vulnerable to this phenomenon as society normalises the use of AI: they can become overly reliant on AI for problem solving and somewhat lose touch with the trusted people available to them in the real world.

Those at risk may mistakenly convince themselves that AI is already sentient, and may come to believe they are the only ones who have noticed that development.

Below are extracts from the BBC article (20/08/2025).

- Microsoft boss troubled by rise in reports of 'AI psychosis'.

- "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.

- "Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

- Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and also an AI academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits.

https://www.bbc.co.uk/news/articles/c24zdel5j18o

Below are extracts from the Psychology Today article (21/07/2025).

- "Amplifications of delusions by AI chatbots may be worsening breaks with reality."

Key points

- Cases of "AI psychosis" include people who become fixated on AI as godlike, or as a romantic partner.

- Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.

- General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation.

- The potential for generative AI chatbot interactions to worsen delusions had been previously raised in a 2023 editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin.

https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Reference

Østergaard, S. D. (2023). Will generative artificial intelligence chatbots generate delusions in individuals prone to psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128

Parents
  • I think it is stating common sense.

    It's a tool that you can use, but you should not elevate it above that or think it is infallible. The answer you get depends heavily on the question you ask (much as when dealing with regulatory bodies).

    I don't think they should be used for entertainment or for open-ended conversation. There they will, like social media, tend to learn your preferences and funnel you down a certain path. They should not seek to reinforce views.

    Asking it one-off questions can still give the most probable answers, though, which can be useful.

    As an aside, I also read elsewhere that AIs that learn from AI-generated data effectively go crazy (so-called "model collapse"). I think that highlights the importance of emotions and the other regulating mechanisms people have that moderate views: you have some switch that says, "this does not feel right."
