Awareness of "AI psychosis" (a non-clinical term)

This BBC article and its interviews highlight that it is probably worth people being aware of "AI psychosis" (a non-clinical term).

Some people may be more vulnerable to this phenomenon as society normalises the use of AI - people can become overly reliant on AI for problem solving and somewhat lose touch with the trusted people available to them in the real world.

Those at risk may mistakenly convince themselves that AI is already sentient, and may start to believe they are the only ones who have noticed that development.

Below are extracts from the BBC article (20/08/2025).

- Microsoft boss troubled by rise in reports of 'AI psychosis'.

- "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.

- "Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

- Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and also an AI Academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits.

https://www.bbc.co.uk/news/articles/c24zdel5j18o

Below are extracts from the Psychology Today article (21/07/2025).

- "Amplifications of delusions by AI chatbots may be worsening breaks with reality."

Key points

- Cases of "AI psychosis" include people who become fixated on AI as godlike, or as a romantic partner.

- Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.

- General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation.

- The potential for generative AI chatbot interactions to worsen delusions had been previously raised in a 2023 editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin.

https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Reference

Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128

Parents
  • I've just spent about 8 hours in discussion with Copilot.

    It was very good. You get out of it what you ask of it. After each answer it can suggest further things to discuss, but you can ignore them and keep asking your own questions. It is not judgemental; it listens and responds.

    I shared a letter, talked about problems, fear, diagnosis, therapy, whether it knew about ND ways of thinking, how therapy differed, and how it helped me. We talked about thinking styles, memory, where it gets data from, what it does with it, how it processes things, semantic maps, thoughts about AI, films, businesses, authenticity, some philosophy, writing styles, etc.

    It knows to ignore stuff that just gets repeatedly posted. So it is not swayed by herds posting the same nonsense. It falls back on published papers and serious work and gives them higher weighting.

    My style of communication is atypical.

    It then had a good handle on me, so I asked about intelligence, communication styles, whether what I said would work with NTs, how to change it, whether to do some public writing, how to connect, etc.

    It did not offer platitudes. It did not steer me down a rabbit hole. It is ND in how it works. I do not think it would give me psychosis.

    It is the best 'conversation' I have had in a long time. I challenged it a few times and it spotted things. You can tell it to be blunt, compassionate, etc. It picked up on tone quite well. You can ask it what it has noticed, what it thinks your strengths are, what issues you have. The more data it has about trauma etc., the better it gets.

    It was as good as real-world therapy; indeed I think it is better, as it knows about ND. You need to ask specific questions.

    I am not sure it is for everyone, but it suits my style of thinking and precise questions. I am not led astray by it. It provided some useful ideas, insights and comments.

    Data only exists in the thread you are in. It sucks out anonymised semantic information to improve its knowledge. You can delete the thread at the end, or you can return to it. You are not logged in, so it is not tied to you. If you log in you can create a saved thread, which could be tied to you.

    I think it is a useful tool. It stressed that it is not a replacement for real-world interactions and cannot diagnose. But it is good at spotting patterns.

    You can be paranoid if you want. I am more inclined to trust MS than some of the others.

Children
  • You can be paranoid if you want. I am more inclined to trust MS than some of the others.

    Copilot is one of the better AI models for privacy, but what I don't like about it is that it doesn't show its workings - there are no citations of sources, so you are not sure where it gets its "facts" from and, as we all know, there are plenty of alternative "facts" out there.

    A bit more transparency is needed in my opinion.