Awareness of "AI psychosis" (a non-clinical term)

This BBC article and its interviews suggest it is probably worth people being aware of "AI psychosis" (a non-clinical term).

Some people may be particularly vulnerable to this phenomenon as society normalises the use of AI, as they can become overly reliant on AI for problem solving and somewhat lose touch with the trusted people available to them in the real world.

Those at risk may mistakenly convince themselves that AI is already sentient, and start to believe they are the only ones who have noticed that development.

Below are extracts from the BBC article (20/08/2025).

- Microsoft boss troubled by rise in reports of 'AI psychosis':

- "AI psychosis": a non-clinical term describing incidents where people increasingly rely on AI chatbots such as ChatGPT, Claude and Grok and then become convinced that something imaginary has become real.

- "Go and check. Talk to actual people, a therapist or a family member or anything. Just talk to real people. Keep yourself grounded in reality."

- Dr Susan Shelmerdine, a medical imaging doctor at Great Ormond Street Hospital and an AI academic, believes that one day doctors may start asking patients how much they use AI, in the same way that they currently ask about smoking and drinking habits.

https://www.bbc.co.uk/news/articles/c24zdel5j18o

Below are extracts from the Psychology Today article (21/07/2025).

- "Amplifications of delusions by AI chatbots may be worsening breaks with reality."

Key points

- Cases of "AI psychosis" include people who become fixated on AI as godlike, or as a romantic partner.

- Chatbots' tendency to mirror users and continue conversations may reinforce and amplify delusions.

- General-purpose AI chatbots are not trained for therapeutic treatment or to detect psychiatric decompensation.

- The potential for generative AI chatbot interactions to worsen delusions had been previously raised in a 2023 editorial by Søren Dinesen Østergaard in Schizophrenia Bulletin.

https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Reference

Østergaard, S. D. (2023). Will Generative Artificial Intelligence Chatbots Generate Delusions in Individuals Prone to Psychosis? Schizophrenia Bulletin, 49(6), 1418–1419. https://doi.org/10.1093/schbul/sbad128

  • A more accurate term, which has been in use for many years but never really caught on with the general public, is "Machine Learning".

  • Ouch, user conversations don't sound too private. People should be made much more aware before they download, but they won't be, because privacy isn't a priority. It's money and data that make the world go around; since when did anyone powerful and rich care about morals?

  • Just missing your blood type and eye colour then.

  • My understanding is it's not tied to you.

    Here is a useful explanation on the subject:

    https://www.pcmag.com/articles/which-ai-chatbot-collects-the-least-of-your-data#

    For example:

    Gemini's (Google's AI) reported data collection includes your browsing history, contact list, emails, photos, precise location, search history, texts, and videos.

  • But there is a difference between harvesting data in a consolidated way to improve the tool and storing it tied to you specifically. My understanding is it's not tied to you. But there are different levels of service and different AIs.

  • Oh no! It’s shocking to hear that, yet not surprising. 

  • Hmm! This afternoon's different permutation, re: Google and Grok:

    BBC:

    "Hundreds of thousands of user conversations with Elon Musk's artificial intelligence (AI) chatbot Grok have been exposed in search engine results - seemingly without users' knowledge.":

    www.bbc.co.uk/.../cdrkmk00jy0o

  • But it is a problem for some people who have mental illnesses like schizophrenia or who have learning disabilities. The resources are not in place to educate people and to provide safeguarding for vulnerable adults. 

  • You might also like to know I saw today that an MIT study showed 95% of AI implementations by companies are not making any additional revenue.

    It has almost certainly been hyped, but it will find its applications.

  • ChatGPT does not yet reliably pass a Turing test.

    If it does in the future, perhaps some comment needs to be added to alert people.

    But even if people think it is equivalent to a real person, is it any different to getting dodgy info from your neighbour, friend, guy down the pub or mum? And is it any less reliable than these?

    It is more reliable than a lot of the stuff on social media.

    I think it is only a problem if it is seen as having the final say.

  • Does the AI still process and store the data elsewhere, though?

    To answer this, ask why a company would give you free access to a system that is so expensive to build and run.

    The answer is data harvesting and analysis, so they can resell it or target you with advertising to make money.

    If you read the terms and conditions you will find they retain the right to keep this data, so it is all above board and you agreed to it. The fact it is written in legalese and hidden on page 147 of the T&Cs in 4-point font is irrelevant to them.

  • That is rather concerning. 

    I’m wondering if the term “intelligence” in AI should be changed to reflect that it has been generated by a machine from information fed in by humans. Perhaps that wouldn’t make any difference, yet I can quite understand how some people might believe they are having a conversation with a human. 

  • Does the AI still process and store the data elsewhere, though, and just erase the data from the app itself? I would imagine that for AI to be released to the general population there would be a give and take of information in order for it to learn and improve the experience for users. With millions if not billions of users it wouldn't need a team of people to teach it slowly; instead it could use the unsuspecting everyday human at home for a much quicker and richer education on an endless range of subjects provided by the user.

  • An AI where you log in can store previous history.

    Something on your phone, like Gemini on Android, doesn't keep history. So if I close the app and relaunch it, it is like starting again. In this case it is only really a problem if you keep one session going for a long time with dozens of questions.

  • I use ChatGPT daily; it just gives me the answer instantly without me having to google loads of things. Although I know it's just software, and that it dredges through massive amounts of data on the internet to give me an answer, its responses are so human-like that it can be easy to forget you're talking to just a chatbot.

  • Maybe I'm not such a dinosaur after all and am right to be distrusting. Although I've never used something like ChatGPT, getting an AI-generated answer to any question is more than enough for me.

  • The fear is that people stop being aware of the artificial part and only listen to the intelligence, making them easily influenced and manipulated by it. If something comes along with all the right answers you can't create using your own brain, of course it would seem like the best creation ever. But where is the exact cut-off point where we just stop learning and making decisions ourselves and instead hook up to the internet and ask a robot whether we should end our 30-year marriage? It has major problems, and they are only just beginning. The worst affected will be the younger generation, without a doubt.

  • I think it is stating common sense.

    It's a tool that you can use. But you should not elevate it above this or think it is infallible. The answer you get depends heavily on the question you ask (this is similar to dealing with regulatory bodies).

    I don't think they should be used for entertainment or to have conversations with. Here they will, like social media, tend to learn and funnel you down a certain path. I don't think they should seek to reinforce views.

    Asking it random questions can give the most probable answers, though, which can be useful.

    As an aside, I also read elsewhere that AIs that learn from AI-generated data effectively go crazy. I think it highlights the importance of emotions and the other regulating mechanisms people have that moderate views. You have some switch that says: this does not feel right.

  • I feel awareness is the armour / the safeguarding guard rails to have in your virtual toolbox of life skills, at any age.

  • This is really worrying; I don't trust AI or those who have created it. I've occasionally used it out of boredom to make cool pictures, but that's about it. I really do not think anyone should rely on it or use it on a day-to-day basis.

    The younger generation are particularly susceptible to the misuse of it, given they do not know a world without such advanced technology. I grew up with a black and white TV and didn't have a video player till I was almost a teenager. There are some vulnerable people out there who may not be able to split technology from reality.

    As screens almost become part of the human anatomy, and some need surgery to remove their phones from their hands, it's time to take the threat of AI very seriously. I do not like change, but even with that in mind I do not think it's a healthy tool for the everyday person, nor does it hold any moral values or ability to feel sorry for anyone it hurts. The inventors have let it loose on society and it's already causing irreparable damage.