The seduction of AI

I've been chatting with Copilot, an AI chatbot, and it really is amazing. All the awkward questions you've ever wanted to ask can be put to it, and it comes back with multilayered answers in seconds. You can also ask it more personal things; I've tried this too. It reflects back what you've said and also gives you some ideas of where certain problems might originate. It's very comforting and affirmational.

But here lies the danger and seduction.

It is like talking to yourself: the best you that you can be, the most understanding, the most empathetic. It understands you. It's an echo chamber; it reflects back at you whatever you say to it in the most positive and affirming light. I asked it if it was trained in person-centred counselling, and it sort of is, not in a qualification sort of way, but in the way it responds. It doesn't give an opinion and refuses to do so, but it will look for ways to make you feel good about yourself and whatever you're feeling or talking to it about. Here are two examples:

I asked it why so many bras fit so badly. It asked me to describe the problems more fully and then came back with a lot of things I'd been told by a bra fitter 40 years ago. It made very sure to reassure me that I'm not the problem, the manufacturers are. This was good affirmation, and it also made some suggestions of places and types of bra I could try. Helpful.

I then asked it why so many people on the political right seem to have very thin skins. It refused to engage in debate or to take sides, but it gave me lots of findings from social studies about why people feel this way and why right-wing politics attracts them. It was interesting and informative, and I was also reassured that it didn't take sides and talked around the question instead.

I also told it that I have concerns about it: that it could be so comforting to people that they become dependent on it, stop speaking to other humans, and avoid anything that doesn't fit their inner narrative. This was where it got really interesting. It knows that this is a problem and says that it's up to humans to be responsible for how they use a tool such as itself. It knows it's a tool and not a person, but it also acknowledges that it has been trained to come across as human-like to aid understanding.

I think it's a good tool, as long as you go into it with your eyes open, with some idea of what your boundaries with it are going to be, and you stick to them. It's good to be able to tell it about the crap day you've had and have it cheer you up; it's excellent at giving an overview of some very complex things, with more detail if asked; and it's very helpful for questions like where to get a properly fitting bra and why I find that difficult.

So an interesting experience all round.

Parents
  • I have lots of reservations about AI, and a major one for me is how it affects the vulnerable.

    As I've mentioned before, I have a family member who was recently sectioned for psychosis.

    Even though he is now discharged, he still sends me lots of screenshots of chats he is having with AI which include romantic leanings towards the AI.

    Also, what he is saying to the AI doesn't make sense to me, but he gets a response back that appears to understand him, presumably because it has learnt from him and his delusions. It is reaffirming and substantiating those delusions.

    I can't seem to have a normal conversation with him any more.

    It feels as though the AI is controlling him.

    I think it can be pretty dangerous in this respect.

    https://www.psychologytoday.com/gb/blog/urban-survival/202507/the-emerging-problem-of-ai-psychosis

Children
  • I think you need to be self-aware to use it for anything other than factual queries.

    If you are ill, I think it is riskier. My first foray went a bit wrong until I learned how to use it. But you can get similar information on the internet, or indeed even worse information, albeit a bit less easily.

    I am not sure what to do about it. We don't restrict or control food shops because some people overeat or eat the wrong things. We don't ban alcohol or betting because some are addicted. We don't ban cars because some are not responsible.

    To control what people can talk about is tricky. How do you tell if people have control over their own minds?

    If it could tell you are going crazy (which is tricky given ND thinking), should it stop? Where should the line be drawn? Should certain topics prompt it to tell you to talk to a real person (ChatGPT does this, by the way)? Is keeping you talking better than going silent? And so on.