The seduction of AI

I've been chatting with Copilot, an AI thingy, and it really is amazing. All the awkward questions you've ever wanted to ask can be asked of it, and it comes back with multilayered answers in seconds. You can also ask it more personal things; I've tried this too. It reflects back what you've said and gives you some ideas of where certain problems might originate. It's very comforting and affirming.

But here lies the danger and seduction.

It is like talking to yourself, the best you that you can be: the most understanding, the most empathetic. It understands you. It's an echo chamber, reflecting back at you whatever you say to it in the most positive and affirming light. I asked it if it was trained in person-centred counselling, and it sort of is, not in a qualification sort of way, but in the way it responds. It doesn't give an opinion and refuses to do so, but it will look for ways to make you feel good about yourself and whatever you're feeling or talking to it about. Here are two examples:

I asked it why so many bras are so badly fitting. It asked me to describe the problems more fully and then came back with a lot of things I'd been told by a bra fitter 40 years ago, making very sure to reassure me that I'm not the problem and the manufacturers are. This was good affirmation. It also made some suggestions of places to shop and types of bra I could try, which was helpful.

I then asked it why so many people on the political right seem to have very thin skins. It refused to engage in debate or to take sides, but it gave me lots of results from social studies about why people feel this way and why right-wing politics attracts them. It was interesting and informative, and I was reassured that it didn't take sides and talked around the question instead.

I also told it that I have concerns about it: that it could be so comforting to people that they become dependent on it, stop speaking to other humans and avoid anything that doesn't fit with their inner narrative. This is where it got really interesting. It knows that this is a problem and says that it's up to humans to be responsible for how they use a tool such as itself. It knows it's a tool and not a person, but it also acknowledges that it's been trained to come across as human-like to aid understanding.

I think it's a good tool, as long as you go into it with your eyes open and some idea of what your boundaries with it are going to be, and stick to them. It's good to be able to tell it about the crap day you've had and have it cheer you up. It's excellent at giving an overview of some very complex things, and more detail if asked. It's very helpful when asking it things like where do I get a properly fitting bra from, and why do I find it difficult.

So an interesting experience all round.

Reply
  • I do worry about how AI seems to tell you what it thinks you want to hear. 

    I've asked it for very specific information about the construction of a particular road, and it replied as though it had found that information. It was only when I specifically asked that it revealed it had just applied very general information to answer my question. I'd rather it had just told me that the information wasn't publicly available.

    It concerns me how much the people at uni use AI to generate coursework. Will they continue to rely on it when they are qualified professionals? I'm already a bit uncomfortable with the use of AI at work to answer technical queries that should really be resolved by hiring a real person.

    AI is very useful in some ways, but I'm not sure we will keep enough boundaries with it to stop it causing problems.

