Do you use AI?

I don't, but I know lots do and I wonder why. I find the AI summaries that come up when I do searches limited and irritating, and they just take up space on the page.

I can't imagine talking to AI about personal problems, or even something simple like where to get trousers the right length.

I've seen some of our posts put through AI and I'm not sure how I feel about it if I'm honest. If it has to learn then I guess we're better teachers than some, but what does it ultimately do with our conversations?

What does it do with our feelings and emotions? It can't feel or emote, so isn't it rather like a mask talking to a mask?

Parents
  • No. Nope. Nuh uh. No thanks.

    [Trigger warning ahead.]

    I’ve read a few too many articles on how AI has led people to suicide to ever ask anything remotely personal of an AI assistant.

  • I find that curious, because in my experience it stops, tells you to call a crisis line, and gives you numbers to call if you say anything that could be leading towards suicide.

    I would be interested to know what was typed and the results it gave.

  • That’s what they are supposed to do, but if pressed and made into a lover, confidante, etc., they can break down and actually recommend it.

  • They continuously update the models. I think most of the cases were from a year or two ago.

    I have found it is quite sensitive and over-recommends getting external help now. It kept checking I was OK and putting up messages for things I thought were quite innocuous.

    On ChatGPT there are two levels: what the model itself picks up on and embeds in its text, and additional windows (in the phone app) that are triggered based on the content.

    Could you get it to say something inappropriate? Probably, but it is not easy. And if you are pressing it to play a role, then you are also distorting things.

    Can you be seduced into anthropomorphising it and attributing to it more capability than it has? Yes. It is hard to keep it at arm's length.

    I believe they have changed the personality elements of it to change the feel.

Children
No Data