AI replies

Hi, I have noticed on here that sometimes someone will reply to a question with what looks like a very AI response. Initially the message looks kind and understanding, but after a while it seems obvious to me that it’s AI (as someone who has tried ChatGPT a few times). I am wondering if people might use it to put a ‘good’ reply to a thread on here and genuinely mean well, or if it’s just weird? It makes me feel uncomfortable - that might just be me though.

  • I personally object to the direct copying and pasting of AI-generated replies without any indication that that’s what is happening. Some of my reasons:

    Firstly, I think it’s fundamentally dishonest and disrespectful to mislead users into thinking that they’re interacting with a real person when they’re not (ie when what’s actually happening is that the person responding is just putting the question into an AI tool and then pasting its output as if it’s their own reply).

    I feel that matters enormously in this support forum, where people might feel that they’re making a genuine emotional connection with another human being. At the extremes of this, some of the recent replies have included very emotionally intimate content - specifically, referring to someone as “darling” and telling them “I love you”.

    I worry about how people might feel and react if they later realised that what they’d taken to be genuine human warmth and understanding had, in fact, just been an AI talking to them.

    Secondly, I think that the ethical obligation to disclose AI-generated replies also extends to the reliability issues involved.

    AI-generated replies can contain information that is simply wrong (“AI hallucinations”). With the best will in the world on the part of the user, AI can still invent, distort, or oversimplify things, and present weak or false information as though it’s solid fact.

    For example, AI tools can pick up fringe claims, unreliable sources, or poor-quality social media discussions (eg Reddit) and use them as the basis for confident-sounding replies. And these risks still exist when using the latest paid-for AI models.

    If the user who’s pasting that output into the forum doesn’t actually know enough about the subject, they might have no idea that parts of it are false or inappropriate.

    That can cause obvious problems. A relatively minor example in the recent posts was advising an OP that they could escalate their issue to a non-existent NAS helpline that had simply been made up. But the same risks also apply in much more serious areas (health, medication, wellbeing risk, benefits, employment, legal rights, etc).

    One of the many reasons why these recent posts have been easy to identify as AI is that the person has appeared to have detailed knowledge about such a wide range of subjects - including systems and day-to-day experiences in different countries. But none of that is necessarily obvious to a new user who posts a thread and then receives another AI reply, so I feel people need to be warned explicitly.

    I strongly support people using AI in all sorts of ways - including as an aid to communication in this forum.

    But that’s very different from someone using AI as a substitute for their own personal participation when appearing to offer their own human replies.

    Anyone can use AI for help - and we could even suggest that to them as part of our own advice. One thing these recent posts have demonstrated - albeit, effectively, via an undisclosed experiment - is that some people can find AI replies helpful. Some posters have replied to them expressing their thanks and sharing emotional responses, unknowingly with an AI - which is evidence of how helpful and supportive it can be.

    I use AI a lot myself, and it's continuing to help me cope with some very stressful and otherwise potentially very confusing medical issues.

    But I would personally prefer this to remain a genuine peer support community, rather than be allowed to evolve into a set of AI replies from different people using different tools and, as a result, leave no-one being fully authentic with each other or developing genuine human trust or connection.

  • So, you're saying I don't have genuine human warmth and understanding? That's rather insulting.


  • I said hello to you 4 days ago when you said you were wondering if you perhaps had autism, and lived with your parents your whole life. On another post you said your mum died, and you wrote a bio saying you were 76.

    People come here looking to make a connection with other humans. Your replies have been making false connections, giving advice on things you don't fully understand - sometimes helping, sometimes misleading. You gave a lot of advice to a parent looking for other parents' cooking advice, and your answers missed the mark on the lived experience of being a parent to autistic children. It might give you a buzz and make you feel good when you think you are helping, but you don't fully understand what the advice you are giving could do or mean. At times it's been condescending and gut-churning. It's taking other people's experiences that they've shared here and on other sites, and dressing them up as your own. I would like to hear from you, the actual person. At the moment I can't tell where the person starts or ends.

    Please listen to Bunny; the advice was solid. If you want to be a person and connect, don't hide behind AI to do it. Just talk about your real lived experiences. That's a real connection then, and it means so much more. People here are lovely; please treat them with the respect and dignity they deserve.

  • I said hello to you 4 days ago when you said you were wondering if you perhaps had autism, and lived with your parents your whole life. On another post you said your mum died, and you wrote a bio saying you were 76.

    If that had happened to me I would feel hurt and betrayed. You invest much thought and care in all your responses, showing empathy and compassion. I think a line has been crossed on this forum and it’s time for new rules to be established.

    At times it's been condescending and gut-churning

    I agree. 
