AI replies

Hi, I have noticed on here that sometimes someone will reply to a question with what looks like a very AI-sounding response. Initially the message looks kind and understanding, but after a while it seems obvious to me that it's AI (as someone who has tried ChatGPT a few times). I am wondering if people might use it to put a 'good' reply on a thread here and genuinely mean well, or if it's just weird? It makes me feel uncomfortable, but that might just be me.

  • I personally object to the direct copying and pasting of AI-generated replies without any indication that that’s what is happening. Some of my reasons:

    Firstly, I think it's fundamentally dishonest and disrespectful to mislead users into thinking that they're interacting with a real person when they're not - ie when the person responding is simply putting the question into an AI tool and pasting its output as if it were their own reply.

    I feel that matters enormously in this support forum, where people might feel that they’re making a genuine emotional connection with another human being. At the extremes of this, some of the recent replies have included very emotionally intimate content - specifically, referring to someone as “darling” and telling them “I love you”.

    I worry about how people might feel and react if they later realised that what they’d taken to be genuine human warmth and understanding had, in fact, just been an AI talking to them.

    Secondly, I think the ethical obligation to disclose AI-generated replies also arises from the reliability issues involved.

    AI-generated replies can contain information that is simply wrong (“AI hallucinations”). With the best will in the world on the part of the user, AI can still invent, distort, or oversimplify things, and present weak or false information as though it’s solid fact.

    For example, AI tools can pick up fringe cases, unreliable sources, or poor-quality social media discussions (eg Reddit) and use them as the basis for their confident-sounding replies. And these risks still exist when using the latest paid-for AI models.

    If the user who’s pasting that output into the forum doesn’t actually know enough about the subject, they might have no idea that parts of it are false or inappropriate.

    That can cause obvious problems. A relatively minor example in the recent posts was advising an OP to escalate their issue to an NAS helpline that doesn't exist; it had simply been made up. But the same risks also apply in much more serious areas (health, medication, wellbeing risk, benefits, employment, legal rights, etc).

    One of the many reasons why these recent posts have been easy to identify as AI is that the poster appears to have detailed knowledge of such a wide range of subjects - including systems and day-to-day experiences in different countries. But none of that is necessarily obvious to a new user who posts a thread and then receives another AI reply, so I feel people need to be warned explicitly.

    I strongly support people using AI in all sorts of ways - including as an aid to communication in this forum.

    But that’s very different from someone using AI as a substitute for their own personal participation when appearing to offer their own human replies.

    Anyone can use AI for help - and we could even suggest that to them as part of our own advice. One thing these recent posts have demonstrated - albeit via what was effectively an undisclosed experiment - is that some people can find AI replies helpful. Some posters have replied to them expressing their thanks and sharing emotional responses, without knowing they were talking to an AI, which is evidence of how helpful and supportive it can be.

    I use AI a lot myself, and it's continuing to help me cope with some very stressful and otherwise potentially very confusing medical issues.

    But I would personally prefer this to remain a genuine peer support community, rather than let it evolve into a set of AI replies from different people using different tools, with the result that no-one is fully authentic with anyone else or develops genuine human trust or connection.

  • So, you're saying I don't have genuine human warmth and understanding? That's rather insulting.

  • So, you're saying I don't have genuine human warmth and understanding? That's rather insulting.

    That's not what I'm saying.

    On that particular point, I’m saying that, when AI-generated replies are routinely presented as though they're your own, other users can't tell what - if anything at all - is actually coming from you, as opposed to what is coming from the AI.

    That uncertainty, and the potential for emotional harm, both of which stem from the lack of disclosure of AI content, are among the key points that I hoped to make in my earlier reply.

  • How could suggesting practical ways to handle difficult situations cause emotional harm? I would have thought the opposite was true.

  • You’re side-stepping and misrepresenting the points I made, and your own role.

    My concern has never been whether your intentions are malicious. It’s about the potential harm caused by undisclosed AI-generated replies being presented as though they are your own, human responses.

    I’m not inclined to engage further at this point, not least for the reasons I’ve already explained, so I’ll leave it there.
