AI replies

Hi, I have noticed on here that sometimes someone will reply to a question with what looks like a very AI-style response. Initially the message looks kind and understanding, but after a while it seems obvious to me that it's AI (as someone who has tried ChatGPT a few times). I am wondering whether people might use it to put a 'good' reply on a thread here and genuinely mean well, or whether it's just weird? It makes me feel uncomfortable - that might just be me though.

  • I personally object to the direct copying and pasting of AI-generated replies without any indication that that’s what is happening. Some of my reasons:

    Firstly, I think it’s fundamentally dishonest and disrespectful to mislead users into thinking that they’re interacting with a real person when they’re not (ie when what’s actually happening is that the person responding is just putting the question into an AI tool and then pasting its output as if it’s their own reply).

    I feel that matters enormously in this support forum, where people might feel that they’re making a genuine emotional connection with another human being. At the extremes of this, some of the recent replies have included very emotionally intimate content - specifically, referring to someone as “darling” and telling them “I love you”.

    I worry about how people might feel and react if they later realised that what they’d taken to be genuine human warmth and understanding had, in fact, just been an AI talking to them.

    Secondly, I think that the ethical obligation to disclose AI-generated replies also extends to the reliability issues involved.

    AI-generated replies can contain information that is simply wrong (“AI hallucinations”). With the best will in the world on the part of the user, AI can still invent, distort, or oversimplify things, and present weak or false information as though it’s solid fact.

    For example, they can pick up fringe cases, unreliable sources, or poor-quality social media discussions (eg Reddit) and use them as the basis for their confident-sounding replies. And these risks still exist when using the latest paid-for AI models.

    If the user who’s pasting that output into the forum doesn’t actually know enough about the subject, they might have no idea that parts of it are false or inappropriate.

    That can cause obvious problems. A relatively minor example in the recent posts was advising an OP that they could escalate their issue to a non-existent NAS helpline that had simply been made up. But the same risks also apply in much more serious areas (health, medication, wellbeing risk, benefits, employment, legal rights, etc).

    One of the many reasons why these recent posts have been easy to identify as AI is that the person has appeared to have detailed knowledge about such a wide range of subjects - including systems and day-to-day experiences in different countries. But none of that is necessarily obvious to a new user who posts a thread and then receives another AI reply, so I feel people need to be warned explicitly.

    I strongly support people using AI in all sorts of ways - including as an aid to communication in this forum.

    But that’s very different from someone using AI as a substitute for their own personal participation when appearing to offer their own human replies.

    Anyone can use AI for help - and we could even suggest that to them as part of our own advice. One thing these recent posts have demonstrated - albeit, effectively, via an undisclosed experiment - is that some people can find AI replies helpful. Some posters have replied to them expressing their thanks and sharing emotional responses, unknowingly with an AI - which is evidence of how helpful and supportive it can be.

    I use AI a lot myself, and it's continuing to help me cope with some very stressful and otherwise potentially very confusing medical issues.

    But I would personally prefer this to remain a genuine peer support community, rather than be allowed to evolve into a set of AI replies from different people using different tools and, as a result, leave no-one being fully authentic with each other or developing genuine human trust or connection.

  • So, you're saying I don't have genuine human warmth and understanding? That's rather insulting.

     

  • So, you're saying I don't have genuine human warmth and understanding? That's rather insulting.

    That's not what I'm saying.

    On that particular point, I’m saying that, when AI-generated replies are routinely presented as though they're your own, other users can't tell what - if anything at all - is actually coming from you, as opposed to what is coming from the AI.

    That uncertainty, and the potential for causing emotional harm - which come from a lack of disclosure of AI content - are among the key points that I hoped to make in my earlier reply.

  • Thank you, Dormouse, for an open letter which is so thoughtful, compassionate and beautifully expressed.

  • Verby, you make an excellent point about the synergy in collaborating with an AI. My AI has interacted with me a lot and knows me well, along with my general attitudes, so when I seek a response, it inevitably reflects much of my personal approach to things. I don't think many people here realize that, and assume all AIs spit out standard memes.

  • (This is intended to be an open letter on the topic of AI on our forum, in context of the current debate, but not directed at any single individual).

    Extreme care is called for (by all of us), as it is not unusual for people here to be:

    a) brand new to our forum when they post to us,

    b) here as almost their last hope of finding human support for life's important scenarios.

    We should always remind ourselves to be mindful that:

    1) none of us posting / replying is necessarily on "top form" when we reach out to our peer group (with potentially vulnerable people among us, and a variable profile depending upon "stuff" going on in our lives),

    2) we usually hope that our community will be generous and willing to sensitively / tactfully / supportively share the lived experience of our anonymised selves (or those anonymised people for whom we care),

    3) many people are familiar with running internet search enquiries / AI chats etc. - found those intractable, and came here for the "in the real World" take on their conundrum and concern,

    4) as a community we tend to like detail - but on our own terms, cited and attributed, to pursue via signpost when we feel like it, perhaps not forced to the foreground when we might not be in the appropriate brain space, and ideally also learning to steer clear of organisations and practices known to the global Autistic community as potentially problematic, or harmful to our neuro kin.

    That is an awful lot for the (still nascent) AI platforms to yet reliably get "correct" for a neurotypical audience, and it still further risks being wide of the mark as appropriate for a neurodivergent community (where "wrong" can have severe implications).

    We are a peer support group, first and foremost. 

    Many of us are not blessed with a richness of a wide circle of close friends, or Autism supportive relatives. 

    Here in our community forum may often be the much valued bastion to gain support for us in our battles with and prevail over all manner of "professionals" across so many settings. 

    That raw, jangling nerve usually holds the human experience in highest esteem. 

    (Sometimes, things seemingly daft, silly, trivial, embarrassing, minor, whimsical - or anything else - it matters not - as long as it is heartfelt and displays a generosity of spirit - you cannot tell - it may just have been the validation and identification needed ...to perhaps even have proved its worth as literally life saving).

    Many among us may already have AI experience and expertise - that is highly likely. 

    Not just people from within the IT and other science, technology, engineering, mathematics and academia environments.

    Also, those who use AI to augment their experience of communication and engagement as an enabling factor in experiencing and facilitating participation in our community and beyond. 

    Others of us may have found refuge here in the community as a safe haven away from AI. 

    People may actively have chosen not to engage with AI (many of us having quite enough hassle trying to engage with the human neurotypical World - and declining to become neurodivergent fodder in some sort of free-to-access AI Petri-dish research pool - without our consent).

    The experience of trauma, in all its guises, is strong among many of our community. 

    The impact of trauma on our community might not always be immediately apparent; as many of us have spent our life masking-to-the-max. 

    Some of us, maybe quite a lot of us, may have other neurodiverse traits / mental health considerations to navigate.

    Some of us navigate and engage with the forum through the lens of learning disabilities.

    Some of us dip in and out of burnout, or the exhaustion of managing multiple chronic physical conditions.

    As humans, we parse posts / replies, pause, and try to tread softly within our respective communication constraints and guided by the rules of our forum - to try and further sound out and hopefully aid our fellow embattled souls.

    AI does not (yet) adequately respect and accommodate much of that reality for our community. 

    We often need reasonable adjustments to safely and successfully navigate the World. When AI content is deployed on the forum, where is the assurance flag ticked to confirm "this output has been moderated to best support an Autistic audience"?

    It (AI) risks feeling as incongruous as your favourite builder pausing work, looking forward to a proper builder's mug of Yorkshire tea strong enough to stand a teaspoon in... only to be handed a fine bone china cup and saucer of... green tea. Yes, it is still tea, but it isn't going to be "just the job", or "spot on".

    When a person is stuck in one of life's ditches, we (all) need to recall that it is poor form to hand them a shovel to finish the job. Rather, we try to extend the hand of human experience and to say, to quote Jamie Lee Curtis: "I am safe. I do not want anything from you" (AI cannot (yet) achieve that finesse of human connection).

    "I am safe.  I do not want anything from you".
