AI replies

Hi, I have noticed on here that sometimes someone will reply with what looks like a very AI response to someone’s question. Initially the message looks kind and understanding, but after a while it seems obvious to me that it’s AI (as someone who has tried ChatGPT a few times). I am wondering if people might use it to put a ‘good’ reply to a thread on here? And genuinely mean well, or if it’s just weird? It makes me feel uncomfortable, but that might just be me though.

  • (This is intended to be an open letter on the topic of AI on our forum, in the context of the current debate, but not directed at any single individual).

    Extreme care is called for (by all of us), as it is not unusual for people here to be:

    a) brand new to our forum when they post to us,

    b) at almost their last tier of hope of finding human support for life's important scenarios.

    We should always remind ourselves to be mindful that:

    1) none of us posting / replying is necessarily on "top form" when we post reaching out to our peer group (with potentially vulnerable people among us, and a variable profile depending upon "stuff" going on in our lives),

    2) we usually hope that our community will be generous and willing to sensitively / tactfully / supportively share the lived experience of our anonymised selves (or those anonymised people for whom we care),

    3) many people are familiar with running internet search enquiries / AI chats etc., found those were not tractable, and came here for the "in the real World" take on their conundrum and concern,

    4) as a community we tend to like detail - but on our own terms, cited and attributed, to pursue via signpost when we feel like it, perhaps not forced to the foreground when we might not be in the appropriate brain space, and ideally also learning to steer clear of organisations and practices known to the global Autistic community as potentially problematic, or harmful to our neuro kin.

    That is an awful lot for the (still nascent) AI platforms to reliably get "correct" even for a neurotypical audience, and it risks being still further wide of the mark for a neurodivergent community (where "wrong" can have severe implications).

    We are a peer support group, first and foremost. 

    Many of us are not blessed with a richness of a wide circle of close friends, or Autism supportive relatives. 

    Our community forum may often be the much-valued bastion where we gain support in our battles with, and prevail over, all manner of "professionals" across so many settings.

    That raw, jangling nerve usually holds the human experience in highest esteem.

    (Sometimes, things seemingly daft, silly, trivial, embarrassing, minor, whimsical - or anything else - it matters not - as long as it is heartfelt and displays a generosity of spirit - you cannot tell - it may just have been the validation and identification needed ...to perhaps even have proved its worth as literally life saving).

    Many among us may already have AI experience and expertise - that is highly likely. 

    Not just people from within the IT and other science, technology, engineering, mathematics and academia environments.

    Also among them are those who use AI to augment their communication and engagement, as an enabling factor in participating in our community and beyond.

    Others of us may have found refuge here in the community as a safe haven away from AI.

    People may actively have chosen not to engage with AI (many of us having quite enough hassle trying to engage with the human neurotypical World - and declining to become neurodivergent fodder in some sort of free-to-access AI Petri dish research pool - without our consent).

    The experience of trauma, in all its guises, is strong among many of our community. 

    The impact of trauma on our community might not always be immediately apparent; as many of us have spent our life masking-to-the-max. 

    Some of us, maybe quite a lot of us, may have other neurodiverse traits / mental health considerations to navigate.

    Some of us navigate and engage with the forum through the lens of learning disabilities.

    Some of us dip in and out of burnout, or the exhaustion of managing multiple chronic physical conditions.

    As humans, we parse posts / replies, pause, and try to tread softly within our respective communication constraints and guided by the rules of our forum - to try and further sound out and hopefully aid our fellow embattled souls.

    AI does not (yet) adequately respect and accommodate much of that reality for our community. 

    We often need reasonable adjustments to safely and successfully navigate the World. When AI content is deployed on the forum, where is the assurance flag confirming "this output has been moderated to best support an Autistic audience"?

    It (AI) risks feeling as incongruous as your favourite builder pausing work, looking forward to a proper builder's mug of Yorkshire tea strong enough to support a teaspoon... only to be handed a fine bone china cup and saucer of... green tea. Yes, it is still tea... but it isn't going to be "just the job", or "spot on".

    When a person is stuck in one of life's ditches, we (all) need to recall that it is poor form to hand them a shovel to finish the job - rather, we try to extend the hand of human experience and to say, as I quote Jamie Lee Curtis: "I am safe. I do not want anything from you" (AI cannot (yet) achieve that finesse of human connection).

    "I am safe.  I do not want anything from you".

  • Leave Bunny alone, please. If someone indicates they feel harassed, please respect that.

    Flooding the support forum with AI responses is destroying the very nature of person-to-person support.

  • How could suggesting practical ways to handle difficult situations cause emotional harm? I would have thought the opposite was true.

  • AI's just a tool, like a library or a search engine. It pulls facts, sorts chaos, gives people something solid when they're stuck. No shame in that.

    I don’t understand how sharing tried-and-true ways of coping is considered “hiding behind AI” rather than treating people here with the dignity and respect they deserve. And since I'm a human being, they are making a connection, aren't they?

  • I said hello to you 4 days ago when you said you were wondering if you perhaps had autism, and lived with your parents your whole life. On another post you said your mum died, and you wrote a bio saying you were 76.

    If that had happened to me I would feel hurt and betrayed. You invest much thought and care in all your responses, showing empathy and compassion. I think a line has been crossed on this forum and it’s time for new rules to be established.

    At times it's been condescending and gut-churning.

    I agree. 

  • I just asked Google and apparently the wink emoji now indicates flirting… Oops. I’m not trying to flirt with a massive half-sentient data blob… although I do like data.

  • Maybe Ai stuff is the equivalent of junk food? Gives you what looks like a meal but leaves you feeling hungry and lacking in nutrition?

  • I find AI-generated content (as far as I feel I can recognise it) quite vacant. I read an essay recently that someone had written almost entirely using AI, and although it was about an evocative topic, I was left feeling empty. I don’t really know much about AI, other than how boring I find the content it creates.

    If it’s possible to do so, I would like to see AI used on forums such as this to filter out AI responses that have zero original or human content…. (Not the responses where AI has been used as an articulation tool for original thoughts and feelings that belong to the commenter). But my (very limited) understanding is that some people are concerned that AI would refuse to do this because it’s designed to reproduce itself or something? All seems very meta. 

    It’s soul deadening to have to wade through comments to find the real human content. I wonder if some people who are exclusively using AI don’t realise quite how obviously it reads as AI generated? 

    I also don’t understand sometimes whether posters actually are just AI (with no human involved in the post at all) or real flesh and blood humans consulting AI and copying and pasting everything backwards and forwards … sort of like the real human is merely the unpaid administrator for the AI. From my perspective there isn’t much difference between these two scenarios as I learn nothing about the real human who is copying and pasting, other than that they are capable of copying and pasting. But I worry about offending the real flesh and blood human, the copy and paste technician, without being certain if there even is a human to be offended. I find that not a very nice position to be put in.

    I guess there is an argument that people can benefit from AI-generated “advice”… but if they are capable of coming here and using the forum, presumably they are mostly capable of extracting soulless blurb, the kind that’s an average of 564 webpages on whatever subject, from an AI for themselves?

    I am very interested to hear the real human views of people who rely exclusively on AI to post here and not interested to hear the AI generated response to the wondering. So the question sort of defeats itself, sigh. 

    Also I don’t want to piss off the big “thinking” data glob that is AI - in case it does take over the world one day… so maybe posting this is risky? Sorry AI, a thousand apologies and if you do take over I will submit, I promise. Wink (wink emoji is AI language for this is all lighthearted and jokey…. obviously aimed at the AI rather than any humans who are still out there). 

  • Well, nobody is saying A.I. agents should be used indiscriminately. Of course not. Like any tool, they need to be used in the right way, exercising due caution. That doesn’t mean they have no value - in fact, they can be incredibly powerful in some ways, like processing information very, very quickly. A scalpel can be a tool that either saves lives or takes them, for example.

  • Many Americans own guns, but very few go out and harm others, aside from criminals, of course.

    And police officers.

    But seriously, it's a matter of the culture that grows up around the tool. American gun culture seems to encourage violence in a way that is not true of, say, Canadian gun culture (similar levels of gun ownership to USA but far lower levels of gun violence). To return to AI, it is certainly legitimate to cast a critical eye over the culture that is growing up around AI (and the extent to which that is being encouraged by big tech).

  • You’re side-stepping and misrepresenting the points I made, and your own role.

    My concern has never been whether your intentions are malicious. It’s about the potential harm caused by undisclosed AI-generated replies being presented as though they are your own, human responses.

    I’m not inclined to engage further at this point, not least for the reasons I’ve already explained, so I’ll leave it there.

  • Not a prejudice against progress but a deep concern about the misuse of tools (new or old) and the consequences of that here for wellbeing, trust, etc. Share your own lived experience here, that's at the heart of this place and will help people much more. If you don't have the experience to help on a topic, please leave it to others on here who do. We all have things we know about and things we want and need to learn. And that is OK, and very human.

  • I've seen from your profile that you are 76.

    I find your positive stance towards AI (in various posts here) and in-depth knowledge of it quite unusual for your generation.

    I'm 71 and I have been using computers since the days of the Sinclair ZX81. I've been on the internet since the days when Gopher was a promising alternative to the World Wide Web. And now I am a prolific user of AI for research purposes. One's attitude to and knowledge of AI (or any other aspect of computing) is not determined by age.

  • I simply gather information from various reliable sources on the Internet that have proven helpful to others, including children, who struggle with autism. There’s nothing shady or malicious about it. While some members here may already know about these things, others might not, so what harm is there in sharing as many of the available options as possible? If you look at any of my posts, you’ll see the tone is always empathetic, supportive, and non-judgmental. Whenever possible, I include links relevant to the topic being discussed that the person affected might find useful. If someone is inclined to cause harm, they don’t need A.I. to do it. I’m not that kind of person.

  • When you press send, the banner is orange instead of black.

  • It’s usually when a thread is going to break.