Bots on the forum

So, I'm slightly less grumpy than yesterday, as it's cooler.

Nevertheless, something else is bothering me, quite a lot.

Bots.

I know others of our Number are particularly concerned with this too.

Some bots are easy to spot as they leave links.

Others make an innocuous post without a link, but then later post again with a dubious link.

However... I've noticed another trend where a 'person' joins the forum just to post the once on a thread.

The post usually jars with me because, although it sounds helpful, it doesn't sound to me like it comes from a real person, and it doesn't sound like an autistic person either.

The responses tend to be what I'd describe as 'glib'.

Question 1: Has anyone else noticed this?

It's happened on threads the last couple of days.

Do you think it could be bots too even though the response sounds relevant?

Also, I've occasionally wondered whether a 'person' is a bot even though they interact, so I have another question.

Question 2: Is AI so well advanced now that bots can imitate humans in the way they respond on a thread and interact?

i.e. they are not just posting in a vacuum but appear to be relating directly to you and your own responses.

  • A reply to one of your threads, Debbie, sounded straight out of the mouth of Chat GPT, but I didn’t want to say anything in case they are a genuine person. 
    Yes, this is a risk!  I have got it wrong on a couple of occasions, and when I do, it makes me feel AWFUL!

    I do share the concerns that you and others have voiced, Debbie. But it also really upsets me whenever I see any fellow user here challenging potential bots to pass some kind of communication / humanity test.

    In my view, the risk of just ONE person being made to feel unwelcome and/or quitting this community due to being challenged in that way - when they are actually a fellow autistic human - is just too great to justify any of us requiring them to jump through any evidential hoops.

    I lurked for a very long time before I even joined this community. I then continued to lurk, including - in part - because I felt afraid of being met with a challenge if and when I did summon the courage to actually post anything.

    We know that some people come here in a very distressed state, after struggling for years without support of any kind. Advice or moral support from this community can make a really positive difference to their lives. And as  says, people are also increasingly using AI to help with their communication needs. So, even when something looks suspicious, it may not be.

    I just don't tend to reply to anything that looks like an AI-generated précis and that doesn't actually ask for advice or help. Instead, I often keep an eye, over time, on that new poster's activity. If any of their posts later gets edited to include a spam link, then I report it and wait (patiently) for the mods to do their thing, in their own time. The same if a spam link later appears in their profile. Generally, I feel that's the best approach.

  • I agree with you! I do find the Nas##### names really hard to keep track of, though, and that was the user name on the suspected Chat-GPT message.

    I have used Chat-GPT extensively for help with research, and in my daily life to answer the extremely specific questions I often have. So I’m pretty familiar with how it writes; there are particular hallmarks about it that you just recognise when you read it.

    Upon reflection, I have zero problems with someone inputting what they want to say into Chat-GPT and having it reword it in a way they feel is more understandable to others. But this particular post threw me, and I think I was too harsh in judging it. I’m glad I didn’t say anything in response.
