Bots on the forum

So, I'm slightly less grumpy than yesterday, as it's cooler.

Nevertheless, something else is bothering me, quite a lot.

Bots.

I know others of our Number are particularly concerned with this too.

Some bots are easy to spot as they leave links.

Others make an innocuous post without a link, but then later make another post with a dubious link.

However ... I've noticed another trend where a 'person' joins the forum just to post the once on a thread.

The post usually jars with me because, although it sounds helpful, it doesn't sound to me like it comes from a real person, and it doesn't sound like an autistic person either.

The responses tend to be what I'd describe as 'glib'.

Question 1: Has anyone else noticed this?

It's happened on threads the last couple of days.

Do you think it could be bots too even though the response sounds relevant?

Also, I've occasionally wondered whether a 'person' is a bot even though they interact, so I have another question.

Question 2: Is AI so well advanced now that bots can imitate humans in the way they respond on a thread and interact?

i.e. they are not just posting in a vacuum but appear to be relating directly to you and to your own responses.

  • Question 2: Is AI so well advanced now that bots can imitate humans in the way they respond on a thread and interact?

    I think yes, probably. Or at least that's the net effect, to an extent. I think a generic response, particularly in the language used and the sentence construction, can lead me to suspect AI, but even so, it probably won't be long before there's nothing to distinguish an online bot from an online human. 

    I agree with you re. the post you linked, BTW; it reads like ChatGPT.

    Of course, another problem is that humans can use ChatGPT to write text for them, and there may be various reasons for it, not all of them nefarious.

  • Of course another problem is that humans can use ChatGPT to write text for them and there may be various reasons for it, not all of them nefarious.

    I can confirm that this does definitely happen here, and that on a couple of occasions, the intent has been wholly nefarious.


  • the intent has been wholly nefarious

    Quite possibly. I note a frequency of 'first posts' which I would hazard a guess are AI-generated. Often they are, in fact, requesting personal information under the umbrella (guise?) of shared experience.

    Maybe these are some sort of harvesting bot, collecting data from which to learn.