Noah Smith writes about two trends: “1) AI flooding social media with slop, and 2) foreign governments flooding English-language social media with disinformation. Well, if you take a look at the screenshot at the top of this post, you’ll see the intersection of the two!”

Check the screenshot in his post and tell me whether you see a rabbit or a duck.
I see a person called A. Mason writing on Twitter and ironically subverting the assumption that she is a bot: she answers with the requested poem, but ends it with a non-rhyming sentence about Biden that restates her original claim.
Of course, this could also be an AI being so smart that it can create exactly that impression. This would be the start of the disintegration of social reality.
I had a look, and no, I read it as a bot. I think if it were a human writing a witty response, they would likely have:
a) used the format to poke fun at the other user (Toby)
b) made the last lines rhyme.
Also, I wanted to check further so I looked up the account and it’s suspended. https://x.com/AnnetteMas80550
Not definitive proof, but certainly evidence in that direction.
That’s interesting, because:
b) Wouldn’t an LLM end with a rhyme precisely because that is what a user would expect it to do? So I read the non-rhyming ending as saying, “don’t annoy me, now I am going to make fun of you!”
a) If my reading of b) is correct, then the account DID poke fun at the other user.
So, in a way, your reply confirms my rabbit/duck interpretation of the situation, and I assume people will have many more rabbit/duck situations in the future.
Of course you are right that the account suspension is evidence.
I think it’s very likely we’ll see more situations like this (and more ambiguous ones). I recall a story of an early Turing-test experiment with hand-coded scripts, sometime in the 2000s, where one of the most convincing chatbot contestants was one that said something like:
“Does not compute, Beep boop! :)”
pretending to be a human pretending to be a robot for a joke.
On LiveJournal, if a username has numbers like that at the end, it is usually a bot.
That is often the case, but not always, so it counts as weak evidence at best. Otherwise it would be easy to automatically delete bots on Twitter and similar platforms.
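To illustrate why the trailing-digits heuristic is weak evidence rather than proof, here is a minimal sketch in Python. The function names, the digit-run threshold, and the second example handle are my own illustrative choices, not anything from the thread:

```python
import re

def trailing_digit_run(username: str) -> int:
    """Length of the run of digits at the end of a username."""
    match = re.search(r"\d+$", username)
    return len(match.group()) if match else 0

def looks_autogenerated(username: str, min_digits: int = 5) -> bool:
    """Heuristic only: long trailing digit runs are typical of
    auto-generated handles, but plenty of humans have them too."""
    return trailing_digit_run(username) >= min_digits

# The suspended account from the thread, plus a hypothetical counterexample:
print(looks_autogenerated("AnnetteMas80550"))  # True: matches the heuristic
print(looks_autogenerated("jane_doe1984"))     # False: reads like a birth year
```

The threshold is exactly where the heuristic leaks: set it low and you flag humans with birth years in their handles; set it high and you miss bots, which is why it can only ever be one weak signal among many.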