I had a look, and no, I read it as a bot. I think if it were a human writing a witty response, they would likely have:
a) used the format to poke fun at the other user (Toby)
b) made the last lines rhyme.
Also, I wanted to check further so I looked up the account and it’s suspended. https://x.com/AnnetteMas80550
Not definitive proof, but certainly evidence in that direction.
That’s interesting, because
b) Wouldn’t an LLM end with a rhyme precisely because that’s what a user would expect it to do? So I read the missing rhyme as a way of saying “don’t annoy me, now I’m going to make fun of you!”
a) And if my reading of b) is correct, then the account DID poke fun at the other user.
So, in a way, your reply confirms my rabbit/duck interpretation of the situation, and I assume people will have many more rabbit/duck situations in the future.
Of course you are right that the account suspension is evidence.
I think it’s very likely we’ll see more situations like this (and more ambiguous ones than this). I recall a story about an early Turing test experiment using hand-coded scripts, sometime in the 2000s, where one of the most convincing chatbot contestants was one that said something like:
“Does not compute, Beep boop! :)”
pretending to be a human pretending to be a robot for a joke.