b) Wouldn’t an LLM end it with a rhyme precisely because that is what a user would expect it to do? That’s why I read the refusal to end in a rhyme as the account saying, in effect, “don’t annoy me, now I am going to make fun of you!”
a) If my reading of b) is correct, then the account DID poke fun at the other user.
So, in a way, your reply confirms my rabbit/duck interpretation of the situation, and I expect people will run into many more rabbit/duck situations in the future.
Of course you are right that the account suspension is evidence.
I think it’s very likely we’ll see more situations like this (and more ambiguous ones than this). I recall a story about an early Turing test experiment with hand-coded scripts, sometime in the 2000s, where one of the most convincing chatbot contestants was one that said something like:
“Does not compute, Beep boop! :)”
in effect pretending to be a human who was pretending to be a robot as a joke.