Thank you; I wanted to write something like this, but you made the point clearly and concisely.
Some people say they can clearly recognize the output of an LLM. I admit I can’t see it that clearly. I just get an annoying feeling, something rubs me the wrong way, but I can’t quite put my finger on it. For example, while reading this article, I kept thinking: “maybe this text was actually written by a human, and at the end the conclusion will be: haha, you failed the Turing test, now you see how biased you are”.
If I believed that the text was written by a human, I would probably be annoyed that it is too verbose. But, you know, some real people are like that, too. I would also be like: “I am not sure what point exactly they are trying to make… there seems to be a general topic they write about, but they just write their associations with the topic, instead of focusing on what is really important (for them).” But again, actual people probably write like this all the time; ask any professional editor. Writing well is a skill that needs to be learned. I mean, the LLM was trained on human texts! Texts by verbose people are probably over-represented in the corpus. So I would be like “dude, rewrite this shorter, make your points clearly, and remove the irrelevant parts”, but I could totally believe it was written by a human.
Also, the arguments introduced by the LLM are annoying, but they are arguments that actual people make. Some of them just feel out of place on LW. I care about whether a text is correct, not about whether it is authentic. If an LLM could generate a 100% reliable Theory of Everything, I wouldn’t mind that it is a product of artificial thinking; I would be happy to read it! What I hate is automatically generated human-like mistakes. I can forgive actual humans, but why should I tolerate the same thing from a machine? If you interact with a human, the human might do a better job next time as a result. Interacting with a text someone copied from a machine’s output is useless.
(WTF is even “For the sake of genuine discourse, we need to prioritize human connection over algorithmic convenience”? What does “algorithmic convenience” even mean? Generating LLM texts is convenient. Reading them, not really. Or does generating the texts feel convenient to the LLM? I don’t care.)