Huh? Why is it problematic to have your text edited by an LM? I do this all the time for minor clarity improvements and to fix typos.
I agree that it would be rude to have a part or all of the comment be totally LM written without flagging this.
In this case, seems plausible that the person asked gpt-4 to just summarize the paper and then pasted that in.
It is suspicious because it reeks so heavily of ChatGPTese that it suggests the human may have had little or no input and put no effort into it; ‘fixing typos’ is entirely unobjectionable… and doesn’t produce a comment that looks like pure ChatGPTese, down to ‘delves’ and ‘intricacies’ and ‘highlights’ etc.* (Which also means that it could contain confabulations. I’ve called out comments here before for just copying ChatGPT output which contained confabulations, and which the author should’ve known to check because the assertions were implausible. EDIT: another example, apparently)
I almost flagged it for spam before I checked the account and saw that it was an unusually old account with a few legit-looking comments, probably a person who didn’t realize just how bad the comment looks. It’s not necessarily something people will go out on a limb to tell you, any more than they will necessarily tell you your fly is down or you have BO, rather than downvote/spam and move on.
* you should also be wary of ‘minor clarity improvements’ suggested by ChatGPT/Claude. I find a lot of them make prose worse, especially if you apply most of them so the gestalt becomes ChatGPTese.
I agree with this; my editing process is to get the new output, view the diff, and then copy in changes one by one if they seem like good changes.
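A minimal sketch of that diff-then-cherry-pick workflow, assuming plain text in and out and using Python’s standard difflib; the review_changes helper and the sample strings are hypothetical, not anyone’s actual tooling:

```python
import difflib

def review_changes(original: str, edited: str) -> str:
    """Render an LLM's suggested rewrite as a unified diff so each
    change can be accepted or rejected individually, instead of
    pasting the whole rewrite in wholesale."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        edited.splitlines(keepends=True),
        fromfile="draft",
        tofile="llm_edit",
    )
    return "".join(diff)

# Example: inspect the diff, then copy over only the changes you endorse
# (here, keeping the typo fix while rejecting the 'delves'/'intricacies'
# phrasing would be the point of reviewing line by line).
draft = "Teh model studies the details of the problem.\n"
suggestion = "The model delves into the intricacies of the problem.\n"
print(review_changes(draft, suggestion))
```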
Another word for ChatGPTese is “slop”, per https://simonwillison.net/2024/May/8/slop/.
(My model is also that limited text-editing, where it’s clear that you actually read every sentence and feel comfortable endorsing everything written, is OK. The above comment does seem more like a quick copy-paste, in a way I do think is bad for LW.)