(Note that LLM-written or edited comments are not looked on too kindly on LW2, unless they are making a point, and if you are doing it as a joke, it is likely to backfire.)
Huh? Why is it problematic to have your text edited by an LM? I do this all the time for minor clarity improvements and to fix typos.
I agree that it would be rude to have a part or all of the comment be totally LM written without flagging this.
In this case, seems plausible that the person asked gpt-4 to just summarize the paper and then pasted that in.
It is suspicious because it reeks so heavily of ChatGPTese that it suggests the human may have had little or no input and put no effort into it; ‘fixing typos’ is entirely unobjectionable… and doesn’t produce a comment that looks like pure ChatGPTese, down to ‘delves’ and ‘intricacies’ and ‘highlights’ etc.* (Which also means that it could contain confabulations. I’ve called out comments here before for just copying ChatGPT output which contained confabulations, and which the author should’ve known to check because the assertions were implausible. EDIT: another example, apparently)
I almost flagged it for spam before I checked the account and saw that it looked like an unusually old account, and had a few legit-looking comments, and was probably a person who didn’t realize just how bad the comment looks. It’s not necessarily something people will go out on a limb to take the risk of telling you, any more than they will necessarily tell you your fly is down or you have BO, rather than downvote/spam and move on.
* you should also be wary of ‘minor clarity improvements’ suggested by ChatGPT/Claude. I find a lot of them make prose worse, especially if you apply most of them so the gestalt becomes ChatGPTese.
I agree with this; my editing process is to get the new output, view the diff, and then copy in changes one by one if they seem like a good change.
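That selective-review workflow can be sketched with Python's `difflib` — view the LLM's edits as a unified diff and accept each hunk by hand rather than pasting the output wholesale. The sample text and labels here are made up for illustration:

```python
import difflib

# Hypothetical example: your original text vs. an LLM-edited version.
original = "The cat sat on the mat.\nIt was a sunny day.\n"
edited = "The cat sat on the mat.\nIt was a bright, sunny day.\n"

# Produce a unified diff so each change can be reviewed individually
# before being copied back into the real document.
diff = difflib.unified_diff(
    original.splitlines(keepends=True),
    edited.splitlines(keepends=True),
    fromfile="original",
    tofile="llm_edit",
)
for line in diff:
    print(line, end="")
```

The point of reviewing at the diff level is that each `+`/`-` pair is an explicit decision you endorse, rather than a wall of rewritten text you skim.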
Another word for ChatGPTese is “slop”, per https://simonwillison.net/2024/May/8/slop/.
(My model is also that limited text-editing, where it’s clear that you actually read every sentence and feel comfortable endorsing everything written, is OK. The above comment does seem more like a quick copy-paste, in a way I do think is bad for LW.)
I rephrased it to provide a hint about the paper’s content, making it easier for others to locate and read the original work. I am very familiar with the paper and have studied it to some extent. I didn’t want to share ‘what I think,’ hoping smarter people can make more out of it by reading the whole thing, as I am not active here and don’t care about upvotes. However, I was uncertain about the legality of sharing the entire paper or any excerpts/data since I don’t hold the rights to it.
I find the reaction quite disappointing because the paper addresses very relevant ‘real-life’ versions of the points mentioned here through fiction and encourages a broader understanding of the history of poverty beyond Western perspectives, as we do here. I was curious if anyone here has actually read it; discussing poverty without examining its underlying principles and issues in the historical data, as the paper does, seems contradictory for this community.
Instead of engagement with the paper, my gpt-rephrasing to provide context was met with heavy downvotes, dismissal, and a hidden comment. I’m curious about the rationale behind this approach. Wouldn’t it be more valuable to keep the comment open and ensure that this important research remains accessible, even if it includes a rephrasing?
By downvoting and hiding my comment, the community risks overlooking significant insights that could contribute to our understanding of poverty. Isn’t it more beneficial to prioritize the research and promote such relevant work?