My suggestion for a LLM policy for LW2: if AI-written or AI-edited text is not being posted explicitly as an AI-written/edited sample, then it must be improved by a human before posting.
If someone is posting a GPT-4 sample as a response or example of “what would GPT-4 write here?”, that is totally legitimate and doesn’t need to be edited other than to put it in blockquotes etc; if it’s an exercise in “and the punchline is, an AI wrote this!”, well, that’s fine too, and readers will upvote/downvote as they find the exercise of value. These are not the problem. The problem is when people slip in AI stuff purely as an (inferior) substitute for their own work.
I am also fine with use of AI in general to make us better writers and thinkers, and I am still excited about this. (We unfortunately have not seen much benefit for the highest-quality creative nonfiction/fiction or research, like we aspire to on LW2, but this is in considerable part due to technical choices & historical contingency, which I’ve discussed many times before, and I still believe in the fundamental possibilities there.) We definitely shouldn’t be trying to ban AI use per se.
However, if someone is posting a GPT-4 (or Claude or Llama) sample which is just a response, then they had damn well better have checked it: made sure the references exist and say what the sample claims they say, that the sample makes sense, and fixed any issues in it. If they wrote something and had the LLM edit it, then they should have checked those edits, made sure the edits are in fact improvements, and improved the improvements, instead of letting their essay degrade into ChatGPTese. And so on.
Anything else pollutes the commons. Every comment here is a gift from the author, but it’s also a gift from the readers, which they make in good faith under the belief that the author tried to make the comment worthwhile & put in enough effort that it would be worth potentially many people reading it. It should never take the author much less effort to write a comment than the readers will take to read it (as is the case with spamming sections with LLM junk that the ‘author’ didn’t even read but merely skimmed and went ‘lgtm’, judging from cases that have been flagged here in the past). Because you know, bro, I am just as capable as you are of copying a comment into the neighboring ChatGPT or Claude tab and seeing what it says; I don’t need you doing that manually on LW2, and it doesn’t help me if I have to waste time reading it to realize that I was better off ignoring it, because you are just going to paste in random average AI slop without any kind of improvement: no filtering, critique, evaluation, commentary, fact-checking, editing, curation, or comparison of LLMs...
Such comments are spam, plain and simple, indistinguishable from spammers karma-farming to flip an account: creating fake contributions to gain status in order to parasitize the community without giving anything in return. And should be treated as such: downvoted, and banned.