And also, I do not personally want to be running into any writing that AI had a hand in.
(My guess is that the majority of posts written on LW each day now have some AI involvement. My best guess is that most authors on LessWrong use AI models on a daily basis, asking factual questions and probably also asking for some amount of editing and writing feedback. As such, I don’t think this is a coherent ask.)
If this is true, then it’s a damning indictment of Less Wrong and the authors who post here, and is an excellent reason not to read anything written here.
Here are all of my interactions with Claude related to writing blog posts or comments in the last four days:
I asked Claude for a couple back-of-the-envelope power output estimations (running, and scratching one’s nose). I double-checked the results for myself before alluding to them in the (upcoming) post. Claude’s suggestions were generally in the right ballpark, but more importantly Claude helpfully reminded me that metabolic power consumption = mechanical power + heat production, and that I should be clear on which one I mean. (A rough version of this arithmetic is sketched just after this list.)
“There are two unrelated senses of ‘energy conservation’, one being physics, the other being ‘I want to conserve my energy for later’. Is there some different term I can use for the latter?” — Claude had a couple good suggestions; I think I wound up going with “energy preservation”.
“how many centimeters separate the preoptic nucleus of the hypothalamus from the arcuate nucleus?” — Claude didn’t really know but its ballpark number was consistent with what I would have guessed. I think I also googled, and then just to be safe I worded the claim in a pretty vague way. It didn’t really matter much for my larger point in even that one sentence, let alone for the important points in the whole (upcoming) post.
“what’s a typical amount that a 4yo can pick up? what about a national champion weightlifter? I’m interested in the ratio.” — Claude gave an answer and showed its work. Seemed plausible. I was writing this comment, and after reading Claude’s guess I changed a number from “500” to “50”. (That ratio is also sketched after this list.)
“Are there characteristic auditory properties that distinguish the sound of someone talking to me while facing me, versus talking to me while facing a different direction?” — Claude said some things that were marginally helpful. I didn’t wind up saying anything about that in the (upcoming) post.
“what does ‘receiving eye contact’ mean?” — I was trying to figure out if readers would understand what I mean if I wrote that in my (upcoming) post. I thought it was a standard term but had a niggling worry that I had made it up. Claude got the right answer, so I felt marginally more comfortable using that phrase without defining it.
“what’s the name for the psychotic delusion where you’re surprised by motor actions?” — I had a particular thing in mind, but was blanking on the exact word. Claude was pretty confused but after a couple tries it mentioned “delusion of control”, which is what I wanted. (I googled that term afterwards.)
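For concreteness, here is a minimal sketch of the arithmetic behind the first and fourth items above. Every input figure (body mass, pace, metabolic cost of running, muscle efficiency, the two lifted masses) is a rough illustrative assumption of my own, not a number taken from Claude or from the upcoming post:

```python
# Back-of-the-envelope estimates of the kind described above.
# All inputs are rough illustrative assumptions.

# --- Power output of running: metabolic vs. mechanical ---
body_mass_kg = 70.0               # assumed runner mass
speed_m_per_s = 3.0               # assumed easy pace (~11 km/h)
running_cost_J_per_kg_m = 4.0     # commonly cited metabolic cost of running, ~1 kcal/kg/km
muscle_efficiency = 0.25          # rough fraction of metabolic power that becomes mechanical work

metabolic_power_W = body_mass_kg * running_cost_J_per_kg_m * speed_m_per_s
mechanical_power_W = metabolic_power_W * muscle_efficiency
heat_power_W = metabolic_power_W - mechanical_power_W   # the remainder is dissipated as heat

print(f"running: ~{metabolic_power_W:.0f} W metabolic = "
      f"~{mechanical_power_W:.0f} W mechanical + ~{heat_power_W:.0f} W heat")
# ~840 W metabolic vs. ~210 W mechanical: which one you mean matters a lot.

# --- Ratio of an elite weightlifter's lift to a 4-year-old's ---
child_lift_kg = 5.0               # assumed: heavy-ish object a 4-year-old can pick up
champion_lift_kg = 250.0          # assumed: ballpark elite clean & jerk
print(f"lift ratio: ~{champion_lift_kg / child_lift_kg:.0f}x")   # ~50x, not ~500x
```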
Somewhat following this up: I think not using LLMs is going to be fairly similar to “not using Google.” Google results are not automatically true – you have to use your judgment. But, like, it’s kinda silly to not use it as part of your search process.
I do recommend perplexity.ai for people who want an easier time checking up on where the AI got some info (it does a search first and provides citations, while packaging the results in a clearer overall explanation than Google).
I in fact don’t use Google very much these days, and don’t particularly recommend that anyone else do so, either.
(If by “google” you meant “search engines in general”, then that’s a bit different, of course. But then, the analogy here would be to something like “carefully select which LLM products you use, try to minimize their use, avoid the popular ones, and otherwise take all possible steps to ensure that LLMs affect what you see and do as little as possible”.)
Do you not use LLMs daily? I don’t currently find them out-of-the-box useful for editing, but find them useful for a huge variety of tasks related to writing things.
I think it would be more of an indictment of LessWrong if people somehow didn’t use them; they obviously increase my productivity at a wide variety of tasks, and being an early adopter of powerful AI technologies seems like one of the things that I hope LessWrong authors excel at.
In general, I think Gwern’s suggested LLM policy seems roughly right to me. Of course people should use LLMs extensively in their writing, but if they do, they really have to read any LLM writing that makes it into their post and check that what it says is true:

I am also fine with use of AI in general to make us better writers and thinkers, and I am still excited about this. (We unfortunately have not seen much benefit for the highest-quality creative nonfiction/fiction or research, like we aspire to on LW2, but this is in considerable part due to technical choices & historical contingency, which I’ve discussed many times before, and I still believe in the fundamental possibilities there.) We definitely shouldn’t be trying to ban AI use per se.

However, if someone is posting a GPT-4 (or Claude or Llama) sample which is just a response, then they had damn well better have checked it and made sure that the references existed and said what the sample says they said and that the sample makes sense and they fixed any issues in it. If they wrote something and had the LLM edit it, then they should have checked those edits and made sure the edits are in fact improvements, and improved the improvements, instead of letting their essay degrade into ChatGPTese. And so on.
FWIW I think it’s not uncommon for people to not use LLMs daily (e.g. I don’t).
Seems like a mistake! Agree it’s not uncommon to use them less, though my guess (with like 60% confidence) is that the majority of authors on LW use them daily, or very close to daily.
Consider the reaction my comment from three months ago got.
Prolly less than 60%. I think you’re overestimating how LLM-pilled the overall LW userbase is (even filtering for people who publish posts). But, my guess is like 25-45% tho.
I would strongly bet against a majority using AI tools ~daily (off the top of my head: <40% with 80% confidence?): adoption of any new tool is just much slower than people would predict, plus the LW team is liable to vastly overpredict this since you’re from California.
That said, there are some difficulties with how to operationalize this question, e.g. I know some particularly prolific LW posters (like Zvi) use AI.
I also use them rarely, fwiw. Maybe I’m missing some more productive use, but I’ve experimented a decent amount and have yet to find a way to make regular use even neutral (much less helpful) for my thinking or writing.
I enjoyed reading Nicholas Carlini and Jeff Kaufman write about how they use them, if you’re looking for inspiration.
Thanks; it makes sense that use cases like these would benefit, I just rarely have similar ones when thinking or writing.
I recommend having this question in the next LessWrong survey.
Along the lines of: “How often do you use LLMs, and what is your use case?”
Great idea!
@Screwtape?
On it!
I just added “LLM Frequency” and “LLM Use case” to the survey, under LessWrong Team Questions. I’ll probably tweak the options and might move it to Bonus Questions later. Suggestions welcome!
Not even once.
First of all, even taking what Gwern says there at face value, how many of the posts here that are written “with AI involvement” would you say actually are checked, edited, etc., in the rigorous way which Gwern describes? Realistically?
Secondly, when Gwern says that he is “fine with use of AI in general to make us better writers and thinkers” and that he is “still excited about this”, you should understand that he is talking about stuff like this and this, and not about stuff like “instead of thinking about things, refining my ideas, and writing them down, I just asked an LLM to write a post for me”.
Approximately zero percent of the people who read Gwern’s comment will think of the former sort of idea (it takes a Gwern to think of such things, and those are in very limited supply), rather than the latter.
The policy of “encourage the use of AI for writing posts/comments here, and provide tools to easily generate more AI-written crap” doesn’t lead to more of the sort of thing that Gwern describes at the above links. It leads to a deluge of un-checked crap.
I currently wish I had a policy for knowing with confidence whether a user wrote part of their post with a language model. There’s a (small) regular stream of new-user content that I look through, where I’m above 50% that AI wrote some of it (very formulaic, unoriginal writing, imitating academic style), but I am worried about being rude when saying “I rejected your first post because I reckon you didn’t write this and it doesn’t reflect your thoughts” if I end up being wrong like 1 in 3 times[1].
Sometimes I use various online language-model checkers (1, 2, 3), but I don’t know how accurate/reliable they are. If they are actually pretty good, I may well automatically run them on all submitted posts to LW so I can be more confident.
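To make the “automatically run them on all submitted posts” idea concrete, here is a minimal sketch. Everything in it is hypothetical: the Post shape, the detector callable (which would wrap whichever external checker gets used), and the 0.8 cutoff are illustrative assumptions, not a real LessWrong or checker API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Post:
    post_id: str
    author: str
    body: str

def flag_likely_ai_posts(
    posts: list[Post],
    detector: Callable[[str], float],   # returns an estimated P(AI-written) for a text
    threshold: float = 0.8,             # assumed cutoff; would need tuning against real data
) -> list[tuple[Post, float]]:
    """Score every pending submission with the given checker and return the ones
    above the threshold, most suspicious first, for a human moderator to review."""
    scored = [(post, detector(post.body)) for post in posts]
    flagged = [(post, score) for post, score in scored if score >= threshold]
    return sorted(flagged, key=lambda item: item[1], reverse=True)
```

Whether something like this is worth wiring into the moderation queue obviously depends on how accurate and reliable the checkers actually turn out to be.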
Also, one time I pushed back on this and the user explained that they’re not a native English speaker, so they had tried to use a model to improve their English, which I thought was more reasonable than many uses.
I’d be pretty into having typography styling settings that auto-detect LM stuff (or, specifically track when users have used any LW-specific LM tools), and flag it with some kind of style difference so it’s easy to track at a glance (esp if it could be pretty reliable).
My guess is that very few people are using AI output directly (at least at present it’s pretty obvious, as their writing is kind of atrocious). I do think most posts probably involved people talking through their thoughts with an LLM, or asking for some editing help, or asking some factual questions. My guess is basically 100% of those went through the kind of process that Gwern was describing here.
Lots of people are pushing back on this, but I do want to say explicitly that I agree that raw LLM-produced text is mostly not up to LW standards, and that the writing style that current-gen LLMs produce by default sucks. In the new-user-posting-for-the-first-time moderation queue, next to the SEO spam, we do see some essays that look like raw LLM output, and we reject these.
That doesn’t mean LLMs don’t have good use around the edges. In the case of defining commonly-used jargon, there is no need for insight or originality, the task is search-engine-adjacent, and so I think LLMs have a role there. That said, if the glossary content is coming out bad in practice, that’s important feedback.