LessWrong team member / moderator. I’ve been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I’ve been interested in improving my own epistemic standards and helping others to do so as well.
Raemon
I agree it probably shouldn’t have been negative karma (I think that’s due to some partisan voting around being annoyed at vegans), and that there were some interesting points there and some interesting discussion. But the fact that it prompted a bunch of rebuttals isn’t a particularly good argument that it should have gotten more karma – if a bad argument is popular and people need to write rebuttals, that’s not a point in its favor.
I think it’s legitimately not-deserving-high-upvotes because it makes a very strong claim about what people should do, based on some very flimsy core arguments.
Some of it seems bad to roughly the same degree you thought phones were bad, tho?
The part where 50% of them write basically the same essay seems more like the LLMs have an attractor state they funnel people towards.
It’s not surprising (and seems reasonable) for LLM chats that feature AI stuff to end up with LessWrong getting recommended. The surprising/alarming thing is how they generate the same confused, delusional story.
Huh, the crosspost is coming from Zvi’s wordpress blog which looks different. https://thezvi.wordpress.com/2025/07/08/balsa-update-springtime-in-dc/
But, I just copy-pasted the substack version in.
RobertM had made this table for another discussion on this topic. It looks like the actual average is maybe more like “8, as of last month”, although on a noticeable uptick.
You can see that the average used to be < 1.
I’m slightly confused about this, because the number of users we have to process each morning is consistently more like 30, and I feel like we reject more than half (and probably more than 3/4) for being LLM slop. But that might be conflating some clusters of users, as well as “it’s annoying to do this task, so we often put it off a bit and that results in them bunching up.” (Although it’s pretty common to see numbers more like 60.)
[edit: Robert reminds me this doesn’t include comments, which were another 80 last month]
Again you can look at https://www.lesswrong.com/moderation#rejected-posts to see the actual content and verify numbers/quality for yourself.
We get like 10-20 new users a day who write a post describing themselves as a case study of having discovered an emergent, recursive process while talking to LLMs. The writing generally looks AI-generated. The evidence usually looks like a sort of standard “prompt the LLM into roleplaying an emergently aware AI”.
It’d be kinda nice if there was a canonical post specifically talking them out of their delusional state.
If anyone feels like taking a stab at that, you can look at the Rejected Section (https://www.lesswrong.com/moderation#rejected-posts) to see what sort of stuff they usually write.
They felt to me like “comments that were theoretically fine, but they had the smell of ‘the first very slight drama-escalation that tends to lead to Demon Threads’”.
Mod note: I get the sense that some commenters here are bringing a kind of… naive political-partisanship background vibe? (Mostly not too overt, but it felt off enough that I felt the need to comment.) I don’t have a specific request, but make sure to read the LW Political Prerequisites sequence, and I recommend trying to steer towards “figure out useful new things”, or at least having the most productive version of the conversation you’re trying to have.
(that doesn’t mean there won’t/shouldn’t be major frame disagreements or political fights here, but, like, lean away from drama on the margin)
I think the original just also had very large paragraphs and not-actual-footnotes
I do sure wish that abstract was either Actually Short™, or broken into paragraphs. (I’m assuming you didn’t write it but it’s usually easy to find natural paragraph breaks on the authors’ behalf)
(hurray for thoughtful downvote explanations)
I don’t think this post is trying to hide Nate’s identity, he’s just using his longstanding LessWrong account. Evidence: his name’s on the book cover!
I think this is actually already part of the LessWrong-style-rationalist zeitgeist. Taste, aesthetics, focusing and belief reporting are some keywords to look at.
(I think this post also seems to not understand what LessWrong’s conception of rationality is about, although I’m not 100% sure what you’re assuming about it. Vlad’s comment seems like a good starting point for that)
He wrote a followup comment explaining here.
@benwr oh I guess I did very specifically include a longer timescale example, so, uh, whoops. I do think there are fairly different flavors to the shorter term and longer term ones.
mm, okay yeah the distinction of different-ways-to-cling-less seems pretty reasonable.
Nod, those all seem like good moves.
I’m sort of torn between two directions here:
On one hand, I actually didn’t really mean “buckle up” to be very specific in terms of what move comes next. The most important thing is recognizing “this is a hard problem, your easy-mode cognitive tools probably won’t work.”
(I think all the moves you list there are totally valid tools to bring to bear in that context, which are all more strategic than “just try the next intuitive thing”)
On the other hand… the OP does sure have a vibe about particular flavors of problem/solution, and it’s not an accident that I wrote a thing that resonates with me and that you feel wary of.
But, leaning into that… I’m a bit confused why the options you list here are “last resorts” as opposed to “the first thing you try once you notice the problem is hard”. Like, the airbender should be looking for a way for it to feel fun pretty early in the process. The “last resort” is whatever comes after all the tools that come more naturally to the airbender turn out not to work. (Which is in fact how Aang learned Earthbending.)
((notably, I think I spent most of my life more airbendery. And the past ~2 years of me focusing on techniques that involve annoying effort came from the non-annoying, non-brute-force-y techniques not solving the problems I wanted to solve.))
But I think the first hand is more the point – this is less about “the next steps will involve something annoying/power-through-y” and more “I should probably emotionally prepare for the possibility that the next steps will involve something annoying/power-through-y”
Also: the particular “buckle up” move I’m imagining is for things that are more like “1 to 16 hours of concentrated work”. For things that are like months or years of work, there’s some equivalent of “buckle up” but it’s enough of a different move I’d probably write a pretty different post about it.
(I had missed some of this stuff because I skimmed some of the post, which does update me on how bad it was. I think there is basically one interesting claim in the post: “bees are actually noticeably more cognitively interesting than you probably thought, and this should have some kind of implication worth thinking about”. I think I find that more valuable than Oliver does, but I’m not very confident about whether “one interesting point among a bunch of really bad argumentation” should be more like −2 to 3 karma or more like −10.)