What I expected from this site: A LessWrong review
LessWrong is the central discussion space for the Rationalist subculture. The Rationalists write extensively about what they expect from the world, so I in turn have expectations of them. Each point below is my expectation versus what I actually see.
Track Records and Accountability
I imagine systems where you could hover over someone’s username to see their prediction track record. Perhaps there would be prediction market participation stats or calibration scores displayed prominently. High status could be tied to demonstrated good judgment through special user flair for accurate forecasters or annual prediction competitions.
I do not see this. There is karma, which is costly to earn but not that hard to game by writing lots of comments and articles. Likewise there is the yearly review (currently running), which looks back on older posts and judges whether they have stood the test of time.
Neither of these is a clear track record for me. I chatted to Oliver Habryka and he said, “I think contributing considerations is usually much more valuable than making predictions.” As I understand it, he sees getting top contributors to build forecasting track records as less valuable than careful writing and the reading of such writing. I see forecasting as a useful focus and a clearer way of ranking individuals.
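To make “calibration scores” concrete, here is a minimal sketch of the kind of number I imagine seeing when hovering over a username. It is entirely my own illustration, not anything LessWrong implements: it assumes predictions are stored as probability/outcome pairs and uses the Brier score, though log scores or full calibration curves would do just as well.

```python
# A minimal sketch of a forecasting track-record metric, assuming each
# prediction is stored as (stated probability, actual outcome).
# This is my own illustration, not anything LessWrong implements.

def brier_score(predictions: list[tuple[float, bool]]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    0.0 is perfect; always saying 50% scores 0.25; lower is better.
    """
    return sum((p - outcome) ** 2 for p, outcome in predictions) / len(predictions)

# A hypothetical forecaster's history:
history = [(0.9, True), (0.8, True), (0.7, False), (0.95, True)]
print(f"Brier score: {brier_score(history):.3f}")
```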
Where do you fall?
Anonymous Ranking Mechanisms
I expected LessWrong to have a system for anonymously upvoting and downvoting replies. Users could signal disagreement or point out errors without social cost. Sometimes it feels very expensive to write a disagreeing comment, but an upvote or downvote is very cheap.
This is one area where reality has matched or exceeded expectations. The LessWrong implementation—separating agreement voting from quality voting—is quite elegant and effective. The emojis that you can add to posts are great too. I find LessWrong a pleasure to comment on in this regard, and often wish Twitter or Substack had similar features.
Experimental Culture
My vision of LessWrong includes a thriving culture of hands-on experimentation and empirical investigation: regular posts about chemistry experiments, engineering projects, and systematic data collection about important life decisions. Given the trust within the community, coordinated efforts to gather useful data through surveys about career outcomes, child-raising approaches, or life satisfaction in different cities seem natural.
Instead of the hands-on experimentation I expected, what I see is a culture heavily focused on long-form theoretical posts. People write extensive pieces filtered through their own models and frameworks. While these can be valuable, they seem to crowd out the more empirical, experimental content I expected.
I think it would be healthier for rationalists to do experiments at home, write short posts about personal experiences, and make falsifiable judgements about geopolitics.
There are some notable successes here, like that group trying to do their own vaccine work, but that feels like the minority to me.
Consensus-Building Tools
The platform could feature sophisticated argument-mapping software[1] and tools for synthesising perspectives across multiple posts. Visual representations of debate structures, methods to track how positions evolve over time, and systems for identifying key points of agreement and disagreement would be standard features. These tools would facilitate building toward shared understanding rather than just collecting individual perspectives.
At present most discussions remain in traditional comment threads, making it difficult to track the evolution of ideas or find points of consensus across multiple posts. The wiki is poorly populated and not good for understanding the flow of overall discussion.
Dialogues were an interesting test but they don’t seem to have worked.
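To show what I mean by argument mapping, here is a toy sketch of the underlying data structure, entirely my own illustration: claims as nodes, with explicit support and attack relations that a tool could render visually or query for points of disagreement.

```python
# A toy sketch of an argument map, purely illustrative: claims are
# nodes, and each relation either supports or attacks another claim.
# Real tools would also need versioning, authorship, and evidence links.

from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supports: list["Claim"] = field(default_factory=list)  # claims backing this one
    attacks: list["Claim"] = field(default_factory=list)   # claims undermining it

root = Claim("LessWrong should build forecasting track records")
pro = Claim("Track records tie status to demonstrated judgement")
con = Claim("Contributing considerations matters more than predicting")
root.supports.append(pro)
root.attacks.append(con)

# Walking the structure gives you the skeleton of the debate:
for claim in (pro, con):
    relation = "supports" if claim in root.supports else "attacks"
    print(f"{claim.text!r} {relation} {root.text!r}")
```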
AI, Crypto, and X-Risk Content
The platform could host comprehensive coverage of existential risks, emerging technologies, and complex coordination challenges. This would encompass detailed analysis of AI development trajectories, cryptocurrency governance mechanisms, and other technological risks. Regular updates and careful tracking of developments in these fields would keep the community informed and engaged.
This is another area where reality somewhat matches expectations. There’s extensive discussion of AI safety, existential risks, and emerging technologies. However, the focus can sometimes feel narrow, with certain topics (like AI alignment) receiving intense attention while others (like biosecurity or environmental risks) get less coverage. I would like to see more discussion of geopolitics.
Unexpected Successes
While I’ve focused on gaps between expectations and reality, there are also areas where the rationalist community has shown surprising strengths I hadn’t anticipated:
Moderation
In some ways, moderation is much better than I expected. I disagree with LessWrong moderation decisions more rarely than with the EA Forum's, and LessWrong seems to be consumed by drama less often. Often when drama is consuming the EA Forum, LessWrong seems… fine. It is neither annoyingly politically correct, nor endlessly edgy, nor full of racism. This is an impressive achievement.
In particular, I like rate limiting as a tool. If people want to stay engaged they can, but there is an incentive for them to fix their behaviour, rather than disappear and then come back when the ban is over, as bad as ever.
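For what it's worth, here is a toy sketch of how a comment rate limit like this might work, my own illustration rather than LessWrong's actual mechanism: a rolling window caps comments per day, so the user stays able to participate but is nudged to make each comment count.

```python
# A toy sketch of rate limiting as a moderation tool (illustrative only,
# not LessWrong's actual mechanism): allow at most MAX_COMMENTS per
# rolling window, instead of banning outright.

import time
from collections import defaultdict, deque

WINDOW_SECONDS = 24 * 60 * 60  # one day
MAX_COMMENTS = 3               # hypothetical per-day allowance

comment_times: dict[str, deque] = defaultdict(deque)

def may_comment(user: str, now: float | None = None) -> bool:
    now = time.time() if now is None else now
    timestamps = comment_times[user]
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()   # drop comments that fell out of the window
    if len(timestamps) >= MAX_COMMENTS:
        return False           # over the limit: wait it out, don't vanish
    timestamps.append(now)
    return True
```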
Community Space Management
The Lighthaven campus has been run remarkably well, which wasn't something I would have predicted. I didn't think there was any particular reason to expect rationalists to excel at managing physical spaces, yet they've created a lovely conference space.
Props!
Platform Aesthetics
LessWrong’s design aesthetic is also not something I expected. I like the layout, the effort to keep the screen uncluttered, and the AI art. The Best of LessWrong page is quite beautiful.
Why These Gaps Matter
Rationalists are big on meaning what they say. If they mean what they say in the Sequences, I would like more track records, more contact with reality in the things the community writes about, and better ways of having discussions than long-form posts.
What do you think?
Have I judged fairly? What have I missed or got wrong? Why is LessWrong like this, do you think?
[1] Some people claim I am obsessed with argument mapping. I am not sure they are wrong. Somehow it seems so obvious to me that it’s a thing I want. How do people disagree and where does their disagreement flow from?