The key is that HP&TMoR is read in “fun time”, while I believe most people see LW time as “work towards self-improvement” time. Ironic, but true (for me and the friends I’ve polled, at least).
Why I read Less Wrong:
A. 23% Already read all the good web-comics today
B. 22% To discuss important ideas that aren’t being discussed anywhere else, e.g., Friendly AI
C. 20% To show off, gain some name recognition, and meet interesting people
D. 19% To cooperate with others in analyzing rationality, behavior, ethics, and the future in a more rigorous way than is being done elsewhere
E. 10% To observe arguments between smart people, and to get a sense of how smartness correlates with agreement, with making stupid errors, and with the size and frequency of blind spots, or how it generalizes across domains
F. 6% Self-improvement
Item D is the most important to me, but Less Wrong has not been very successful at it. EY rarely gives the coveted green button to the posts I think are important along those lines, nor does the LW readership vote them up highly.
I think the most important purpose LW could serve would be to critically analyze the ideas EY has put forth, and to discuss different possible paths to a better future. But AFAIK, EY has not given the green button to any post that looks at his ideas critically, and most readers never see posts that don’t get the green button. So LW doesn’t serve that purpose well.
For me, self-improvement from LW does not usually come from the akrasia material; pjeby’s website is more interesting for that, at least from what I’ve read so far. (I read “Everything I Needed To Know About Life, I Learned From Supervillains” yesterday, and recommend it.) It comes more from finding specific errors in my reasoning or holes in my understanding, and from calibrating.
EY’s sequences and early posts are very different from the usual self-improvement material. I think people would benefit more from reading the sequences than from staying current on all the new posts (yet I do the latter instead of the former). I know people aren’t reading them, because some of his good posts (old ones, backdated to before LW existed; maybe they were imported from OB) have only a couple of upvotes.
Are there some available statistics about that?
Not that I know of. You can see that green-button posts have higher scores than white-button posts, but you would expect that in any case, since they’re supposed to be better posts.
I believe most people don’t see the white-button posts because:
a) If you go to lesswrong.com, you don’t see the white-button posts.
b) If you’re reading Overcoming Bias, it only links to the green-button posts.
c) When a post gets a green button, the rate of upvoting increases dramatically, even if the article is several days old.
That sounds reasonable. When I go to lesswrong.com, I usually look at “Recent Posts” first and don’t care about the button color. But my behavior is probably not typical. (I hadn’t even known that the green button is awarded by EY.)
When I was initially reading through some of the sequences, I didn’t upvote them at all, and I still don’t now.
Initially I didn’t notice the voting mechanism at all, because I hadn’t yet created an account. After I registered, I didn’t bother, because EY already has a jillion points, and because those posts had already been green-lit, so registering my approval wouldn’t have much effect.
I’ve done the same. When I stumbled upon Less Wrong, it was like getting sucked into a vortex: I just kept reading and reading until I essentially ran out of things to read. I didn’t care about voting or commenting, just reading. Then I realized there were other people here besides Eliezer, and other posts too, and that the community is pretty interesting, so I decided to join in.
Well, here we come to the gap between “the stated intention of Less Wrong” and “what people actually use it for”. This is surely a big part of the resolution of the gap I pointed out. If people are not using LW to increase their own rationality, then the site should be clearer about that. Perhaps I misread “refining the art of human rationality”: I assumed the goal was to refine it by making humans more rational. But if the goal is just to sit around and have a delightful intellectual wankfest about the deep nature of rationality, in isolation from how people actually use their brains and abilities in real life, then the site is being consistent :).
But this doesn’t seem consistent with Eliezer’s claim that “rationalists win”. I’ve seen enough of life to know that winners spend their time building their many different kinds of muscles, not chatting on web forums.
I’m currently reading the old sequences, but I felt discouraged from commenting, since I figured no one would respond anyway.
You’ve got a chance of getting replies from those of us who follow Recent Comments.
Recent comments on old sequences have actually been getting pretty interesting.