I debated with myself about writing a detailed reply, since I don’t want to come across as some brainwashed LW fanboi. Then I realized that was a stupid reason not to post. Just to clarify where I’m coming from:
I’m in more or less the same position as you are. The main differences are that I’ve read pretty much all of the Sequences (and am slowly rereading them) and that I haven’t signed up for cryonics. Maybe those even out. I think we can say that our positions on the LW/non-LW scale are pretty similar.
And yet my experience has been almost the complete opposite of yours. I don’t like point-by-point responses on this sort of thing, but to respond properly and lay out my experiences, I’m going to have to do one.
Rationality doesn’t guarantee correctness.
I’m not going to spend much time on this one, seeing as how pretty much everyone else commented on this part of your post.
Some short points, though:
Given some data, rational thinking can get at the facts accurately, i.e. say what “is.” But deciding what to do in the real world requires non-rational value judgments to make any “should” statements. This is covered in a part of the Sequences you probably haven’t read. I generally recommend “Three Worlds Collide” to people struggling with this distinction, but I haven’t gotten any feedback on how useful that is.
Rationality can help you make “should”-statements, if you know what your preferences are. It helps you optimize towards your preferences.
When making a trip by car, it’s not worth spending 25% of your time planning to shave off 5% of your time driving.
I believe the Sequences give the example that to be good at baseball, one shouldn’t calculate the trajectory of the ball; one should just use the intuitive “ball-catching” parts of the brain and train those. While overanalyzing things seems to be a bit of a hobby for the aspiring rationalist community, if you think they’re the sort of people who will spend 25% of their time to shave off 5% of their driving time, you’re simply wrong about who’s in that particular community.
LW tends to conflate rationality and intelligence.
This is actually a completely different issue. One worth addressing, but not as part of “rationality doesn’t guarantee correctness.”
In particular, AI risk is overstated
I’m not the best suited to answer this, and it mostly comes down to your estimate of that particular risk. As ChristianKl points out, a big chunk of this community doesn’t even think Unfriendly AGI is currently the biggest risk to humanity.
What I will say is that if AGI is possible (which I think it is), then UFAI is a risk. And since Friendliness is likely to be as hard as actually solving AGI, it’s good that the groundwork is being laid before AGI becomes a reality. At least, that’s how I see it. I’d rather have some people working on that issue than none at all. Especially if the people working for MIRI are better suited to working on FAI than on another existential risk.
LW has a cult-like social structure
No more than any other community. Everything you say in that part could be applied to the time I got really into Magic: The Gathering.
I don’t think Less Wrong targets “socially awkward intellectuals” so much as it was founded by socially awkward intellectuals, and socially awkward intellectuals are more likely to find the presented material interesting.
However, involvement in LW pulls people away from non-LWers.
This has, in my case, not been true. My relationships with my close friends haven’t changed one bit because of Less Wrong or the surrounding community, nor have my other personal relationships. If anything, Less Wrong has made me more likely to meet new people or do things with people I don’t have a habit of doing things with. Less Wrong showed me that I needed a community to support me (a need I didn’t consciously realize I had before), and HPMOR taught me a much-needed lesson about passing up on opportunities.
For the sake of honesty and completeness, I must say that I do very much enjoy the company of aspiring rationalists, both in meatspace at meetups and in cyberspace (through various channels, mostly Reddit, Tumblr, and Skype). Fact of the matter is, you can talk about different things with aspiring rationalists. The inferential distances are smaller on some subjects, just like how the inferential distances about the intricacies of Planeswalkers and magic are smaller with my Magic: The Gathering friends.
Many LWers are not very rational.
This is only sorta true. Humans in general aren’t very rational. Knowing this gets you part of the way. Reading Influence: Science and Practice or Thinking, Fast and Slow won’t turn you into a god, but they can help you notice some of the mistakes you’re making. And actually avoiding those mistakes remains hard for all but the most orthodox aspiring rationalists. I keep using “aspiring rationalists” because I think that sums it up: the Less Wrong-sphere just strives to do better than the default in both epistemic and instrumental rationality. I can’t think of anyone I’ve met (online or off) who believes that “perfect rationality” is a goal mere humans can attain.
And it’s hard to measure degrees of rationality. Ideally, LWers should be more rational than average, but you can’t quite measure that, can you? My experience is that aspiring rationalists at least put greater effort into reaching their goals.
For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality
Rationality is a tool, not a goal. And the best interventions in my life have been shorter-term: getting more exercise, using HabitRPG, being aware of my preferences, Ask, Tell, and Guess culture, Tsuyoku Naritai, spaced repetition software… Those are the first things that come to mind that I use regularly, that actually improve my life, and that help me reach my goals.
And as anecdotal evidence: I once put it to the Skype group of rationalists I converse with that every time I had no money, I felt like I was a bad rationalist, since I wasn’t “winning.” Not a single one blamed it on a Lack of Rationality.
Rationalists tend to have strong value judgments embedded in their opinions, and they don’t realize that these judgments are irrational.
If you want to understand that behavior, I encourage you to read the Sequences on morality. I could try to explain it, but I don’t think I can do it justice. I generally hate the “just read the Sequences”-advice, but here I think it’s applicable.
LW membership would make me worse off.
This is where I disagree the most. (Well, not with whether it would make you worse off; I won’t judge that.) Less Wrong has most definitely improved my life. The suggestion to use HabitRPG or LeechBlock, the stimulating conversations and board games I have at the meetup each month, the lessons I learned here that I could apply in my job, discovering my sexual orientation, having new friends, picking up a free concert, being able to comfort my girlfriend more effectively, being better able to figure out which things are true, doing more social things… Those are just the things I can think of off the top of my head at 3:30 AM that Less Wrong allowed me to do.
I don’t intend to convince you to become more active on Less Wrong. Hell, I’m not all that active on Less Wrong, but it has changed my life for the better in a way that a different community wouldn’t have done.
Ideally, LW/Rationality would help people from average or inferior backgrounds achieve more rapid success than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn’t actually create better outcomes for them.
It does, at least for me, and I seriously doubt that I’m the only one. I haven’t reached a successful career (yet, working on that), but my life is more successful in other areas thanks in part to Less Wrong. (And my limited career-related successes are, in part, attributable to Less Wrong.) I can’t quantify how much this success can be attributed to LW, but that’s okay, I think. I’m reasonably certain that it played a significant part. If you have a way to measure this, I’ll measure it.
“Art of Rationality” is an oxymoron.
I like that phrase because it’s a reminder (A) that humans aren’t perfectly rational and require practice to become better rationalists and (B) that rationality is a thing you need to do constantly. I like this SSC post as an explanation.
Thanks for the detailed reply!
Based on this feedback, I think my criticisms reflect mostly on my fit with the LWers I happened to meet, and on my unreasonably high standards for a largely informal group.
Upvoted for updating.
One could reasonably expect significantly less.