Why I Am Not a Rationalist, or, why several of my friends warned me that this is a cult
A common question here is how the LW community can grow more rapidly. Another is why seemingly rational people choose not to participate.
I’ve read all of HPMOR and some of the sequences, attended a couple of meetups, am signed up for cryonics, and post here occasionally. But that’s as far as I go. In this post, I try to explain clearly why I don’t participate more, and why some of my friends don’t participate at all and have warned me not to go further.
Rationality doesn’t guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what “is”. But deciding what to do in the real world requires non-rational value judgments to make any “should” statements. (Alternatively, you could simply not believe in free will, but most LWers don’t live that way.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won’t; a better approach is to gather more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it’s not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.
In particular, AI risk is overstated. There are many existential threats (asteroids, nukes, pollution, unknown unknowns, etc.), and it’s not at all clear whether general AI is a significant one. It’s also highly doubtful that the best way to address this threat is to write speculative research papers: in my work as an engineer, I have found that untested theories are usually wrong for unexpected reasons, and that it’s necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and to use the surplus income they generate to brute-force research problems, but I don’t know enough about manufacturing automation to be sure.
LW has a cult-like social structure. The LW meetups (or at least the ones I experienced) are very open to new people, and learning the keywords and some of the cached thoughts of the LW community quickly yields a bunch of new friends and activities. However, involvement in LW pulls people away from non-LWers, partly by encouraging contempt for less-rational Normals. I imagine the rationality “training camps” do this to an even greater extent. LW recruiting (HPMOR, meetup locations near major universities) appears to target socially awkward intellectuals (including me) who are eager for new friends and a “high-status” organization to belong to, and who may not have many existing social ties locally.
Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on some cause (X), and sell a long program that never quite delivers the promised remedy (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long program of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because “is” cannot imply “should”). Rationalists tend to have strong value judgments embedded in their opinions, and they don’t realize that these judgments are not themselves rational.
LW membership would make me worse off. Though LW membership is an OK choice for many people who need a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I’m struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I “should” do, and LW meetup attendance would work against me in all three areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success (it may, with genuinely good intentions, encourage them to spend a lot of time studying Rationality instead of more specific skills). Ideally, LW/Rationality would help people from average or inferior backgrounds achieve success more rapidly than the conventional path of being a good student, going to grad school, and gaining work experience, but LW, though well-intentioned and focused on helping its members, doesn’t actually create better outcomes for them.
“Art of Rationality” is an oxymoron. Art follows (subjective) aesthetic principles; rationality follows (objective) evidence.
I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don’t mind Harry’s narcissism) and LW is fun to read, but that’s as far as I want to get involved. Unless, that is, there’s someone here with experience programming vision-guided assembly-line robots who is looking for a side project with world-optimization potential.