I’m unlucky enough to know a few postmodernists, and what I find most striking about them is that they try very hard to stay out of conflict with each other.
That makes sense: when they do argue, they have no clear method for assessing who (if anybody) is in the right, so the arguments are unproductive, frustrating, and can get quite nasty.
So I don’t think we’re too similar to them. That said, the obvious way to check our sanity would be to have outsiders look at us. In order to do that, we’d probably have to convince outsiders to give a fuck about us.
Speaking as an outsider, here are some criticisms. I’ve read all of HPMOR and some of the sequences, attended a couple of meetups, and am signed up for cryonics. But I have little interest in reading more of the sequences and no interest in more in-person meetings.
Rationality doesn’t guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e., say what “is”. But deciding what to do in the real world requires non-rational value judgments to make any “should” statements. (Or you could not believe in free will. But most LWers don’t live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won’t; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it’s not worth spending 25% of your time planning to shave 5% off your driving time. In other words, LW tends to conflate rationality and intelligence.
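To put rough numbers on the car-trip analogy (a minimal sketch; the two-hour trip length and the way the percentages are applied are my own illustrative assumptions, not anything stated above):

```python
# Illustrative planning-vs-driving trade-off, using hypothetical numbers.
drive_hours = 2.0                     # assumed trip length
time_saved = 0.05 * drive_hours       # a 5% faster route saves about 6 minutes
planning_cost = 0.25 * drive_hours    # 25% of the trip budget spent planning = 30 minutes

net_minutes = (time_saved - planning_cost) * 60
print(f"saved {time_saved * 60:.0f} min, spent {planning_cost * 60:.0f} min planning, "
      f"net {net_minutes:+.0f} min")  # -> net -24 min: the optimization costs more than it saves
```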
In particular, AI risk is overstated. There are a bunch of existential threats (asteroids, nukes, pollution, unknown unknowns, etc.), and it’s not at all clear whether general AI is a significant one. It’s also highly doubtful that the best way to address this threat is writing speculative research papers: in my work as an engineer I have found that untested theories are usually wrong for unexpected reasons, and that it’s necessary to build and test prototypes in the real world. My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don’t know enough about manufacturing automation to be sure.
LW has a cult-like social structure. The LW meetups (or at least the ones I experienced) are very open to new people. Learning the keywords and some of the cached thoughts of the LW community results in a bunch of new friends and activities to do. However, involvement in LW pulls people away from non-LWers. One way this happens is by encouraging contempt for less-rational Normals. I imagine the rationality “training camps” do this to an even greater extent. LW recruiting (HPMOR, meetup locations near major universities) appears to target socially awkward intellectuals (including me) who are eager for new friends and a “high-status” organization to be part of, and who may not have many existing social ties locally.
Many LWers are not very rational. A lot of LW is self-help. Self-help movements typically identify common problems, blame them on some X, and sell a long plan that never quite delivers the remedy (~X). For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because “is” cannot imply “should”). Rationalists tend to have strong value judgments embedded in their opinions, and they don’t realize that these judgments are non-rational.
LW membership would make me worse off. Though LW membership is an OK choice for many people needing a community (joining a service organization could be an equally good choice), for many others it is less valuable than other activities. I’m struggling to become less socially awkward, more conventionally successful, and more willing to do what I enjoy rather than what I “should” do. LW meetup attendance would work against me in all of these areas. LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW, and the LW community may or may not support their continued success. For example, it may encourage them, with only genuine positive intent, to drop out of their PhD program, go to “training camps” for a few months, then try and fail to start a startup, increasing the likelihood that they’ll go back and work for LW at below-market rates and earn less money for the rest of their lives for lack of the PhD from the top-10 school. Ideally, LW/Rationality would help people from average or inferior backgrounds achieve success more rapidly than the conventional path of being a good student, going to grad school, and gaining work experience; but LW, though well-intentioned and focused on helping its members, doesn’t actually create better outcomes for them.
I desperately want to know the truth, and especially want to beat aging so I can live long enough to find out what is really going on. HPMOR is outstanding (because I don’t mind Harry’s narcissism) and LW is fun to read, but that’s as far as I want to get involved. Unless, that is, there’s someone here who has experience programming vision-guided assembly-line robots and is looking for a side project with world-optimization potential.
If I may focus on just one of your critiques: the point about the cult-like social structure. I’m not sure whether it actually produces a cult effect on LW or not, but the general idea both intrigues and terrifies me.
Especially the “contempt for less-rational Normals” part. I haven’t noticed that in myself, but the possibility* of it happening by itself is… interesting, given what I know of LW. I have almost never seen anyone on LW really condemn anyone specific as “irrational” (except maybe a couple of celebrities), or do anything that amounts to actively urging others to sever ties, but I have this image that individuals in LW could often end up severing ties with people they see as less rational, without anybody actually intending it or even realizing it.
*Or at least, my views of the people I perceive as less rational are pretty much unchanged from before LW, which is the important part, especially when it comes to social interaction rather than discussing serious issues. It’s possible I’m unusual compared to some nerds on this; I tend not to care too much whether the people I interact with are especially smart, or even whether our interactions are anything but vapid nonsense, as long as I enjoy interacting with them.