Thanks for being bold enough to share your dissenting views. I’m voting you up just for that, given the reasoning I outline here.
I think you did a good job of detaching the ideas of LW that you think are valuable, adopting them, and ditching the others. Kudos. Overall, I’m not sure about the usefulness of debating the goodness or badness of “LW” as a single construct. It seems more useful to discuss specific ideas and make specific criticisms. For example, I think lukeprog offered a good specific criticism of LW thinking/social norms here. In general, if people take the time to really think clearly and articulate their criticisms, I consider that extremely valuable. On the opposite end of the spectrum, if someone says something like “LW seems weird, and weird things make me feel uncomfortable”, that is not as valuable.
I’ll offer a specific criticism: I think we should de-emphasize the sequences in the LW introductory material (FAQ, homepage, about page). (Yes, I was the one who wrote most of the LW introductory material, but I was trying to capture the consensus of LW at the time I wrote it, and I don’t want to change it without the change being a consensus decision.) In my opinion, the sequences are a lot longer than they need to be, not especially information-dense, and also hard to update (there have been controversies over whether some point or another in the Sequences is correct, but those controversies never get appended to the Sequences).
Rationality doesn’t guarantee correctness. Given some data, rational thinking can get to the facts accurately, i.e. say what “is”. But, deciding what to do in the real world requires non-rational value judgments to make any “should” statements. (Or, you could not believe in free will. But most LWers don’t live like that.) Additionally, huge errors are possible when reasoning beyond limited data. Many LWers seem to assume that being as rational as possible will solve all their life problems. It usually won’t; instead, a better choice is to find more real-world data about outcomes for different life paths, pick a path (quickly, given the time cost of reflecting), and get on with getting things done. When making a trip by car, it’s not worth spending 25% of your time planning to shave off 5% of your time driving. In other words, LW tends to conflate rationality and intelligence.
I’m having a hard time forming a single coherent argument out of this paragraph. Yep, value judgements are important. I don’t think anyone on Less Wrong denies this. Yep, it’s hard to extrapolate beyond limited data. Is there a particular LW post that advocates extrapolating based on limited data? I haven’t seen one. If so, that sounds like a problem with the post, not with LW in general. Yes, learning from real-world data is great. I think LW does a decent job of this; we are frequently citing studies. Yes, it’s possible to overthink things, and maybe LW does this. It would be especially useful to point to a specific instance where you think it happened.
I have found in my work as an engineer that untested theories are usually wrong for unexpected reasons, and it’s necessary to build and test prototypes in the real world.
Makes sense. In my work as a software developer, I’ve found that it’s useful to think for a bit about what I’m going to program before I program it. My understanding is that mathematicians frequently prove theorems, etc. without testing them, and this is considered useful. So to the extent that AI is like programming/math, highly theoretical work may be useful.
My strong suspicion is that the best way to reduce existential risk is to build (non-nanotech) self-replicating robots using existing technology and online ordering of materials, and use the surplus income generated to brute-force research problems, but I don’t know enough about manufacturing automation to be sure.
This seems like it deserves its own Open Thread comment/post if you want to explain it in detail. (I assume you have arguments for this idea as opposed to having it pop into your head fully formed :])
One way this happens is by encouraging contempt for less-rational Normals.
I agree this is a problem.
I imagine the rationality “training camps” do this to an even greater extent.
I went to a 4-day CFAR workshop. I found the workshop disappointing overall (for reasons that probably don’t apply to other people), but I didn’t see the “contempt for less-rational Normals” you describe present at the workshop. There were a decent number of LW-naive folks there, and they didn’t seem to be treated differently. Based on talking to CFAR employees, they are wise to some of the problems you describe and are actively trying to fight them.
LW recruiting (hpmor, meetup locations near major universities) appears to target socially awkward intellectuals (incl. me) who are eager for new friends and a “high-status” organization to be part of, and who may not have many existing social ties locally.
Well sure, I might as well say that Comic-Con or Magic: The Gathering attracts socially awkward people without many existing social ties. “LW recruiting” is not quite as strategic as you make it out to be (I’m speaking as someone who knows most of the CFAR and MIRI employees, goes to lots of LWer parties in the Bay Area, used to be housemates with lukeprog, etc.). I’m not saying it’s not a thing… after the success of HPMOR, there have been efforts to capitalize on it more fully. To the extent that specific types of people are “targeted”, I’d say that intelligence is the #1 attribute. My guess is that if you were to poll people at MIRI and CFAR, and other high-status Bay Area LW people like the South Bay meetup organizers, they would if anything have a strong preference for community members who are socially skilled and well-connected over socially awkward folks.
For the Rationality movement, the problems (sadness! failure! future extinction!) are blamed on a Lack of Rationality, and the long plan of reading the sequences, attending meetups, etc. never achieves the impossible goal of Rationality (impossible because “is” cannot imply “should”).
Rationality seems like a pretty vague “solution” prescription. To the extent that there exists a hypothetical “LW consensus” on this topic, I think it would be that going to a CFAR workshop would solve these problems more effectively than reading the sequences, and a CFAR workshop is not much like reading the sequences.
LW members who are conventionally successful (e.g. PhD students at top-10 universities) typically became so before learning about LW
Well, I think I have become substantially more successful during the time I’ve been a member of the LW community (got into a prestigious university and am now working at a high-paying job), and I think I can attribute some of that success to LW (my first two internships were at startups I found through the Bay Area LW network, and I think my mental health improved from making friends who think the same way I do). But that’s just an anecdote.
“Art of Rationality” is an oxymoron.
Agreed. One could level similar criticisms at books with titles like The Art of Electronics or The Art of Computer Programming. But I think Eliezer made a mistake in trying to make rationality seem kind of cool and deep and wise in order to get people interested in it. (I think I remember him writing this somewhere; I can’t remember where.)

Note: Here’s Yvain on why the sequences are great, to provide some counterpoint to my criticism above.
Where available, I would emphasize the original source material over the sequences’ rehash of it.
This would greatly lower the Phyg Phactor, limit in-group jargon, better signal to outsiders who also value that source material, and possibly create ties to other existing communities.

Needed: LW wiki translations of LW jargon into the proper terms in philosophy. (Probably on the existing jargon page.)
I strongly disagree with this. I don’t care about the cult factor: The sequences are vastly more readable than the original sources. Almost every time I’ve tried to read stuff a sequence post is based on, I’ve found it boring and given up. The original sources already exist and aren’t attracting communities of new leaders who want to talk about and do stuff based on them! We don’t need to add to that niche. We are in a different niche.

Seconded. I think HPMOR and the Sequences are a better introduction to rationality than the primary texts would be.
Almost every time I’ve tried to read stuff a sequence post is based on, I’ve found it boring and given up.
I didn’t. I’ve read them all. Don’t know how someone finds Jaynes “boring”, but different strokes, etc.
The original sources already exist and aren’t attracting communities of new leaders who want to talk about and do stuff based on them!
Phyg +1
Jaynes, Pearl, Kahneman, and Korzybski had followings long before LW and the sequences existed. Korzybski’s Institute for General Semantics has been around since 1938, and was fairly influential, intellectually and culturally. They actually have some pretty good summary material, if reading Korzybski isn’t your thing (and I can understand that one, as he was a tiresome windbag).
If you like the sequences, great, read them. I think you’re missing out on a lot if you don’t read the originals.
Simply as an outreach method, listing the various influences would pique more interest than “We’ve got a smart guy here who wrote a lot of articles! Come read them!” The sequences aren’t the primary outreach advantage here—HPMOR is. Much like Rand’s novels are for her.
My outreach method is usually not to do that, but to link to a specific article about whatever we happened to be talking about, which is a lot faster than saying “Here, read a textbook on probability” or “Look at this Tversky and Kahneman study!”

Then again, I don’t do a ton of LW outreach.
We could direct people to Wikipedia’s list of cognitive biases (putting effort into improving the articles as appropriate and getting a few people to add the articles to their Wikipedia watchlists). Improving Wikipedia articles has the positive externality of helping everyone who reads them (of whom the LW-curious will make up a relatively small fraction).
I think the ideal way to present rationality might be a diagnostic test that lets you know where your rationality weaknesses are and how to improve them, but I’m not sure if this is doable/practical.