Thanks for sharing! I appreciate the feedback, but because it’s important to distinguish between “the problem is that you are X” and “the problem is that you look like you are X,” I think it’s worth hashing out whether some of these points are true.
The sequences and list of top posts on LW are mostly about AI risk
Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn’t worth funding. (His views have since changed; the document explaining why is, I think, worth reading in full.)
And the Sequences themselves are rarely if ever directly about AI risk; they’re more often about the precursors to the AI risk arguments. If someone thinks that intelligence and morality are intrinsically linked, instead of telling them “no, they’re different” it’s easier to talk about what intelligence is in detail and talk about what morality is in detail and then they say “oh yeah, those are different.” And if you’re just curious about intelligence and morality, then you still end up with a crisper model than you started with!
which to me seems quite tangential to the attempt at modern rekindling of the Western tradition of rational thought
I think one of the reasons I consider the Sequences so successful as a work of philosophy is because it keeps coming back to the question of “do I understand this piece of mental machinery well enough to program it?”, which is a live question mostly because one cares about AI. (Otherwise, one might pick other standards for whether or not a debate is settled, or how to judge various approaches to ideas.)
But I ask you to reconsider if LW is actually the healthiest part of the rationalist community, or if the more general cause of “advancement of more rational discourse in public life” would be better served by something else (for example, a number of semi-related communities such as blogs and forums and meatspace communities in academia). Not all rationalism needs to be LW-style rationalism.
I think everyone is agreed about the last bit; woe betide the movement that refuses to have friends and allies, insisting on only adherents.
For the first half, I think answering this requires becoming more precise about ‘healthiest’. On the one hand, LW’s reputation has a lot of black spots, and those basically can’t be washed off; but on the other hand, it doesn’t seem like reputation strength is the most important thing to optimize for. That is, having a place where people are expected to have a certain level of intellectual maturity that grows over time (as the number of things that are discovered and brought into the LW consensus grows) seems like the sort of thing that is very difficult to do with a number of semi-related communities.
Which list of top posts are you thinking of? If you look at the most-upvoted posts on LW, the only one in the top ten about AI risk is Holden Karnofsky explaining, in 2012, why he thought the Singularity Institute wasn’t worth funding.
I grant that I was talking from memory; the last time I read the LW material was years ago. The MIRI and CFAR logos up there did not help.