“rationality” branding isn’t as good for keeping that front and center, especially compared to, say, the effective altruism meme
Perhaps a better branding would be “effective decision making” or “effective thought”?
As I’ve already explained, there’s a difficult problem here about how to be appropriately modest about our own rationality. When I say something, I never think it’s stupid; otherwise I wouldn’t say it. But at least I’m not so arrogant as to go around demanding that other people acknowledge my highly advanced rationality. I don’t demand that they accept “Chris isn’t saying anything stupid” as an axiom in order to engage with me.
I think this is the core of what you are disliking. Almost all of my reading on LW has been in the Sequences rather than the discussion areas, so I haven’t been well placed to notice anyone’s arrogance. But I’m saddened and a little surprised by your experience, because for me the result of reading the Sequences has been less trust that my own level of sanity is high. I’m significantly less certain of my correctness in any argument.
We know that knowing about biases doesn’t remove them, so instead of increasing our estimate of our own rationality, learning about them should correct that estimate downwards. This shouldn’t even cost us any pride, since we’re adjusting our estimates of everyone else’s sanity down by a similar amount. As a check that we’re doing this right, the result should be less time spent arguing and more time spent thinking about how we might be wrong and how to check our answers. Basically, it should remind us to use type 2 (deliberate) thinking whenever possible, and to seek effectiveness training for our type 1 (intuitive) thinking whenever it’s available.
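To make the arithmetic of that downward adjustment concrete, here’s a minimal sketch in Python. The names, starting estimates, and discount factor are all illustrative assumptions of mine, not figures from anything above; the point is only that a uniform discount lowers everyone’s absolute reliability while leaving the relative ordering untouched.

```python
# Minimal sketch: a uniform downward adjustment to everyone's estimated
# reliability after learning that knowing about biases doesn't remove them.
# All numbers here are made up for illustration.

priors = {
    "me": 0.90,                # hypothetical prior: how often my judgment is right
    "typical LW reader": 0.85,
    "typical person": 0.75,
}

DISCOUNT = 0.8  # assumed strength of the "bias knowledge doesn't debias" evidence

posteriors = {who: p * DISCOUNT for who, p in priors.items()}

for who in priors:
    print(f"{who}: {priors[who]:.2f} -> {posteriors[who]:.2f}")

# The ordering is unchanged, so the update costs no relative pride;
# it only lowers everyone's absolute estimate, mine included.
```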