Jacobian and Scott, you should do the adversarial collaboration thing!
I suspect that the debate about rationalist self-improvement has a problem similar to the “nature vs nurture” debates: it depends on the population. Take a population of literal clones, and all their differences can be explained by environment. Take a population that lives in a uniform environment, and their differences will be mostly genetic.
Similarly, if you already lived in a subculture that gave you the good answers and good habits, all rationality can do is give you a better justification for what you already know, and perhaps prepare you somewhat for situations where you meet something you don’t know yet. On the other hand, if your environment got many things wrong, and you already kinda suspect it but the peer pressure is strong, learning the right answers and finding the people who accept them is a powerful change.
Scott makes a good point that the benefits of rationality not only fail to live up to the expectations set by rationalist fictional evidence, but often seem invisible. (It’s not just that I fail to become Anasûrimbor Kellhus and change the world; I mostly fail to overcome my own procrastination. And even if I felt an improvement initially, my life seven years after finding Less Wrong doesn’t seem visibly different from the outside.) Jacobian makes a good point that in a world of already great individual differences, a significant improvement for one person is still invisible at the large scale. (Doubling my income and living in greater peace is a huge thing for me, but people counting the number of successful startups will be unimpressed.)
It is also difficult to see how x-rationality changed my life, because I cannot control for all the other things that happened in recent years. The things I found on LW, I could have found elsewhere. I can’t see the things that didn’t happen to me thanks to LW. I can’t even be sure about the impact on myself, so how could I talk about the community as a whole?