The art of human rationality may have not been much developed because its practitioners lack a sense that vastly more is possible.
And because we are taught that imagining “vastly more” is naive. Wise people know that we live in the best possible universe, because if something could be improved, someone else would have already improved it. (Who are you to believe that you can improve something that higher-status people didn’t?)
This is a “reversed stupidity” against people who believe that vast improvements are trivial. (Usually because they completely ignore coordination problems, perverse incentives, or even the obvious negative side effects of their improvements. “Why don’t we just abolish money and kill/reeducate all the bad people? Then the world would be a nice place without scarcity.”) But the opposite of “trivial” isn’t “impossible”, and great improvements happen regularly; computers and the internet, for example.
Then probably the next answer is that improvement is possible, but only through institutions. Which ignores the fact that (1) institutions don’t grow on trees, they are created by people, and (2) even institutions benefit from having competent people in the right places, so the fact that we need institutions is not an argument against personal competence.
How far the craft of rationality can be taken, depends largely on what methods can be invented for verifying it. Tests seem usefully stratifiable into reputational, experimental, and organizational.
So, how far have we gotten? I think we don’t even have “rationality dojos” competing against each other, and I’d say it’s time we made some. Maybe local meetups could be considered such. So, how specifically can we make them “fight” against each other? What would the tournament be about? Maybe something general like “you have two months to do an impressive thing, then report on it, and the jury will choose the most impressive winner”.
“the nonconformist cluster”, seems to be stunningly bad at coordinating group projects.
Anyone willing to countersignal and make a well-cooperating rationalist group just to spite the others? ;)
Suppose that a country of rationalists is attacked by a country of Evil Barbarians who know nothing of probability theory or decision theory.
This makes me realize, in near mode, how long a way there still is ahead of us. Keeping this website interesting and friendly is already a difficult problem, and it’s not even comparable with running a country and facing a war. But maybe that’s a problem specific to online environments, and a real country, or a town, or a village, or even a large house would have a different dynamic, because people could meet in person and cooperate on real-life projects (which could teach them cooperation skills, and select competent people by things other than posturing).
By the way, what is the largest known group of aspiring rationalists living together?
Practical advice is genuinely much, much more useful when it’s backed up by concrete experimental results, causal models that are actually true, or valid math that is validly interpreted.
This means that, for practical purposes, knowledge of cognitive biases in general is not enough. One also needs knowledge of the domains where one wants to achieve something. Using the division of labor, someone can research a topic for the others and then tell them only as much of the theory as they need to deeply understand the advice.
Which means that a website (or other communication tool) of a rationalist community that actually does something would contain articles on things beyond cognitive biases and math. If you want to build a house for rationalists to live in together, you need to research something about house-building. You can delegate the whole decision to one person, if you trust their expertise enough, but if you want to have a community discussion, you need to educate the community about the basics, so they can participate in the debate meaningfully. (Perhaps the lesson should not be called “rational house-building”, but as long as the rationalists want to debate house-building, some lesson is required. Of course, lessons produced outside the community can be used, no need to reinvent the wheel, unless you believe you can communicate the same insights more efficiently.)
When subjects know about a bias or are warned about a bias, overcorrection is not unheard of as an experimental result.
And when many people do this, underconfidence becomes a standard signal of wisdom. (“No, I can’t do anything, because I am not 100% sure it’s the right action, and I am too clever to do a possibly wrong thing.” “Great thinking! Anyone who does anything is clearly not smart enough to realize this.”) Perverse incentives: in many situations, failure is obvious and humiliating, while not taking action is considered normal. It is good to take a step back and realize that in the long term the failures are likely to be forgotten, while the successes can become stepping stones to further victories. Even if the failures are more frequent, as long as they are not too costly.
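To make the “not too costly” condition concrete, here is a toy expected-value calculation; the symbols p (probability of success), b (benefit of a success), c (cost of a failure) and the specific numbers are my own illustrative assumptions, not anything from the original essays: expected value per attempt = p·b − (1−p)·c, so with p = 0.2, b = 10, c = 1 we get 0.2·10 − 0.8·1 = 1.2 > 0. Even with failure four times as likely as success, repeatedly trying comes out ahead, as long as c stays small relative to b.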