I think one failure mode is to believe that “rationality” actually exists as an object in the outside world. That is akin to the mind projection fallacy. It is not an object. It is not even a mathematical object, like the digits.
Defining rationality through winning also does not explain what rationality is. Many other things can produce winning: luck, force, power, manipulation of the rules, personal effectiveness, genetics, risk taking, or sheer number of trials. Or simply the interpretation of what counts as winning.
If someone wins, that is not strong evidence of his rationality. Many politicians, billionaires, and sportsmen are winners from the point of view of their peers, but they are not the best rationalists.
So rationality is not a physical object, not a mathematical object, and not something we could extract by solving game theory.
Rationality is also not intelligence, as nobody knows what intelligence is (if they did, they would be able to build AI).
So it may be better to think of rationality as an idealised way of thinking, and not just any way, but one that can be presented as a finite number of finite rules. This last distinction is important, because there is another possibility: that the best way of thinking is a petabyte-sized neural network which works great, but nobody knows how.
By defining rationality as the best way of thinking that can be presented as a finite set of rules, we hope that this definition will converge to one and only one finite object, the best set of rules, and in that case we would be able to say that rationality actually exists. That may not be true: the definition could produce several contradictory sets of rules, or a set of rules for which we cannot mathematically prove that it is actually the best possible one.
Rationality is like communism: a great project that does not yet exist, but toward which some steps can be taken. Actual rationality will probably be created only with AI.
By defining rationality as the best way of thinking that can be presented as a finite set of rules, we hope that this definition will converge to one and only one finite object, the best set of rules, and in that case we would be able to say that rationality actually exists.
I think the average LW user would consider that to be Bayesian probability, which has indeed been used as the basis for an idealized AI (called AIXI).
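For readers unfamiliar with the reference, here is a minimal sketch of the updating rule being pointed to. Bayesian probability prescribes revising a degree of belief in a hypothesis H upon seeing evidence E via Bayes' rule; AIXI (Hutter's idealized agent) layers expected-reward maximization on top of a Bayesian mixture over computable environments, but the core update is just:

P(H | E) = P(E | H) P(H) / P(E)

or, in LaTeX, $P(H \mid E) = \dfrac{P(E \mid H)\,P(H)}{P(E)}$.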