There is also one more level of rationality, which is often assumed but rarely stated explicitly. I would call it the inner definition of rationality:
“Rationality is behavior which could be captured by a short set of simple rules.” The rules include math, Bayes' theorem, utility functions, the virtues of rationality, you name it.
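For concreteness, Bayes' theorem is perhaps the canonical example of such a short rule: a one-line formula for updating the probability of a hypothesis $H$ on evidence $E$:

$$P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)}$$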
The main problem is the following: does the “inner definition” of rationality correspond to the “outer definition,” that is, rationality as winning? In other words, does knowing the correct short set of rules result in consistent winning?
If we think it does, then rationality manuals are useful: by installing the correct set of short rules, we will attain perfect rationality and start to win.
However, if winning requires something extremely complex, like a very large neural net, which can't be captured by a short set of rules, we need to update our inner definition of rationality.
For example, a complex neural net may win at cat recognition, but it does not know any short set of rules for recognizing a cat, as the sketch below illustrates.
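A minimal sketch of this point, assuming PyTorch is available (the toy model, layer sizes, and the “cat / not cat” task are illustrative, not from the original text): even a tiny classifier stores whatever it “knows” in hundreds of thousands of opaque weights rather than in inspectable rules.

```python
import torch.nn as nn

# A toy image classifier; real cat recognizers are vastly larger.
model = nn.Sequential(
    nn.Flatten(),                  # 32x32 RGB image -> vector of 3072 numbers
    nn.Linear(32 * 32 * 3, 128),   # 3072 inputs -> 128 hidden units
    nn.ReLU(),
    nn.Linear(128, 2),             # two outputs: "cat" / "not cat"
)

# Count the learnable parameters.
n_params = sum(p.numel() for p in model.parameters())
print(f"Learnable parameters: {n_params}")  # 393,602 even for this toy model

# No individual weight is a human-readable rule; whatever the trained
# model "knows" about cats is distributed across all of them at once.
```

There is no step at which such a model could print out its “rules for cats”; its competence and its incompressibility come as a package.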