This applies to anything, from excavators to looking up the weather on the Internet. You have to trust your tools, which is especially hard when your intuition cries “Don’t trust! It’s dangerous! It’s useless! It’s wrong!” The technical level, where you go into the details of how your tools work, is not fundamentally different in this respect.
Here I’m focusing not on defining what is actually useful, or right, or true, but on examining the process by which people come to adopt useful tools or methods. A decision made by some specific human ape-brain is a necessary part of that process, even if the tool in question is some ultimate abstract ideal nonsense. I’m brewing a mini-sequence on this (2 or 3 posts).
I think that if there is such a thing as x-rationality, its heart is that mathematical models of rationality based on probability and decision theory are the correct measure against which we compare our own efforts.
At which point you run into the problem of formalization and parameter choice, which is itself the same ape-brain-based decision-making process. The statement that, in some sense, the math of decision theory and probability theory is the correct way of looking at things is somewhat useful, but it doesn’t give the ultimate measure (and it leaves out much important detail). Since x-rationality is about human decision-making, a large part of it is extracting correct decisions from your native architecture, even when those decisions are applied to the formalization of problems in math.
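For concreteness, the benchmark such models supply can be sketched as expected-utility maximization (one standard formalization, not the only one): given a probability distribution $P$ over states $s$ and a utility function $U$, the ideal agent picks

$$a^* = \arg\max_{a} \sum_{s} P(s)\, U(a, s),$$

and our own efforts are graded by how far they fall short of that. The trouble, as noted above, is that choosing $P$ and $U$ in the first place is itself an ape-brain decision.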