First, you have to explain why relying on external math, rather than on a hunch, is a good idea. Second, you need to present a case for why shutting up and multiplying in this particular way is a good idea.
That applies to Bayesian reasoning too, doesn’t it?
That’s in some ways easier—basically this comes down to standard arguments in decision theory, I think...
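To sketch one of those standard arguments concretely: the Dutch-book argument says that degrees of belief violating the probability axioms are exploitable. A minimal illustration with made-up numbers (nobody’s actual beliefs, and the bet structure is only a toy):

```python
# Dutch-book sketch: an agent whose probabilities for an event E and
# its complement sum to more than 1 will pay "fair" prices
# (price = stated probability * payout) for a pair of bets that
# together lose money in every possible outcome.

p_E, p_not_E = 0.7, 0.5            # incoherent: 0.7 + 0.5 = 1.2 > 1
payout = 1.0                       # each bet pays 1 if it wins

cost = (p_E + p_not_E) * payout    # the agent pays 1.2 for both bets
for outcome in ("E happens", "E does not happen"):
    winnings = payout              # exactly one of the two bets pays off
    print(f"{outcome}: net = {winnings - cost:+.1f}")

# Both branches print net = -0.2: a guaranteed loss. That is the
# decision-theoretic case for making beliefs obey probability math.
```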
This applies to anything, including excavators and looking up the weather on the Internet. You have to trust your tools, which is especially hard when your intuition cries “Don’t trust! It’s dangerous! It’s useless! It’s wrong!” The technical level, where you go into the details of how your tools work, is not fundamentally different in this respect.
Here I’m focusing not on defining what is actually useful, or right, or true, but on looking into the process by which people can adopt useful tools or methods. A decision by some specific human ape-brain is a necessary part of that process, even if the tool in question is some ultimate abstract ideal nonsense. I’m brewing a mini-sequence on this (2 or 3 posts).
I think that if there is such a thing as x-rationality, its heart is that mathematical models of rationality based on probability and decision theory are the correct measure against which we compare our own efforts.
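To spell out what that measure usually refers to (textbook statements, not anything specific to this discussion): Bayesian conditioning as the standard for beliefs, and expected-utility maximization as the standard for decisions.

```latex
% Beliefs: update on evidence e by Bayesian conditioning.
\[ P(h \mid e) = \frac{P(e \mid h)\, P(h)}{P(e)} \]

% Decisions: take the action with the highest expected utility.
\[ a^{*} = \arg\max_{a} \sum_{o} P(o \mid a)\, u(o) \]
```

Our own efforts are then measured by how far they deviate from what these formulas prescribe.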
At which point you run into the problem of formalization and choice of parameters, which is the same process of ape-brain-based decision-making. The statement that, in some sense, decision-theory/probability-theory math is the correct way of looking at things is somewhat useful, but it doesn’t give the ultimate measure (and lacks much important detail). Since x-rationality is about human decision-making, a large part of it is extracting correct decisions out of your native architecture, even when those decisions are applied to the formalization of problems in math.
Since real gambles are always also part of the state of the world that one’s utility function is defined over, you also need the moral principle that there shouldn’t be (dis)utility attached to their structure. Strictly speaking, decision theory has nothing to say to the person who considers it evil to gamble with lives (operationalized as not taking the choice with the lowest variance in possible outcomes, or whatever), although it’s easy to make it sound like it does. The moral principle here seems intuitive to me, but I have no idea whether it is in general. (Something to Protect is the only post I can think of that deals with this.)
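A toy example of where that moral principle does the work, with hypothetical numbers: two options with the same expected number of lives saved but very different variance.

```python
# Hypothetical gamble illustrating the variance point above.
# Option A: save 100 lives for certain.
# Option B: 50% chance to save 200 lives, 50% chance to save none.

options = {
    "A (certain)": [(1.0, 100)],
    "B (gamble)":  [(0.5, 200), (0.5, 0)],
}

for name, lottery in options.items():
    mean = sum(p * lives for p, lives in lottery)
    var = sum(p * (lives - mean) ** 2 for p, lives in lottery)
    print(f"{name}: expected lives saved = {mean:.0f}, variance = {var:.0f}")

# Both options have expected lives saved = 100, so a utility function
# linear in lives is indifferent between them. Preferring A requires
# either a concave utility over lives or disutility attached to the
# gamble's structure itself -- an extra moral premise, not a theorem
# of decision theory.
```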