I don’t like utility theory at all, except for making small, fairly immediate choices; it is too much like the old joke about the physicist who says, “Assume a spherical cow...”. If anyone could direct me to something that isn’t vague and hand-wavy about converting real goals and desires to “utils”, I would be interested. Until then, I am getting really tired of it.
In the same way, it’s hopeless to try to assign precise probabilities to every event and do a Bayesian update on everything. But you can still take advice from theorems like “Conservation of expected evidence”. Formalisations might not be good for specifics, but they’re good for telling you if you’re going wrong in some more general way.
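For concreteness, here’s a quick sanity check of that theorem in Python. The numbers are made up; the point is just that the prior must equal the probability-weighted average of the possible posteriors:

```python
# Conservation of expected evidence:
#   P(H) = P(H|E) * P(E) + P(H|~E) * P(~E)
# i.e. your posterior, averaged over the possible observations, equals your prior.
# All numbers below are invented for illustration.

p_e = 0.3              # probability of observing the evidence
p_h_given_e = 0.9      # posterior if you see it
p_h_given_not_e = 0.1  # posterior if you don't

prior = p_h_given_e * p_e + p_h_given_not_e * (1 - p_e)
print(prior)  # 0.34 -- whatever you expect to believe later, you must believe now
```

So if you claim a high prior but expect every possible observation to lower it, the arithmetic simply won’t balance; that’s the kind of general-purpose error the theorem catches.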
I believe von Neumann and Morgenstern showed that you could ask people questions about ordinal preferences over gambles (would you prefer lottery x to lottery y?) and, from enough such questions (if the answers are consistent), construct cardinal preferences, which would be turning real goals and desires into utils.
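Here’s a minimal sketch of the standard-gamble construction behind that result. All the indifference probabilities below are invented for illustration:

```python
# Standard-gamble elicitation (the construction behind the VNM theorem).
# Fix a best outcome (utility 1) and a worst outcome (utility 0). For each
# intermediate outcome, ask: "at what probability p are you indifferent
# between this outcome for sure, and a gamble giving best with probability p
# and worst otherwise?" That p *is* the outcome's cardinal utility.
# The indifference points here are made up.

indifference_p = {
    "win car": 1.0,      # best outcome, by construction
    "win bike": 0.6,     # indifferent vs. a 60% chance at the car
    "win mug": 0.15,
    "win nothing": 0.0,  # worst outcome, by construction
}

def expected_utility(lottery, utils=indifference_p):
    """Expected utility of a lottery given as {outcome: probability}."""
    return sum(p * utils[outcome] for outcome, p in lottery.items())

# The elicited cardinal utilities now rank any gamble, not just the
# pairs that were asked about directly:
a = {"win bike": 1.0}
b = {"win car": 0.5, "win nothing": 0.5}
print(expected_utility(a), expected_utility(b))  # 0.6 vs 0.5 -> prefer the bike
```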
Haven’t various psychological experiments shown that such self-reported preferences are usually inconsistent? (I’ve seen various refs and examples here on LW, although I can’t remember one offhand...)
Oh, sure. (Eliezer has a post on specific human inconsistencies from the OB days.) But this is a theoretical result: it says we can go from specific choices (‘revealed preferences’) to a utility function, i.e. a set of cardinal preferences, that satisfies those choices, provided the choices obey the consistency axioms (completeness, transitivity, and so on). Which is exactly what billswift asked for.
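As a toy illustration of that precondition: pairwise choices can only be turned into a single ranking if they contain no cycles. Here’s a sketch that checks made-up choices for intransitivity before ranking them:

```python
# Check a set of pairwise choices for cycles (A > B > C > A) before
# trying to rank them; intransitive choices fit no utility function.
# The choices below are invented for illustration.

from graphlib import TopologicalSorter, CycleError  # Python 3.9+

choices = [("car", "bike"), ("bike", "mug"), ("car", "mug")]

graph = {}
for winner, loser in choices:
    graph.setdefault(loser, set()).add(winner)  # winner must precede loser

try:
    ranking = list(TopologicalSorter(graph).static_order())
    print("Consistent; ranking best-to-worst:", ranking)
except CycleError:
    print("Intransitive preferences: no utility function satisfies them")
```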
(And I’d note the issue here is not what humans actually do when assessing small probabilities, but what they should do. If we scrap expected utility, it’s not clear what the right thing to do is; that is what my other comment is about.)