On the one hand, you are correct regarding philosophy for humans: we do ethics and meta-ethics to reduce our uncertainty about our utility functions, not as a kind of game-tree planning based on already knowing those functions.
On the other hand, the von Neumann–Morgenstern theorem says that any agent whose preferences over gambles satisfy completeness, transitivity, continuity, and independence behaves as if it were maximising the expectation of some utility function.
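To make the Dutch-book side concrete, here is a toy sketch (my own illustration, with invented numbers, not anything from the thread): an agent whose credences in an event and its complement sum to more than 1 will pay more for the pair of bets than either outcome can pay back.

```python
# Toy Dutch book: an agent whose credences in A and not-A sum to more
# than 1 will accept a pair of bets that guarantees a loss.

def dutch_book_loss(p_a, p_not_a, stake=1.0):
    """Agent pays p * stake for a ticket paying `stake` if its event occurs.
    Returns the agent's net outcome, which is the same in both worlds."""
    cost = (p_a + p_not_a) * stake  # price paid for both tickets
    payout = stake                  # exactly one of the two tickets pays off
    return payout - cost

# Incoherent credences: P(A) = 0.6 and P(not-A) = 0.6
net = dutch_book_loss(0.6, 0.6)
print(round(net, 10))  # -0.2: a guaranteed loss, whichever way A turns out
```

With coherent credences (summing to exactly 1) the same pair of bets nets to zero, which is why the no-Dutch-book condition pins down probability.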
On the third hand, if you have a mathematical structure we can use to make no-Dutch-book decisions that better models the kinds of uncertainty we deal with as embodied human beings in real life, I’m all ears.
I don’t think Dutch book arguments matter in practice. An easy way to avoid being Dutch booked is to refuse bets being offered to you by people you don’t trust.
Not that I fully support utility functions as a useful concept, but having a consistent one also keeps you from Dutch-booking yourself. You can interpret any decision as a bet using utility, and people often make decisions that cost them effort and energy but leave them in the same place they started. So it’s possible trying to figure out one’s utility function can help prevent, e.g., anxious looping behavior.
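As a toy illustration of Dutch-booking yourself (my own sketch; the options and the fee are invented), an agent with the cyclic preferences A ≻ B ≻ C ≻ A will happily pay for a sequence of "upgrades" that returns it exactly to where it started:

```python
# Money pump: cyclic preferences let the world charge you for going in a
# circle. The agent prefers A over B, B over C, and C over A; each swap
# to a preferred item costs a small fee (one unit here).

prefers = {("A", "B"), ("B", "C"), ("C", "A")}  # (better, worse) pairs

def run_pump(start, offers, fee=1):
    holding, spent = start, 0
    for offered in offers:
        if (offered, holding) in prefers:  # the agent prefers the offer...
            holding = offered              # ...so it pays to trade up
            spent += fee
    return holding, spent

holding, spent = run_pump("A", ["C", "B", "A"])
print(holding, spent)  # A 3 -- back where it started, three fees poorer
```

Each individual trade looks like an improvement to the agent; only the cycle as a whole reveals the loop of effort spent for nothing.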
Sure, if you’re right about your utility function. The failure mode I’m worried about is people believing they know what their utility function is and being wrong, maybe disastrously wrong. Consistency is not a virtue if, in reaching for consistency, you make yourself consistent in the wrong direction. Inconsistency can be a hedge against making extremely bad decisions.
The idea is that the universe offers you Dutch-book situations and you make and take bets on uncertain outcomes implicitly.
That said, I concur with your basic point: a universal overarching utility function (not just small ones for a given situation, but a single large one for you as a human) is something humans don’t have and, I think, can’t have. Realising how mathematically helpful it would be if we did still doesn’t mean we can, and trying to turn oneself into an expected utility maximiser is unlikely to work.
(And, I suspect, trying will merely leave you vulnerable to everyday human-level exploits. Remember that the actual threat model we evolved under is other humans, and as long as we’re dealing with humans we need to deal with humans.)
“The idea is that the universe offers you Dutch-book situations”

But does it in fact do that? To the extent that you believe that humans are bad Bayesians, you believe either that the environment in which humans evolved wasn’t constantly Dutch-booking them, or that, if it was, humans evolved some defense against this other than becoming perfect Bayesians.
I do suspect that our thousand shards of desire being contradictory and not resolving is selected for, in that we are thus money-pumped into propagating our genes.
You are of course correct about the concrete scenario of being Dutch Booked in a hypothetical gamble (and I am not a gambler for reasons similar to this: we all know the house always wins!). However, if we’re going to discard the Dutch Book criterion, then we need to replace it with some other desiderata for preventing self-contradictory preferences that cause no-win scenarios.
Even if your own mind comes preprogrammed with decision-making algorithms that can go into no-win scenarios under some conditions, you should, as a conscious self-patching human being, recognize those algorithms and consciously employ others that won’t hurt themselves.
Let me put it this way: probabilities aside, if your decisions form a cyclic preference ordering, rather than even a partial ordering, isn’t there something rather severely bad about that?
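That badness is mechanically checkable: a set of strict preferences can be extended to a consistent ranking exactly when it contains no cycle. A small sketch (my own, assuming preferences are given as (better, worse) pairs):

```python
# Detect whether a set of strict preferences (a, b), meaning "a is
# preferred to b", contains a cycle -- i.e. admits no consistent ranking.

def has_preference_cycle(prefs):
    graph = {}
    for better, worse in prefs:
        graph.setdefault(better, set()).add(worse)
        graph.setdefault(worse, set())
    WHITE, GREY, BLACK = 0, 1, 2
    state = {node: WHITE for node in graph}

    def visit(node):  # depth-first search; revisiting a GREY node = cycle
        state[node] = GREY
        for nxt in graph[node]:
            if state[nxt] == GREY or (state[nxt] == WHITE and visit(nxt)):
                return True
        state[node] = BLACK
        return False

    return any(state[n] == WHITE and visit(n) for n in graph)

print(has_preference_cycle({("A", "B"), ("B", "C"), ("C", "A")}))  # True
print(has_preference_cycle({("A", "B"), ("B", "C"), ("A", "C")}))  # False
```

The cyclic relation in the first call is exactly the money-pumpable case; the acyclic one in the second can be read off as the ranking A ≻ B ≻ C.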
Why?
Do you want to program an agent to put you in a no-win scenario? Do you want to put yourself in a no-win scenario?
Why do you care so much about Dutch booking relative to the myriad other considerations one might care about?
Because it’s a desideratum indicating whether my preferences contain an unconditional, internal contradiction: something that would screw me over eventually, no matter what possible world I land in.
ITYM a desideratum.
On the fourth hand, we do ethics and metaethics to extrapolate better ethics.
Yes, that’s right. We lack knowledge of the total set of concerns which move us, and of the ordering among them: which move us more, which less. Had we total knowledge of this, we would have no need for any such thing as “ethics” or “meta-ethics”; we would simply view our preferences and decision concerns in their full form, use our reason to transform them into a coherent ordering over possible worlds, and act according to that ordering. This sounds strange and alien because I’m using meta-language rather than object-language, but in real life it would mostly mean just having a perfectly noncontradictory way of weighing things like love or roller-skating or reading that would always output a definite way to end up happy and satisfied.
However, we were built by evolution rather than a benevolent mathematician-god, so instead we have various modes of thought-experiment and intuition-pump designed to help us reduce our uncertainty about our own nature.