Yes, I’m sorry about that. I don’t really think Pascal’s Mugging is a well-founded argument even with unbounded utilities, and that leaked through, leading me to ignore the main point of discussion, which was bounded utilities. So back to that.
If your utility is unbounded below, and your assessment of their credibility is essentially unchanged by the magnitude of their threat (past some point), then they can always find some threat such that you should pay $5 to avoid even the very tiny chance that paying them is the only thing that prevents it from happening. That’s the essence of Pascal’s Mugging.
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
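This threshold argument can be made concrete with a small numeric sketch. All numbers here are hypothetical: D_MAX stands for the cap a bounded utility function puts on any possible disutility, and EPS for the utility cost of handing over $5.

```python
# A numeric sketch of the threshold-credibility argument.
# D_MAX and EPS are made-up illustrative values, not claims about anyone's values.

D_MAX = 1_000.0   # bound on any possible disutility (assumption)
EPS = 0.001       # utility cost of handing over $5 (assumption)

def gain_from_paying(p_credible, threatened_disutility):
    """EU(pay) - EU(refuse): positive means the mugger's offer is worth taking."""
    d = min(threatened_disutility, D_MAX)  # bounded utility caps the threat
    return p_credible * d - EPS

# Below this credibility, no threat (no matter how bad) makes paying worthwhile:
threshold = EPS / D_MAX

assert gain_from_paying(threshold / 10, 9e99) < 0   # even an astronomical threat fails
assert gain_from_paying(threshold * 10, 9e99) > 0   # above the threshold, the threat bites
```

With an unbounded function there is no such threshold: for any credibility p > 0, the mugger can choose a disutility larger than EPS / p.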
And if someone has a level of utility or disutility close to a bound, does that mean disutility matters more, so that they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one?
Not necessarily. Any uniform (positive) scaling and shifting of a utility function makes no difference whatsoever to decisions. So no matter how close they are to a bound, there exists a scaling and shifting that means they make the same decisions in the future as they would have in the past. One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
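A minimal sketch of this invariance, with made-up lotteries: applying a positive affine transform u ↦ a·u + b to every utility leaves the expected-utility ranking, and hence the decision, unchanged.

```python
# A positive affine transform of utilities (u -> a*u + b, with a > 0)
# never changes which option has the highest expected utility.
# The lotteries and numbers are made up for illustration.

lotteries = {
    "safe":   [(1.0, 10.0)],                  # list of (probability, utility) pairs
    "gamble": [(0.5, 30.0), (0.5, -5.0)],
}

def expected_utility(lottery, a=1.0, b=0.0):
    # E[a*U + b] = a*E[U] + b, so a > 0 preserves the ordering of options.
    return sum(p * (a * u + b) for p, u in lottery)

def best_choice(a=1.0, b=0.0):
    return max(lotteries, key=lambda name: expected_utility(lotteries[name], a, b))

# Shrinking and shifting the whole function (e.g. toward a bound)
# changes nothing about the decision:
assert best_choice() == best_choice(a=0.01, b=-0.99)
```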
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
That makes sense. What I am trying to figure out is whether that threshold credibility changes depending on “where you are on the curve.” To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function. A lives in a horrifying hell world full of misery; B lives in a happy utopia. So A is a lot “closer” to the lower bound than B. Both A and B are confronted by a Pascal’s Mugger who threatens them with an arbitrarily huge disutility.
Does the fact that agent B is “farther” from the lower bound than agent A mean that the two agents have different credibility thresholds for rejecting the mugger, because the amount of disutility B needs to receive to get close to the lower bound is larger than the amount A needs? Or will they have the same credibility threshold because they share the same lower and upper bounds, regardless of “how much” utility or disutility they happen to “possess” at the moment? Again, I do not know if this is a coherent question or if it is born out of confusion about how utility functions work.
It seems to me that an agent with a bounded utility function shouldn’t need to do any research about the state of the rest of the universe before dismissing Pascal’s Mugging and other tiny probabilities of vast utilities as bad deals. That is why this question concerns me.
One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
Thanks, that example made it a lot easier to get my head around the idea! I think I understand it better now. This might not be technically accurate, but to me, having a uniform rescaling and reshifting of utility that preserves future decisions like that doesn’t even feel like truly “valuing” future utility less. I know that in some sense I am, but it feels more like merely adjusting and recalibrating some technical details of my utility function in order to avoid “bugs” like Pascal’s Mugging. It feels similar to making sure that all my preferences are transitive to avoid money pumps: the goal is to have a functional decision theory, rather than to change my fundamental values.
Yes, I would expect that the thresholds would be different depending upon the base state of the universe.
In general, though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating the order of magnitude of its unlikelihood is difficult. These are angels-on-the-head-of-a-pin quibbles.
The question of bounded utility can be thought of as “is there any possible scenario so bad (or good) that it cannot be made worse (or better) by any chosen factor no matter how large?”
If your utility function is unbounded, then the answer is no. For every bad or good scenario there exists a different scenario that is 10 times, 10^100 times, or 9^^^9 times worse or better.
My personal view is yes: there are scenarios so bad that a 99% chance of making it “good” is always worth a 1% chance of somehow making it worse. This is never true of someone with an unbounded utility function.
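To illustrate with hypothetical numbers: if utilities are bounded in [-1, 1] and the current scenario sits near the lower bound, the 1% downside is capped, so the 99/1 gamble always wins; with an unbounded function, a sufficiently bad downside can always outweigh it.

```python
# Hypothetical numbers for the 99%/1% claim. Utilities are bounded
# in [LOWER, UPPER]; the current scenario sits near the lower bound.

LOWER, UPPER = -1.0, 1.0
u_bad = -0.999    # the very bad current scenario
u_good = 0.5      # what "making it good" is worth

# Bounded case: the 1% downside can cost at most a drop to LOWER,
# so the gamble beats staying put no matter what the downside is:
ev_bounded = 0.99 * u_good + 0.01 * LOWER
assert ev_bounded > u_bad

# Unbounded case: a worse scenario can always be made worse still,
# so some conceivable downside turns the same gamble into a bad deal:
u_much_worse = -1e6
ev_unbounded = 0.99 * u_good + 0.01 * u_much_worse
assert ev_unbounded < u_bad
```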
In general, though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating the order of magnitude of its unlikelihood is difficult. These are angels-on-the-head-of-a-pin quibbles.
That makes sense. So it sounds like the Egyptology Objection is almost a form of Pascal’s Mugging in and of itself. If you are confronted by a Mugger (or some other, slightly less stupid scenario where there is a tiny probability of vast utility or disutility) the odds that you are at a “place” on the utility function that would affect the credibility threshold for the Mugger one way or another are just as astronomical as the odds that the Mugger is giving you. So an agent with a bounded utility function is never obligated to research how much utility the rest of the universe has before rejecting the mugger’s offer. They can just dismiss it as not credible and move on.
And Mugging-type scenarios are the only scenarios where this Egyptology stuff would really come up, because in normal situations with normal probabilities of normal amounts of (dis)utility, the rescaling and reshifting effect makes your “proximity to the bound” irrelevant to your behavior. That makes sense!
I also wanted to ask about something you said in an earlier comment:
I suspect most of the “scary situations” in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles, while accidentally throwing out all the things that make utility functions a reasonable approximation to a preference ordering. The quoted example definitely fits that description.
I am not sure I understand exactly what you mean by that. How do simplified hypotheticals for testing specific principles make utility functions fail to approximate a preference ordering? I have a lot of difficulty with this: I worry that if I do not have the perfect answer to various simplified hypotheticals, it means I do not understand anything about anything. But I also understand that simplified hypotheticals often cause errors, like removing important details and reifying concepts.
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.
People often have strong preferences about potential pasts, presents, and futures, as well as the actual present. This includes not just how things are, but also how things could have gone. I would be very dissatisfied if some judges had flipped coins to render verdicts, even if by chance every verdict was correct and the usual process would have delivered some incorrect verdicts.
People have rather strong preferences about their own internal states, not just about the external universe. For example, intransitive preferences are usually supposed to be pumpable, but this neglects the preference people have for not feeling ripped off, and similar internal states. This also ties into the previous example: I would feel a justified loss of confidence in the judicial system, which is unpleasant in itself, not just in its likelihood of affecting my life or those I care about in the future.
People have path-dependent preferences, not just preferences for some outcome state or other. For example, they may prefer a hypothetical universe in which some people were never born to one in which some people were born, lived, and then were murdered in secret. The final outcomes may be essentially identical, but can be very different in preference orderings.
People often have very strongly nonlinear preferences. Not just smoothly nonlinear, but outright discontinuous. They can also change over time for better or worse reasons, or for none at all.
Decision theories based on eliminating all these real phenomena seem very much less than useful.
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.
The main argument I’ve heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world, because their subject is the wellbeing of other people, and the external world is where other people live. The linearity part is sort of an extension of the principle of treating people equally. I might be steelmanning it a little; a lot of the time the argument is less that and more that having preferences that are in any way weird or complex is “arbitrary.” I think this is based on the mistaken notion that “arbitrary” is a synonym for “picky” or “complicated.”
I find this argument unpersuasive because altruism is also about respecting the preferences of others, and the preferences of others are, as you point out, extremely complicated and about all sorts of things other than the current state of the external world. I am also not sure that having nonlinear altruistic preferences is the same thing as not valuing people equally. And I think that our preferences about the welfare of others are often some of the most path-dependent preferences that we have.
EDIT: I have since found this post, which discusses some similar arguments and refutes them more coherently than I do.
Second EDIT: I still find myself haunted by the “scary situation” I linked to, and find myself wishing there was a way to tweak a utility function a little to avoid it, or at least to get a better “exchange rate” than “double the tiny good thing while more-than-doubling the horrible thing, keeping the probability the same.” I suppose there must be a way, since the article I linked to said it would not work on all bounded utility functions.