Yes, the relative scale of future utility makes no difference in short-term decisions, though noting that short-term to an immortal here can still mean “in the next 10^50 years”!
It might make a difference in the case where someone who thought that they were immortal becomes uncertain of whether what they already experienced was real. That’s the sort of additional problem you get with uncertainty over risk though, not really a problem with bounded utility itself.
Hi, one other problem occurred to me in regards to short term decisions and bounded utility.
Suppose you are in a situation where you have a bounded utility function, plus a truly tremendous amount of utility. Maybe you’re an immortal altruist who has helped quadrillions of people, maybe you’re an immortal egoist who has lived an immensely long and happy life. You are very certain that all of that was real, and it is in the past and can’t be changed.
You then confront a Pascal’s Mugger who threatens to inflict a tremendous amount of disutility unless you give them $5. If you’re an altruist they threaten to torture quintillions of people; if you’re an egoist they threaten to torture you for a quintillion years, something like that. As with standard Pascal’s Mugging, the odds of them being able to carry out this threat are astronomically low.
In this case, it still feels like you ought to ignore the mugger. Does that make sense considering that, even though your bounded utility function assigns less disvalue to such a threat, it also assigns less value to the $5 because you have so much utility already? Plus, if they are able to carry out their threat, they would be able to significantly lower your utility so that it is much “further away from the bound” than it was before. Does it matter that as they push your utility further and further “down” away from the bound, utility becomes “more valuable”?
Or am I completely misunderstanding how bounded utility is calculated? I’ve never seen this specific criticism of bounded utility functions before, and much smarter people than me have studied this issue, so I imagine that I must be. I am not sure exactly how adding utility and subtracting disutility is calculated. It seems like if the immortal altruist who has helped quadrillions of people has a choice between gaining 3 utilons, or inflicting 2 disutilons to gain 5 utilons, they should be indifferent between the two, even if they have a ton of utility and very little disutility in their past.
If a Pascal’s Mugger can credibly threaten an entire universe of people with indefinite torture, their promise to never carry out their threat for $5 is more credible than not, and you have good reason to believe that nothing else will work, then seriously we should just pay them. This is true regardless of whether utility is bounded or not.
All of these conditions are required, and all of them are stupid, which is why this answer defies intuition.
If there is no evidence that the mugger is more than an ordinarily powerful person, then the prior credence of their threat is incredibly low, because in this scenario the immortal has observed a universe with ~10^100 lives and none of them were able to do this thing before. What are the odds that this person, now, can do the thing they’re suggesting? I’d suggest lower than 10^-120. Certainly no more than 10^-100 credence that a randomly selected person in the universe would have this power (probably substantially less), and conditional on someone having such power, it’s very unlikely that they could provide no evidence for it.
But even in that tiny conditional, what is the probability that giving them $5 will actually stop them using it? They would have to be not only the universe’s most powerful person, but also one of the universe’s most incompetent extortionists. What are the odds that the same person has both properties? Even lower still. It seems far more likely that giving them $5 will do nothing positive at all, and may encourage them to do more extortion, eventually dooming the universe to hell when someone can’t or won’t pay. The net marginal utility of paying them may well be negative.
There are other actions that seem more likely to succeed, such as convincing them that with enormous power there are almost certainly things they could do for which people would voluntarily pay a great deal more than $5.
But really, the plausibility of this scenario is so ridiculously, vastly low that it’s not worth dedicating a single neuron firing to it. The chances are vastly greater that the immortal is hallucinating the entire thing, or that in some other way the encounter is completely different than it seems. In a lifespan of 10^70 years they have almost certainly encountered many such situations.
TLDR: What I really want to know is:
1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any “Pascal’s Mugging” type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to “have” at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?
2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various unchangeable events in the past generated for you? Does everything just rescale when you gain or lose a lot of utility so that the relative value of everything is the same? I expect the answer is going to be “yes” based on our previous discussion, but am a little uncertain because of the various confused thoughts on the subject that I have been having lately.
Full length Comment:
I don’t think I explained my issue clearly. Those arguments about Pascal’s Mugging are addressing it from the perspective of its unlikeliness, rather than using a bounded utility function against it.
I am trying to understand bounded utility functions and I think I am still very confused. What I am confused about right now is how a bounded utility function protects from Pascal’s Mugging at different “points” along the function.
Imagine we have a bounded utility function with an “S”-curve shape. The function goes up and down from 0 and flattens as it approaches the upper and lower bounds.
If someone has utility at around 0, I see how they resist Pascal’s Mugging. Regardless of whether the Mugging is a threat or a reward, its value diminishes as it approaches their upper or lower bound. So utility can never “outrace” probability.
But what if they have a level of utility that is close to the upper bound and a Mugger offers a horrible threat? If the Mugger offered a threat that would reduce their utility to 0, would they respond differently than they would to one that would send it all the way to the lower bound? Would the threat get worse as the utility being cancelled out by the disutility got further from the bound and closer to 0? Or is the idea that in order for a threat/reward to qualify as a Pascal’s Mugging it has to be so huge that it goes all the way down to a bound?
And if someone has a level of utility or disutility close to the bound, does that mean disutility matters more so they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one? I don’t think that is the case, I think that, as you said, “the relative scale of future utility makes no difference in short-term decisions.” But I am confused about how.
I think I am probably just very confused in general about utility functions and about bounded utility functions. While some people have criticized bounded utility functions, I have never come across this specific type of criticism before. It seems far more likely that I am confused than that I am the first person to notice an obvious flaw.
Yes, I’m sorry about that. I don’t really think Pascal’s Mugging is a well-founded argument even with unbounded utilities, and that leaked through, leading me to ignore the main point of discussion, which was bounded utilities. So back to that.
If your utility were unbounded below, and your assessment of the mugger’s credibility were basically unchanged merely by the magnitude of their threat (past some point), then they could always find some threat such that you should pay $5 to avoid even the very tiny chance that paying them is the only thing that prevents it from happening. That’s the essence of Pascal’s Mugging.
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
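As a toy sketch of that protection (my own construction, not anything from the thread — the tanh curve, the $5-as-0.01 cost, and all the units are invented), a bounded utility function makes the mugger’s scaling trick stop working:

```python
import math

def utility(x):
    # Hypothetical S-curve utility bounded in (-1, 1); units are invented.
    return math.tanh(x)

def gain_from_paying(threat, p, base=0.0, cost=0.01):
    # Expected-utility gain of paying (threat then surely averted) over
    # refusing (threat lands with probability p).
    pay = utility(base - cost)
    refuse = p * utility(base - threat) + (1 - p) * utility(base)
    return pay - refuse

# Because utility is bounded below by -1, the most any threat can contribute
# is p * (utility(base) + 1), so scaling the threat up stops helping the
# mugger once the curve has flattened:
for threat in (10.0, 1e6, 1e300):
    print(gain_from_paying(threat, p=1e-6))   # negative every time
```

With a genuinely credible threat (say p = 0.5) the same function comes out positive, which matches the point above: the protection is a credibility threshold, not a blanket refusal.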
And if someone has a level of utility or disutility close to the bound, does that mean disutility matters more so they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one?
Not necessarily. Any uniform scaling and shifting of a utility function makes no difference whatsoever to decisions. So no matter how close they are to a bound, there exists a scaling and shifting that means they make the same decisions in the future as they would have in the past. One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
That makes sense. What I am trying to figure out is: does that threshold credibility change depending on “where you are on the curve”? To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function. A lives in a horrifying hell world full of misery. B lives in a happy utopia. So A is a lot “closer” to the lower bound than B. Both A and B are confronted by a Pascal’s Mugger who threatens them with an arbitrarily huge disutility.
Does the fact that agent B is “farther” from the lower bound than agent A mean that the two agents have different credibility thresholds for rejecting the mugger, because the amount of disutility that B needs to receive to get close to the lower bound is larger than the amount that A needs to receive? Or will they have the same credibility threshold because they have the same lower and upper bounds, regardless of “how much” utility or disutility they happen to “possess” at the moment? Again, I do not know if this is a coherent question or if it is born out of confusion about how utility functions work.
It seems to me that an agent with a bounded utility function shouldn’t need to do any research about the state of the rest of the universe before dismissing Pascal’s Mugging and other tiny probabilities of vast utilities as bad deals. That is why this question concerns me.
One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
Thanks, that example made it a lot easier to get my head around the idea! I think I understand it better now. This might not be technically accurate, but to me having a uniform rescaling and reshifting of utility that preserves future decisions like that doesn’t even feel like I am truly “valuing” future utility less. I know that in some sense I am, but it feels more like I am merely adjusting and recalibrating some technical details of my utility function in order to avoid “bugs” like Pascal’s Mugging. It feels similar to making sure that all my preferences are transitive to avoid money pumps: the goal is to have a functional decision theory, rather than to change my fundamental values.
Yes, I would expect that the thresholds would be different depending upon the base state of the universe.
In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating the order of magnitude of the unlikelihood is difficult. These are angels-on-the-head-of-a-pin quibbles.
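To make the “yes, the thresholds differ” answer concrete, here is a quick numerical sketch using a toy tanh-shaped bounded utility (entirely my own construction; the base states −2 and +2 and the 0.01 cost are invented):

```python
import math

def utility(x):
    # Toy S-curve utility bounded in (-1, 1); all units are invented.
    return math.tanh(x)

def rejection_threshold(base, cost=0.01):
    # Smallest credibility p at which even an unboundedly bad threat could
    # justify paying: paying beats refusing only if
    #     p * (U(base) - U(base - threat)) > U(base) - U(base - cost),
    # and the left-hand factor can never exceed U(base) + 1.
    return (utility(base) - utility(base - cost)) / (utility(base) + 1)

# Agent A lives in a hell world, agent B in a utopia:
print(rejection_threshold(-2.0))   # A's threshold
print(rejection_threshold(+2.0))   # B's threshold (differs from A's, but both
                                   # are enormous next to mugger-level credences)
```

Both thresholds are positive and both dwarf anything like the 10^-120 credences discussed above, which is the sense in which the dependence on base state is an angels-on-a-pin quibble.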
The question of bounded utility can be thought of as “is there any possible scenario so bad (or good) that it cannot be made worse (or better) by any chosen factor no matter how large?”
If your utility function is unbounded, then the answer is no. For every bad or good scenario there exists a different scenario that is 10 times, 10^100 times, or 9^^^9 times worse or better.
My personal view is yes: there are scenarios so bad that a 99% chance of making one “good” is always worth a 1% chance of somehow making it worse. This is never true for someone with an unbounded utility function.
In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating the order of magnitude of the unlikelihood is difficult. These are angels-on-the-head-of-a-pin quibbles.
That makes sense. So it sounds like the Egyptology Objection is almost a form of Pascal’s Mugging in and of itself. If you are confronted by a Mugger (or some other, slightly less stupid scenario where there is a tiny probability of vast utility or disutility) the odds that you are at a “place” on the utility function that would affect the credibility threshold for the Mugger one way or another are just as astronomical as the odds that the Mugger is giving you. So an agent with a bounded utility function is never obligated to research how much utility the rest of the universe has before rejecting the mugger’s offer. They can just dismiss it as not credible and move on.
And Mugging-type scenarios are the only scenarios where this Egyptology stuff would really come up, because in normal situations with normal probabilities of normal amounts of (dis)utility, the rescaling and reshifting effect makes your “proximity to the bound” irrelevant to your behavior. That makes sense!
I also wanted to ask about something you said in an earlier comment:
I suspect most of the “scary situations” in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles, but accidentally throw out all the things that make utility functions a reasonable approximation to preference ordering. The quoted example definitely fits that description.
I am not sure I understand exactly what you mean by that. How do simplified hypotheticals for testing specific principles make utility functions fail to approximate preference ordering? I have a lot of difficulty with this, where I worry that if I do not have the perfect answer to various simplified hypotheticals it means that I do not understand anything about anything. But I also understand that simplified hypotheticals often cause errors like removing important details and reifying concepts.
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.
People often have strong preferences about potential pasts, presents, and futures as well as the actual present. This includes not just how things are, but also how things could have gone. I would be very dissatisfied if some judges had flipped coins to render verdicts, even if by chance every verdict was correct and the usual process would have delivered some incorrect verdicts.
People have rather strong preferences about their own internal states, not just about the external universe. For example, intransitive preferences are usually supposed to be pumpable, but this neglects the preference people have for not feeling ripped off, and similar internal states. This also ties into the previous example: I would feel a justified loss of confidence in the judicial system, which is unpleasant in itself, not just in its likelihood of affecting my life or those I care about in the future.
People have path-dependent preferences, not just preferences for some outcome state or other. For example, they may prefer a hypothetical universe in which some people were never born to one in which some people were born, lived, and then were murdered in secret. The final outcomes may be essentially identical, but can be very different in preference orderings.
People often have very strongly nonlinear preferences. Not just smoothly nonlinear, but outright discontinuous. They can also change over time for better or worse reasons, or for none at all.
Decision theories based on eliminating all these real phenomena seem very much less than useful.
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.
The main argument I’ve heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world, because their subject is the wellbeing of other people, and the external world is where other people live. The linearity part is sort of an extension of the principle of treating people equally. I might be steelmanning it a little; a lot of the time the argument is less that and more that having preferences that are in any way weird or complex is “arbitrary.” I think this is based on the mistaken notion that “arbitrary” is a synonym for “picky” or “complicated.”
I find this argument unpersuasive because altruism is also about respecting the preferences of others, and the preferences of others are, as you point out, extremely complicated and about all sorts of things other than the current state of the external world. I am also not sure that having nonlinear altruistic preferences is the same thing as not valuing people equally. And I think that our preferences about the welfare of others are often some of the most path-dependent preferences that we have.
EDIT: I have since found this post, which discusses some similar arguments and refutes them more coherently than I do.
Second EDIT: I still find myself haunted by the “scary situation” I linked to, and find myself wishing there was a way to tweak a utility function a little to avoid it, or at least get a better “exchange rate” than “doubling the tiny good thing while more-than-doubling the horrible thing and keeping the probability the same.” I suppose there must be a way, since the article I linked to said it would not work on all bounded utility functions.
Thanks, again for your help :) That makes me feel a lot better. I have the twin difficulties of having severe OCD-related anxiety about weird decision theory problems, and being rather poor at the math required to understand them.
The case of the immortal who becomes uncertain of the reality of their experiences is, I think, what that “Pascal’s Mugging for Bounded Utilities” article I linked to in the OP was getting at. But it’s a relief to see that it’s just a subset of decisions under uncertainty, rather than a special weird problem.
Yes, the relative scale of future utility makes no difference in short-term decisions, though noting that short-term to an immortal here can still mean “in the next 10^50 years”!
It might make a difference in the case where someone who thought that they were immortal becomes uncertain of whether what they already experienced was real. That’s the sort of additional problem you get with uncertainty over risk though, not really a problem with bounded utility itself.
Hi, one other problem occurred to me in regards to short term decisions and bounded utility.
Suppose you are in a situation where you have a bounded utility function, plus a truly tremendous amount of utility. Maybe you’re an immortal altruist who has helped quadrillions of people, maybe you’re an immortal egoist who has lived an immensely long and happy life. You are very certain that all of that was real, and it is in the past and can’t be changed.
You then confront a Pascal’s Mugger who threatens to inflict a tremendous amount of disutility unless you give the $5. If you’re an altruist they threaten to torture quintillions of people, if you are an egoist they threaten to torture you for a quintillion years, something like that. As with standard Pascal’s mugging, the odds of them be able to carry this threat out are astronomically unlikely.
In this case, it still fells like you ought to ignore the mugger. Does that make sense considering that, even though your bounded utility function assigns less disvalue to such a threat, it also assigns less value to the $5 because you have so much utility already? Plus, if they are able to carry out their threat, they would be able to significantly lower your utility so that it is much “further away from the bound” than it was before. Does it matter that as they push your utility further and further “down” away from the bound, utility becomes “more valuable.”
Or am I completely misunderstanding how bounded utility is calculated? I’ve never seen this specific criticism of bounded utility functions before, and much smarter people than me have studied this issue, so I imagine that I must be? I am not sure exactly how adding utility and subtracting disutility is calculated. It seems like if the immortal altruist whose helped quadrillions of people has a choice between gaining 3 utilons, or inflicting 2 disutilons to gain 5 utilitons, that they should be indifferent between the two, even if they have a ton of utility and very little disutility in their past.
If a Pascal’s Mugger can credibly threaten an entire universe of people with indefinite torture, their promise to never carry out their threat for $5 is more credible than not, and you have good reason to believe that nothing else will work, then seriously we should just pay them. This is true regardless of whether utility is bounded or not.
All of these conditions are required, and all of them are stupid, which is why this answer defies intuition.
If there is no evidence that the mugger is more than an ordinarily powerful person, then the prior credence of their threat is incredibly low, because in this scenario the immortal has observed a universe with ~10^100 lives and none of them were able to do this thing before. What are the odds that this person, now, can do the thing they’re suggesting? I’d suggest lower than 10^-120. Certainly no more than 10^-100 credence on a randomly selected person in the universe would have this power (probably substantially less), and conditional on someone having such power, it’s very unlikely that they could provide no evidence for it.
But even in that tiny conditional, what is the probability that giving them $5 will actually stop them using it? They would have to be not only the universe’s most powerful person, but also one of the the universe’s most incompetent extortionists. What are the odds that the same person has both properties? Even lower still. It seems far more likely that giving them $5 will do nothing positive at all and may encourage them to do more extortion, eventually dooming the universe to hell when someone can’t or won’t pay. The net marginal utility of paying them may well be negative.
There are other actions that seem more likely to succeed, such as convincing them that with enormous power there are almost certainly things they could do for which people would voluntarily pay a great deal more than $5.
But really, the plausibility of this scenario is ridiculously, vastly low to the point where it’s not seriously worth dedicating a single neuron firing to it. The chances are vastly greater that the immortal is hallucinating the entire thing, or that in some other ways the encounter is completely different than it seems. In a lifespan of 10^70 years they have almost certainly encountered many such situations.
TLDR: What I really want to know is:
1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any “Pascal’s Mugging” type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to “have” at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?
2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various unchangeable events in the past generated for you? Does everything just rescale when you gain or lose a lot of utility so that the relative value of everything is the same? I expect the answer is going to be “yes” based on our previous discussion, but am a little uncertain because of the various confused thoughts on the subject that I have been having lately.
Full length Comment:
I don’t think I explained my issue clearly. Those arguments about Pascal’s Mugging are addressing it from the perspective of its unlikeliness, rather than using a bounded utility function against it.
I am trying to understand bounded utility functions and I think I am still very confused. What I am confused about right now is how a bounded utility function protects from Pascal’s Mugging at different “points” along the function.
Imagine we have a bounded utility function that has a “S” curve shape. The function goes up and down from 0 and flattens as it approaches the upper and lower bounds.
If someone has utility at around 0, I see how they resist Pascal’s Mugging. Regardless of whether the Mugging is a threat or a reward, it approaches their upper or lower bound and then diminishes. So utility can never “outrace” probability.
But what if they have a level of utility that is close to the upper bound and a Mugger offers a horrible threat? If the Mugger offered a threat that would reduce their utility to 0, would they respond differently than they would to one that would send it all the way to the lower bound? Would the threat get worse as the utility being cancelled out by the disutility got further from the bound and closer to 0? Or is the idea that in order for a threat/reward to qualify as a Pascal’s Mugging it has to be so huge that it goes all the way down to a bound?
And if someone has a level of utility or disutility close to the bound, does that mean disutility matters more so they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one? I don’t think that is the case, I think that, as you said, “the relative scale of future utility makes no difference in short-term decisions.” But I am confused about how.
I think I am probably just very confused in general about utility functions and about bounded utility functions. While some people have criticized bounded utility functions, I have never come across this specific type of criticism before. It seems far more likely that I am confused than that I am the first person to notice an obvious flaw.
Yes, I’m sorry about that. I don’t really think Pascal’s Mugging is a well-founded argument even with unbounded utilities, and that leaked through to ignore the main point of discussion which was bounded utilities. So back to that.
If your utility was unbounded below, and your assessment of their credibility is basically unchanged merely by the magnitude of their threat (past some point), then they can always find some threat such that you should pay $5 to avoid even that very tiny chance that paying them is the only thing that prevents it from happening. That’s the essence of Pascal’s Mugging.
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
Not necessarily. Any uniform scaling and shifting of a utility function makes no difference whatsoever to decisions. So no matter how close they are to a bound, there exists a scaling and shifting that means they make the same decisions in the future as they would have in the past. One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
That makes sense. What I am trying to figure out is, does that threshold credibility change depending on “where you are on the curve.” To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function. A lives in a horrifying hell world full of misery. B lives in a happy utopia. So A is a lot “closer” to the lower bound than B. Both A and B are confronted by a Pascal’s Mugger who threatens them with an arbitrarily huge disutility.
Does the fact that agent B is “farther” from lower bound than agent A mean that the two agents have different credibility thresholds for rejecting the mugger? Because the amount of disutility that B needs to receive to get close to the lower bound is larger than the amount that A needs to receive? Or will their utility functions have the same credibility threshold because they have the same lower and upper bounds, regardless of “how much” utility or disutility they happen to “possess” at the moment? Again, I do not know if this is a coherent question or if it is born out of confusion about how utility functions work.
It seems to me that an agent with a bounded utility function shouldn’t need to do any research about the state of the rest of the universe before dismissing Pascal’s Mugging and other tiny probabilities of vast utilities as bad deals. That is why this question concerns me.
Thanks, that example made it a lot easier to get my head around the idea! I think understand it better now. This might not be technically accurate, but to me having a uniform rescaling and reshifting of utility that preserves future decisions like that doesn’t even feel like I am truly “valuing” future utility less. I know that in some sense I am, but it feels more like I am merely adjusting and recalibrating some technical details of my utility function in order to avoid “bugs” like Pascal’s Mugging. It feels similar to making sure that all my preferences are transitive to avoid money pumps, the goal is to have a functional decision theory, rather to to change my fundamental values.
Yes, I would expect that the thresholds would be different depending upon the base state of the universe.
In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for doing a universe scale shifts toward a utility bound is so tiny that even estimating orders of magnitude of the unlikelihood is difficult. These are angels-on-head-of-pin quibbles.
The question of bounded utility can be thought of as “is there any possible scenario so bad (or good) that it cannot be made worse (or better) by any chosen factor no matter how large?”
If your utility function is unbounded, then the answer is no. For every bad or good scenario there exists a different scenario that is 10 times, 10^100 times, or 9^^^9 times worse or better.
My personal view is yes: there are scenarios so bad that a 99% chance of making it “good” is always worth a 1% chance of somehow making it worse. This is never true of someone with an unbounded utility function.
That makes sense. So it sounds like the Egyptology Objection is almost a form of Pascal’s Mugging in and of itself. If you are confronted by a Mugger (or some other, slightly less stupid scenario where there is a tiny probability of vast utility or disutility) the odds that you are at a “place” on the utility function that would affect the credibility threshold for the Mugger one way or another are just as astronomical as the odds that the Mugger is giving you. So an agent with a bounded utility function is never obligated to research how much utility the rest of the universe has before rejecting the mugger’s offer. They can just dismiss it as not credible and move on.
And Mugging-type scenarios are the only scenarios where this Egyptology stuff would really come up, because in normal situations with normal probabilities of normal amounts of (dis)utility, the rescaling and reshifting effect makes your “proximity to the bound” irrelevant to your behavior. That makes sense!
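That local irrelevance of “proximity to the bound” can also be sketched numerically. Assuming the same illustrative tanh shape with a bound far larger than the stakes (all numbers hypothetical), an ordinary choice between a sure gain and a modest gamble comes out the same way at very different baselines:

```python
import math

def u(x, bound=1000.0):
    # Bounded utility with stakes tiny relative to the bound, so the
    # curve is locally almost linear around any moderate baseline.
    return bound * math.tanh(x / bound)

def prefers_gamble(baseline):
    """Sure gain of 3 utilons vs. a 50% shot at 7 (hypothetical numbers)."""
    eu_sure = u(baseline + 3)
    eu_gamble = 0.5 * u(baseline + 7) + 0.5 * u(baseline)
    return eu_gamble > eu_sure

# The ordering is the same whether your past holds little or much utility:
assert all(prefers_gamble(b) for b in (-100.0, 0.0, 100.0))
```

For normal-sized stakes, the curvature correction is negligible, so the preference ordering does not depend on how much utility you already have; only the mugger-style extremes could ever probe the bound.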
I also wanted to ask about something you said in an earlier comment:
I am not sure I understand exactly what you mean by that. How do simplified hypotheticals for testing specific principles make utility functions fail to approximate preference orderings? I have a lot of difficulty with this, where I worry that if I do not have the perfect answer to various simplified hypotheticals, it means that I do not understand anything about anything. But I also understand that simplified hypotheticals often cause errors, like removing important details and reifying concepts.
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world, in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about the preferences that people actually have.
People often have strong preferences about potential pasts, presents, and futures, as well as the actual present. This includes not just how things are, but also how things could have gone. I would be very dissatisfied if some judges had flipped coins to render verdicts, even if by chance every verdict was correct and the usual process would have delivered some incorrect ones.
People have rather strong preferences about their own internal states, not just about the external universe. For example, intransitive preferences are usually supposed to be pumpable, but this neglects the preference people have for not feeling ripped off and similar internal states. This also ties into the previous example where I would feel a justified loss of confidence in the judicial system which is unpleasant in itself, not just in its likelihood of affecting my life or those I care about in the future.
People have path-dependent preferences, not just preferences for some outcome state or other. For example, they may prefer a hypothetical universe in which some people were never born to one in which some people were born, lived, and then were murdered in secret. The final outcomes may be essentially identical, but can be very different in preference orderings.
People often have very strongly nonlinear preferences. Not just smoothly nonlinear, but outright discontinuous. They can also change over time for better or worse reasons, or for none at all.
Decision theories based on eliminating all these real phenomena seem very much less than useful.
The main argument I’ve heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world, because their subject is the wellbeing of other people, and the external world is where other people live. The linearity part is sort of an extension of the principle of treating people equally. I might be steelmanning it a little; a lot of the time the argument is less that and more that having preferences that are in any way weird or complex is “arbitrary.” I think this is based on the mistaken notion that “arbitrary” is a synonym for “picky” or “complicated.”
I find this argument unpersuasive because altruism is also about respecting the preferences of others, and the preferences of others are, as you point out, extremely complicated and about all sorts of things other than the current state of the external world. I am also not sure that having nonlinear altruistic preferences is the same thing as not valuing people equally. And I think that our preferences about the welfare of others are often some of the most path-dependent preferences that we have.
EDIT: I have since found this post, which discusses some similar arguments and refutes them more coherently than I can.
Second EDIT: I still find myself haunted by the “scary situation” I linked to, and find myself wishing there were a way to tweak a utility function a little to avoid it, or at least get a better “exchange rate” than “doubling the tiny good thing while more than doubling the horrible thing, keeping the probability the same.” I suppose there must be a way, since the article I linked to said it would not work on all bounded utility functions.
Thanks, again for your help :) That makes me feel a lot better. I have the twin difficulties of having severe OCD-related anxiety about weird decision theory problems, and being rather poor at the math required to understand them.
The case of the immortal who becomes uncertain of the reality of their experiences is, I think, what that “Pascal’s Mugging for Bounded Utilities” article I linked to in the OP was getting at. But it’s a relief to see that it’s just a subset of decisions under uncertainty, rather than a special weird problem.