Because this person’s utility function is bounded, y odds of 0.01x utility is worth a penny to them, even though y odds of x utility is not worth a dollar to them.
So you’re talking about cases where (for example) the utility of winning is 1000, the marginal utility of winning 1/100th as much is 11, and this makes it more worthwhile to buy a partial ticket for a penny when it’s not worthwhile to buy a full ticket for a dollar?
To me this sounds more like any non-linear utility, not specifically bounded utility.
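As a rough numeric illustration of that (all numbers are assumptions, and I'm treating the utility cost of small amounts of money as roughly linear):

```python
# Rough numbers for the case above (all assumed; a penny costs 0.01 utility,
# a dollar costs 1 utility).
y = 1 / 1050          # win probability, picked to sit between 1/1100 and 1/1000
u_win_full  = 1000.0  # utility of winning the full prize x
u_win_small = 11.0    # utility of winning 0.01x (more than 1/100 of 1000)
u_dollar    = 1.0     # utility cost of a dollar
u_penny     = 0.01    # utility cost of a penny

print(y * u_win_small > u_penny)   # True:  buying 1/100 of a ticket for a penny is worth it
print(y * u_win_full  > u_dollar)  # False: buying a full ticket for a dollar is not
```

Any sufficiently concave utility in the winnings produces this kind of gap; boundedness isn't doing the work.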
The person buys a ticket for a penny. Then they are offered a chance to buy another. Because they are using REA, they only count the difference in utility from buying the new ticket, and do not count the ticket they already have, so they buy another.
No. REA still compares utilities of outcomes, it just does subtraction before averaging over outcomes instead of comparison after.
Specifically, the four outcomes being compared are: spend $0.01 then win 0.01x (with probability y), spend $0.01 then lose (probability 1-y), spend $0.02 then win 0.02x (y), spend $0.02 then lose (1-y).
The usual utility calculation is to buy another ticket when
y U(spend $0.02 then win 0.02x) + (1-y) U(spend $0.02 then lose) > y U(spend $0.01 then win 0.01x) + (1-y) U(spend $0.01 then lose).
REA changes this only very slightly. It says to buy another ticket when
y (U(spend $0.02 then win 0.02x) - U(spend $0.01 then win 0.01x)) + (1-y) (U(spend $0.02 then lose) - U(spend $0.01 then lose)) > 0.
In any finite example, it’s easy to prove that they’re identical. There is a difference only when there are infinitely many outcomes and the sums on the LHS and RHS of the usual computation don’t converge. In some cases, the REArranged sum converges.
There is no difference at all with anyone who has a bounded utility function. The averaging over outcomes always produces a finite result in that case, so the two approaches are identical.
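If it helps, here is a minimal sketch of that equivalence with made-up numbers; the REA inequality is just an algebraic rearrangement of the usual one, so the two rules always agree when every term is finite:

```python
# Made-up utilities for the four outcomes above.
y = 0.002  # probability of winning (assumed for illustration)

U = {
    ("$0.02", "win"):  10.9,   # spend $0.02 then win 0.02x
    ("$0.02", "lose"): -0.02,  # spend $0.02 then lose
    ("$0.01", "win"):  11.0,   # spend $0.01 then win 0.01x
    ("$0.01", "lose"): -0.01,  # spend $0.01 then lose
}

# Usual rule: buy another ticket when E[U | two tickets] > E[U | one ticket].
usual = (y * U[("$0.02", "win")] + (1 - y) * U[("$0.02", "lose")]
         > y * U[("$0.01", "win")] + (1 - y) * U[("$0.01", "lose")])

# REA rule: subtract outcome-by-outcome first, then average.
rea = (y * (U[("$0.02", "win")] - U[("$0.01", "win")])
       + (1 - y) * (U[("$0.02", "lose")] - U[("$0.01", "lose")]) > 0)

print(usual, rea)  # the same answer, whatever numbers you plug in
```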
Thanks a lot for the reply. That makes a lot of sense and puts my mind more at ease.
To me this sounds more like any non-linear utility, not specifically bounded utility.
You’re probably right; a lot of my math is shaky. Let me try to explain the genesis of the example I used. I was trying to test REA for transitivity problems because I thought that it might have some further advantages over conventional theories. In particular, it seemed to me that by subtracting before averaging, REA could avoid the two examples from those articles I referenced:
1. The total utilitarian with a bounded utility function who needs to research how many happy people lived in ancient Egypt to establish how “close to the bound” they are, and therefore how much they should discount future utility.
2. The very long-lived egoist with a bounded utility function who is vulnerable to Pascal’s mugging because they are unsure of how many happy years they have lived already (and therefore how “close to the bound” they are).
It seemed like REA, by subtracting past utility that they cannot change before doing the calculation, could avoid both those problems. I do not know if those are real problems, or if a non-linear/bounded utility with a correctly calibrated discount rate could avoid them anyway, but it seemed worthwhile to find ways around them. But I was really worried that REA might create intransitivity issues with bounded utility functions; the lottery example I was using was an example of the kind of intransitivity problem I was thinking of.
It also occurred to me that REA might avoid another peril of bounded utility functions that I read about in this article. Here is the relevant quote:
“if you have a bounded utility function and were presented with the following scary situation: “Heads, 1 day of happiness for you, tails, everyone is tortured for a trillion days” you would (if given the opportunity) increase the stakes, preferring the following situation: “Heads, 2 days of happiness for you, tails, everyone is tortured forever.” (This particular example wouldn’t work for all bounded utility functions, of course, but something of similar structure would.)”
It seems like REA might be able to avoid that. If we imagine that the person is given a choice between two coins, since they have to pick one, the “one day of happiness+trillion days of torture” is subtracted beforehand, so all the person needs to do is weigh the difference. Even if we get rid of the additional complications of computing infinity that “tortured forever” creates, by replacing it with some larger number like “2 trillion days”, I think it might avoid it.
But I might be wrong about that, especially if REA always gives the same answers in finite situations. If that’s the case it just might be better to find a formulation of an unbounded utility function that does its best to avoid Pascal’s Mugging and also the “scary situations” from the article, even if it does it imperfectly.
Unfortunately REA doesn’t change anything at all for bounded utility functions. It only makes any difference for unbounded ones. I don’t get the “long lived egoist” example at all. It looks like it drags in a whole bunch of other stuff like path-dependence and lived experience versus base reality to confound basic questions about bounded versus unbounded utility.
I suspect most of the “scary situations” in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles while accidentally throwing out all the things that make utility functions a reasonable approximation to preference ordering. The quoted example definitely fits that description.
REA doesn’t help at all there, though. You’re still computing U(2X days of torture) - U(X days of torture) which can be made as close to zero as you like for large enough X if your utility function is monotonic in X and bounded below.
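A quick numeric sketch of that, with an assumed functional form for the bounded disutility (any monotonic function bounded below behaves the same way):

```python
# Assumed functional form: monotonic in days of torture, bounded below by -B.
def U(days):
    B = 1000.0    # size of the lower bound (made up)
    scale = 1e12  # days at which the disutility is half-saturated (made up)
    return -B * days / (days + scale)

for X in (1e12, 1e15, 1e18, 1e21):
    print(X, U(2 * X) - U(X))  # the difference shrinks toward zero as X grows
```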
REA doesn’t help at all there, though. You’re still computing U(2X days of torture) - U(X days of torture)
I think I see my mistake now, I was treating a bounded utility function using REA as subtracting the “unbounded” utilities of the two choices and then comparing the post-subtraction results using the bounded utility function. It looks like you are supposed to judge each one’s utility by the bounded function before subtracting them.
Unfortunately REA doesn’t change anything at all for bounded utility functions. It only makes any difference for unbounded ones.
That’s unfortunate. I was really hoping that it could deal with the Egyptology scenario by subtracting the unknown utility value of Ancient Egypt and only comparing the difference in utility between the two scenarios. That way the total utilitarian (or some other type of altruist) with a bounded utility function would not need to research how much utility the people of Ancient Egypt had in order to know how good adding happy people to the present day world is. That just seems insanely counterintuitive.
I suppose there might be some other way around the Egyptology issue. Maybe if you have a bounded or nonlinear utility function that is sloped at the correct rate it will give the same answer regardless of how happy the Ancient Egyptians were. If they were super happy then the value of whatever good you do in the present is in some sense reduced. But the value of whatever resources you would sacrifice in order to do good is reduced as well, so it all evens out. Similarly, if they weren’t that happy, the value of the good you do is increased, but the value of whatever you sacrifice in order to do that good is increased proportionately. So a utilitarian can go ahead and ignore how happy the ancient Egyptians were when doing their calculations.
It seems like this might work if the bounded function makes adding happy lives have diminishing returns at a reasonably steady and proportional rate (but not diminishing so slowly that it is effectively unbounded and can be Pascal’s Mugged).
With the “long lived egoist” example I was trying to come up with a personal equivalent to the Egyptology problem. In the Egyptology problem, a utilitarian does not know how close they are to the “bound” of their bounded utility function because they do not know how happy the ancient Egyptians were. In the long lived egoist example, they do not know how close to the bound they are because they don’t know exactly how happy and long lived their past self was. It also seems insanely counterintuitive to say that, if you have a bounded utility function, you need to figure out exactly how happy you were as a child in order to figure out how good it is for you to be happy in the future. Again, I wonder if a solution might be to have a bounded utility function with returns that diminish at a steady and proportional rate.
I really still don’t know what you mean by “knowing how close to the bound you are”. Utility functions are just abstractions over preferences that satisfy some particular consistency properties. If the happiness of Ancient Egyptians doesn’t affect your future preferences, then they don’t have any role in your utility function over future actions regardless of whether it’s bounded or not.
I really still don’t know what you mean by “knowing how close to the bound you are”.
What I mean is, if I have a bounded utility function where there is some value, X, and (because the function is bounded) X diminishes in value the more of it there is, what if I don’t know how much X there is?
For example, suppose I have a strong altruistic preference that the universe have lots of happy people. This preference is not restricted by time and space, it counts the existence of happy people as a good thing regardless of where or when they exist. This preference is also agent neutral, it does not matter whether I, personally, am responsible for those people existing and being happy, it is good regardless. This preference is part of a bounded utility function, so adding more happy people starts to have diminishing returns the closer one gets to a certain bound. This allows me to avoid Pascal’s Mugging.
However, if adding more people has diminishing returns because the function is bounded, and my preference is not restricted by time, space, or agency, that means that I have no way of knowing what those diminishing returns are unless I know how many happy people have ever existed in the universe. If there are diminishing returns based on how many people there are, total, in the universe, then the value of adding more people in the future might change depending on how many people existed in the past.
That is what I mean by “knowing how close to the bound” I am. If I value some “X”, what if it isn’t possible to know how much X there is? (like I said before, a version of this for egoistic preferences might be if the X is happiness over your lifetime, and you don’t know how much X there is because you have amnesia or something).
I was hoping that I might be able to fix this issue by making a bounded utility function where X diminishes in value smoothly and proportionately. So a million happy people in ancient Egypt has proportional diminishing returns to a billion and so on. So when I am making choices about maximizing X in the present, the amount of X I get is diminished in value, but it is proportionately diminished, so the decisions that I make remain the same. If there was a vast population in the past, the amount of X I can generate has very small value according to a bounded utility function. But that doesn’t matter because it’s all that I can do.
That way, even if X decreases in value the more of it there is, it will not affect any choices I make where I need to choose between different probabilities of getting different amounts of X in the future.
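Here is a toy sketch of what I mean, with a made-up functional form (geometric diminishing returns); the preference between two future gambles comes out the same no matter how much X already exists:

```python
# Made-up bounded function with "proportional" diminishing returns:
# U(total X) = B * (1 - r**total), so each extra unit of X is worth
# r times the previous one.
B, r = 1.0, 0.999

def gain(past, added):
    """Utility gained by adding `added` units of X on top of `past` units."""
    return B * (1 - r ** (past + added)) - B * (1 - r ** past)

for past in (0, 1_000, 10_000):        # unknown amount of past X
    sure_thing = gain(past, 100)        # add 100 units of X for certain
    gamble = 0.6 * gain(past, 200)      # 60% chance of adding 200 units
    print(past, sure_thing > gamble, sure_thing / gamble)
# The preference (and even the ratio of the two values) is the same for
# every value of `past`, so the unknown past never changes the decision.
```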
I suppose I could also solve it by making all of my preferences agent-relative instead of agent-neutral, but I would like to avoid that. Like most people I have a strong moral intuition that my altruistic preferences should be agent-neutral. I suppose it might also get me into conflict with other agents with bounded agent-relative utility functions if we value the same act differently.
If I am explaining this idea poorly, let me try directing you to some of the papers I am referencing. Besides the one I mentioned in the OP, there is this one by Beckstead and Thomas (pages 16, 17, and 18 are where it is discussed).
This whole idea seems to be utterly divorced from what utility means. Fundamentally, utility is based on an ordering of preferences over outcomes. It makes sense to say that you don’t know what the actual outcomes will be, that’s part of decision under risk. It even makes sense to say that you don’t know much about the distribution of outcomes, that’s decision under uncertainty.
The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying “I don’t know what the distribution of outcomes will be”, it’s phrased as “I don’t know what my utility function is”.
I think things will be much clearer when phrased in terms of decision making under uncertainty: “I know what my utility function is, but I don’t know what the probability distribution of outcomes is”.
The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying “I don’t know what the distribution of outcomes will be”, it’s phrased as “I don’t know what my utility function is”.
I think part of it is that I am conflating two different parts of the Egyptology problem. One part is uncertainty: it isn’t possible to know certain facts about the welfare of Ancient Egyptians that might affect how “close to the bound” you are. The other part is that most people have a strong intuition that those facts aren’t relevant to our decisions, whether we are certain of them or not. But there’s this argument that those facts are relevant if you have an altruistic bounded utility function because they affect how much diminishing returns your function has.
For example, I can imagine that if I was an altruistic immortal who was alive during ancient Egypt, I might be unwilling to trade a certainty of a good outcome in ancient Egypt for an uncertain amazingly terrific outcome in the far future because of my bounded utility function. That’s all good; it should help me avoid Pascal’s Mugging. But once I’ve lived until the present day, it feels like I should continue acting the same way I did in the past, continue to be altruistic, but in a bounded fashion. It doesn’t feel like I should conclude that, because of my achievements as an altruist in Ancient Egypt, there is less value to being an altruist in the present day.
In the case of the immortal, I do have all the facts about Ancient Egypt, but they don’t seem relevant to what I am doing now. But in the past, in Egypt, I was unwilling to trade certain good outcomes for uncertain terrific ones because my bounded utility function meant I didn’t value the larger ones linearly. Now that the events of Egypt are in the past and can’t be changed, does that mean I value everything less? Does it matter if I do, if the decrease in value is proportionate? If I treat altruism in the present day as valuable, does that contradict the fact that I discounted that same value back in Ancient Egypt?
I think that’s why I’m phrasing it as being uncertain of what my utility function is. It feels like if I have a bounded utility function, I should be unwilling (within limits) to trade a sure thing for a small possibility of vast utility, thereby avoiding Pascal’s Mugging and similar problems. But it also feels like, once I have that sure thing, and the fact that I have it cannot be changed, I should be able to continue seeking more utility, and how many sure things I have accumulated in the past should not change that.
Yes, splitting the confounding factors out does help. There still seem to be a few misconceptions and confounding things though.
One is that bounded doesn’t mean small. On a scale where the welfare of the entire civilization of Ancient Egypt counts for 1 point of utility, the bound might still be more than 10^100.
Yes, this does imply that after 10^70 years of civilizations covering 10^30 planet-equivalents, the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they’re very altruistic.
the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they’re very altruistic.
Ok, thanks, I get that now, I appreciate your help. The thing I am really wondering is, does this make any difference at all to how that immortal would make decisions once Ancient Egypt is in the past and cannot be changed? Assuming that they have one of those bounded utility functions where their utility is asymptotic to the bound, but never actually reaches it, I don’t feel like it necessarily would.
If Ancient Egypt is in the past and can’t be changed, the immortal might, in some kind of abstract sense, value that randomly selected planet of those 10^30 worlds less than they valued Egypt. But if they are actually in a situation where they are on that random planet, and need to make altruistic decisions about helping the people on that planet, then their decisions shouldn’t really be affected. Even if the welfare of that planet is less valuable to them than the welfare of Ancient Egypt, that shouldn’t matter if their decisions don’t affect Ancient Egypt and only affect the planet. They would be trading less valuable welfare off against other less valuable welfare, so it would even out. Since their utility function is asymptotic to the bound, they would still act to increase their utility, even if the amount of utility they can generate is very small.
I am totally willing to accept the Egyptology argument if all it is saying is that past events that cannot be changed might affect the value of present-day events in some abstract sense (at least if you have a bounded utility function). Where I have trouble accepting it is if those same unchangeable past events might significantly affect what choices you have to make about future events that you can change. If future welfare is only 0.1x as valuable as past welfare, that doesn’t really matter, because future welfare is the only welfare you are able to affect. If it’s only possible to make a tiny difference, then you might as well try, because a tiny difference is better than no difference. The only time when the tininess seems relevant to decisions is Pascal’s Mugging type scenarios where one decision can generate tiny possibilities of huge utility.
Yes, the relative scale of future utility makes no difference in short-term decisions, though noting that short-term to an immortal here can still mean “in the next 10^50 years”!
It might make a difference in the case where someone who thought that they were immortal becomes uncertain of whether what they already experienced was real. That’s the sort of additional problem you get with uncertainty over risk though, not really a problem with bounded utility itself.
Hi, one other problem occurred to me regarding short-term decisions and bounded utility.
Suppose you are in a situation where you have a bounded utility function, plus a truly tremendous amount of utility. Maybe you’re an immortal altruist who has helped quadrillions of people, maybe you’re an immortal egoist who has lived an immensely long and happy life. You are very certain that all of that was real, and it is in the past and can’t be changed.
You then confront a Pascal’s Mugger who threatens to inflict a tremendous amount of disutility unless you give them $5. If you’re an altruist they threaten to torture quintillions of people; if you are an egoist they threaten to torture you for a quintillion years, something like that. As with standard Pascal’s mugging, the odds of them being able to carry this threat out are astronomically small.
In this case, it still feels like you ought to ignore the mugger. Does that make sense considering that, even though your bounded utility function assigns less disvalue to such a threat, it also assigns less value to the $5 because you have so much utility already? Plus, if they are able to carry out their threat, they would be able to significantly lower your utility so that it is much “further away from the bound” than it was before. Does it matter that, as they push your utility further and further “down” away from the bound, utility becomes “more valuable”?
Or am I completely misunderstanding how bounded utility is calculated? I’ve never seen this specific criticism of bounded utility functions before, and much smarter people than me have studied this issue, so I imagine that I must be. I am not sure exactly how adding utility and subtracting disutility is calculated. It seems like if the immortal altruist who has helped quadrillions of people has a choice between gaining 3 utilons, or inflicting 2 disutilons to gain 5 utilons, they should be indifferent between the two, even if they have a ton of utility and very little disutility in their past.
If a Pascal’s Mugger can credibly threaten an entire universe of people with indefinite torture, their promise to never carry out their threat for $5 is more credible than not, and you have good reason to believe that nothing else will work, then seriously we should just pay them. This is true regardless of whether utility is bounded or not.
All of these conditions are required, and all of them are stupid, which is why this answer defies intuition.
If there is no evidence that the mugger is more than an ordinarily powerful person, then the prior credence of their threat is incredibly low, because in this scenario the immortal has observed a universe with ~10^100 lives and none of them were able to do this thing before. What are the odds that this person, now, can do the thing they’re suggesting? I’d suggest lower than 10^-120. Certainly no more than 10^-100 credence that a randomly selected person in the universe would have this power (probably substantially less), and conditional on someone having such power, it’s very unlikely that they could provide no evidence for it.
But even in that tiny conditional, what is the probability that giving them $5 will actually stop them using it? They would have to be not only the universe’s most powerful person, but also one of the universe’s most incompetent extortionists. What are the odds that the same person has both properties? Even lower still. It seems far more likely that giving them $5 will do nothing positive at all and may encourage them to do more extortion, eventually dooming the universe to hell when someone can’t or won’t pay. The net marginal utility of paying them may well be negative.
There are other actions that seem more likely to succeed, such as convincing them that with enormous power there are almost certainly things they could do for which people would voluntarily pay a great deal more than $5.
But really, the plausibility of this scenario is ridiculously, vastly low to the point where it’s not seriously worth dedicating a single neuron firing to it. The chances are vastly greater that the immortal is hallucinating the entire thing, or that in some other ways the encounter is completely different than it seems. In a lifespan of 10^70 years they have almost certainly encountered many such situations.
TLDR: What I really want to know is:
1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any “Pascal’s Mugging” type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to “have” at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?
2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various unchangeable events in the past generated for you? Does everything just rescale when you gain or lose a lot of utility so that the relative value of everything is the same? I expect the answer is going to be “yes” based on our previous discussion, but am a little uncertain because of the various confused thoughts on the subject that I have been having lately.
Full length Comment:
I don’t think I explained my issue clearly. Those arguments about Pascal’s Mugging are addressing it from the perspective of its unlikeliness, rather than using a bounded utility function against it.
I am trying to understand bounded utility functions and I think I am still very confused. What I am confused about right now is how a bounded utility function protects from Pascal’s Mugging at different “points” along the function.
Imagine we have a bounded utility function that has a “S” curve shape. The function goes up and down from 0 and flattens as it approaches the upper and lower bounds.
If someone has utility at around 0, I see how they resist Pascal’s Mugging. Regardless of whether the Mugging is a threat or a reward, it approaches their upper or lower bound and then diminishes. So utility can never “outrace” probability.
But what if they have a level of utility that is close to the upper bound and a Mugger offers a horrible threat? If the Mugger offered a threat that would reduce their utility to 0, would they respond differently than they would to one that would send it all the way to the lower bound? Would the threat get worse as the utility being cancelled out by the disutility got further from the bound and closer to 0? Or is the idea that in order for a threat/reward to qualify as a Pascal’s Mugging it has to be so huge that it goes all the way down to a bound?
And if someone has a level of utility or disutility close to the bound, does that mean disutility matters more so they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one? I don’t think that is the case, I think that, as you said, “the relative scale of future utility makes no difference in short-term decisions.” But I am confused about how.
I think I am probably just very confused in general about utility functions and about bounded utility functions. While some people have criticized bounded utility functions, I have never come across this specific type of criticism before. It seems far more likely that I am confused than that I am the first person to notice an obvious flaw.
Yes, I’m sorry about that. I don’t really think Pascal’s Mugging is a well-founded argument even with unbounded utilities, and that leaked through and led me to ignore the main point of the discussion, which was bounded utilities. So back to that.
If your utility was unbounded below, and your assessment of their credibility is basically unchanged merely by the magnitude of their threat (past some point), then they can always find some threat such that you should pay $5 to avoid even that very tiny chance that paying them is the only thing that prevents it from happening. That’s the essence of Pascal’s Mugging.
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
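A minimal sketch of that threshold, under the (generous) assumption that paying actually prevents the threat; all the numbers are made up:

```python
# All numbers made up; assume, generously, that paying prevents the threat.
bound_range = 2.0e6      # U(best possible world) - U(worst possible world)
u_of_5_dollars = 1e-4    # marginal utility of keeping the $5 (nonzero)

# Bounded case: below this credibility, refuse *any* threat whatsoever.
p_star = u_of_5_dollars / bound_range
print(p_star)

# Unbounded case: no such floor exists. For any credibility p > 0 the mugger
# can quote a threat worse than u_of_5_dollars / p, and paying comes out
# ahead in expectation.
p = 1e-30
print(u_of_5_dollars / p)  # a finite (if absurd) disutility the mugger can always name
```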
And if someone has a level of utility or disutility close to the bound, does that mean disutility matters more so they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one?
Not necessarily. Any uniform scaling and shifting of a utility function makes no difference whatsoever to decisions. So no matter how close they are to a bound, there exists a scaling and shifting that means they make the same decisions in the future as they would have in the past. One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
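Here's a quick check of the scaling-and-shifting point with arbitrary made-up numbers:

```python
# Two made-up gambles, each a list of (probability, utility) pairs.
gamble_A = [(0.7, 3.0), (0.3, -1.0)]
gamble_B = [(0.5, 4.0), (0.5, -0.5)]

def expected(gamble, scale=1.0, shift=0.0):
    # Apply U -> scale*U + shift to every outcome before averaging.
    return sum(p * (scale * u + shift) for p, u in gamble)

for scale, shift in [(1.0, 0.0), (0.001, 0.0), (0.001, -500.0), (7.0, 123.0)]:
    print(scale, shift,
          expected(gamble_A, scale, shift) > expected(gamble_B, scale, shift))
# The preferred gamble is the same under every positive scaling and shift.
```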
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero, and the threat of disutility is bounded. So there always exists some threshold credibility below which no threat (no matter how bad) makes expected utility positive for paying them.
That makes sense. What I am trying to figure out is, does that threshold credibility change depending on “where you are on the curve”? To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function. A lives in a horrifying hell world full of misery. B lives in a happy utopia. So A is a lot “closer” to the lower bound than B. Both A and B are confronted by a Pascal’s Mugger who threatens them with an arbitrarily huge disutility.
Does the fact that agent B is “farther” from the lower bound than agent A mean that the two agents have different credibility thresholds for rejecting the mugger? Because the amount of disutility that B needs to receive to get close to the lower bound is larger than the amount that A needs to receive? Or will their utility functions have the same credibility threshold because they have the same lower and upper bounds, regardless of “how much” utility or disutility they happen to “possess” at the moment? Again, I do not know if this is a coherent question or if it is born out of confusion about how utility functions work.
It seems to me that an agent with a bounded utility function shouldn’t need to do any research about the state of the rest of the universe before dismissing Pascal’s Mugging and other tiny probabilities of vast utilities as bad deals. That is why this question concerns me.
One continuous example of this is an exponential discounter, where the decisions are time-invariant but from a global view the space of potential future utility is exponentially shrinking.
Thanks, that example made it a lot easier to get my head around the idea! I think I understand it better now. This might not be technically accurate, but to me having a uniform rescaling and reshifting of utility that preserves future decisions like that doesn’t even feel like I am truly “valuing” future utility less. I know that in some sense I am, but it feels more like I am merely adjusting and recalibrating some technical details of my utility function in order to avoid “bugs” like Pascal’s Mugging. It feels similar to making sure that all my preferences are transitive to avoid money pumps: the goal is to have a functional decision theory, rather than to change my fundamental values.
Yes, I would expect that the thresholds would be different depending upon the base state of the universe.
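A toy sketch of that, with an assumed S-curve utility over total welfare (everything here is an illustration, not a claim about any particular utility function):

```python
import math

def U(w):
    # Assumed bounded "S-curve" utility over total welfare w.
    return math.tanh(w)

def pay_threshold(w_now, w_threat, cost=0.001):
    # Credibility above which paying beats refusing, assuming (generously)
    # that paying the mugger prevents the threatened drop to w_threat.
    return (U(w_now) - U(w_now - cost)) / (U(w_now) - U(w_threat))

print(pay_threshold(-2.0, -10.0))  # agent A, hell world:   ~2e-3
print(pay_threshold(+2.0, -10.0))  # agent B, happy utopia: ~4e-5
```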
In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating orders of magnitude of the unlikelihood is difficult. These are angels-on-head-of-pin quibbles.
The question of bounded utility can be thought of as “is there any possible scenario so bad (or good) that it cannot be made worse (or better) by any chosen factor no matter how large?”
If your utility function is unbounded, then the answer is no. For every bad or good scenario there exists a different scenario that is 10 times, 10^100 times, or 9^^^9 times worse or better.
My personal view is yes: there are scenarios so bad that a 99% chance of making it “good” is always worth a 1% chance of somehow making it worse. This is never true of someone with an unbounded utility function.
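A toy illustration of that contrast, with made-up numbers:

```python
# Made-up numbers: a bounded function where nothing is worse than U_MIN.
U_MIN, U_GOOD, U_BAD = -1000.0, 0.0, -999.0

# Bounded: even the worst conceivable "worse" outcome can't make the gamble lose.
print(0.99 * U_GOOD + 0.01 * U_MIN > U_BAD)    # True: always take the 99% fix

# Unbounded: some "worse" outcome is always bad enough to flip the decision.
U_WORSE = -1.0e6
print(0.99 * U_GOOD + 0.01 * U_WORSE > U_BAD)  # False
```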
In general though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating orders of magnitude of the unlikelihood is difficult. These are angels-on-head-of-pin quibbles.
That makes sense. So it sounds like the Egyptology Objection is almost a form of Pascal’s Mugging in and of itself. If you are confronted by a Mugger (or some other, slightly less stupid scenario where there is a tiny probability of vast utility or disutility) the odds that you are at a “place” on the utility function that would affect the credibility threshold for the Mugger one way or another are just as astronomical as the odds that the Mugger is giving you. So an agent with a bounded utility function is never obligated to research how much utility the rest of the universe has before rejecting the mugger’s offer. They can just dismiss it as not credible and move on.
And Mugging-type scenarios are the only scenarios where this Egyptology stuff would really come up, because in normal situations with normal probabilities of normal amounts of (dis)utility, the rescaling and reshifting effect makes your “proximity to the bound” irrelevant to your behavior. That makes sense!
I also wanted to ask about something you said in an earlier comment:
I suspect most of the “scary situations” in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles while accidentally throwing out all the things that make utility functions a reasonable approximation to preference ordering. The quoted example definitely fits that description.
I am not sure I understand exactly what you mean by that. How do simplified hypotheticals for testing specific principles make utility functions fail to approximate preference ordering? I have a lot of difficulty with this, where I worry that if I do not have the perfect answer to various simplified hypotheticals it means that I do not understand anything about anything. But I also understand that simplified hypotheticals often cause errors like removing important details and reifying concepts.
My main objection for the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.
People often have strong preferences about potential pasts, presents, and futures as well as the actual present. This includes not just how things are, but also how things could have gone. I would be very dissatisfied if some judges had flipped coins to render a verdict, even if by chance every verdict was correct and the usual process would have delivered some incorrect verdicts.
People have rather strong preferences about their own internal states, not just about the external universe. For example, intransitive preferences are usually supposed to be pumpable, but this neglects the preference people have for not feeling ripped off and similar internal states. This also ties into the previous example where I would feel a justified loss of confidence in the judicial system which is unpleasant in itself, not just in its likelihood of affecting my life or those I care about in the future.
People have path-dependent preferences, not just preferences for some outcome state or other. For example, they may prefer a hypothetical universe in which some people were never born to one in which some people were born, lived, and then were murdered in secret. The final outcomes may be essentially identical, but can be very different in preference orderings.
People often have very strongly nonlinear preferences. Not just smoothly nonlinear, but outright discontinuous. They can also change over time for better or worse reasons, or for none at all.
Decision theories based on eliminating all these real phenomena seem very much less than useful.
My main objection for the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about preferences that people actually have.
The main argument I’ve heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world because their subject is the wellbeing of other people, and the external world is where other people live. The linearity part is sort of an extension of the principle of treating people equally. I might be steelmanning it a little; a lot of the time the argument is less that and more that having preferences that are in any way weird or complex is “arbitrary.” I think this is based on the mistaken notion that “arbitrary” is a synonym for “picky” or “complicated.”
I find this argument unpersuasive because altruism is also about respecting the preferences of others, and the preferences of others are, as you point out, extremely complicated and about all sorts of things other than the current state of the external world. I am also not sure that having nonlinear altruistic preferences is the same thing as not valuing people equally. And I think that our preferences about the welfare of others are often some of the most path-dependent preferences that we have.
EDIT: I have since found this post, which discusses some similar arguments and refutes them more coherently than I do.
Second EDIT: I still find myself haunted by the “scary situation” I linked to, and wishing there were a way to tweak a utility function a little to avoid it, or at least to get a better “exchange rate” than “double the tiny good thing and more-than-double the horrible thing while keeping the probability the same.” I suppose there must be a way, since the article I linked to said it would not work on all bounded utility functions.
Thanks, again for your help :) That makes me feel a lot better. I have the twin difficulties of having severe OCD-related anxiety about weird decision theory problems, and being rather poor at the math required to understand them.
The case of the immortal who becomes uncertain of the reality of their experiences is, I think, what that “Pascal’s Mugging for Bounded Utilities” article I linked to in the OP was getting at. But it’s a relief to see that it’s just a subset of decisions under uncertainty, rather than a special weird problem.
So you’re talking about cases where (for example) the utility of winning is 1000, the marginal utility of winning 1/100th as much is 11, and this makes it more worthwhile to buy a partial ticket for a penny when it’s not worthwhile to buy a full ticket for a dollar?
To me this sounds more like any non-linear utility, not specifically bounded utility.
No. REA still compares utilities of outcomes, it just does subtraction before averaging over outcomes instead of comparison after.
Specifically, the four outcomes being compared are: spend $0.01 then win 0.01x (with probability y), spend $0.01 then lose (probability 1-y), spend $0.02 then win 0.02x (y), spend $0.02 then lose (1-y).
The usual utility calculation is to buy another ticket when
y U(spend $0.02 then win 0.02x) + (1-y) U(spend $0.02 then lose) > y U(spend $0.01 then win 0.01x) + (1-y) U(spend $0.01 and lose).
REA changes this only very slightly. It says to buy another ticket when
y (U(spend $0.02 then win 0.02x) - U(spend $0.01 then win 0.01x)) + (1-y) (U(spend $0.02 then lose) - U(spend $0.01 then lose)) > 0.
In any finite example, it’s easy to prove that they’re identical. There is a difference only when there are infinitely many outcomes and the sums on the LHS and RHS of the usual computation don’t converge. In some cases, the REArranged sum converges.
There is no difference at all with anyone who has a bounded utility function. The averaging over outcomes always produces a finite result in that case, so the two approaches are identical.
Thanks a lot for the reply. That makes a lot of sense and puts my mind more at ease.
You’re probably right, a lot of my math is shaky. Let me try to explain the genesis of the example I used. I was trying to test REA for transitivity problems because I thought that it might have some further advantages to conventional theories. In particular, it seemed to me that by subtracting before averaging, REA could avoid the two examples those articles I references:
1. The total utilitarian with a bounded utility function who needs to research how many happy people lived in ancient Egypt to establish how “close to the bound” they were and therefore how much they should discount future utility.
2. The very long lived egoist with a bounded utility function who vulnerable to Pascal’s mugging because they are unsure of how many happy years they have lived already (and therefore how “close to the bound” they were).
It seemed like REA, by subtracting past utility that they cannot change before doing the calculation, could avoid both those problems. I do not know if those are real problems or if a non-linear/bounded utility with a correctly calibrated discount rate could avoid them anyway, but it seemed worthwhile to find ways around them. But I was really worried that REA might create intransitivity issues with bounded utility functions, the lottery example I was using was an example of the kind of intransitivity problem that I was thinking of.
It also occurred to me that REA might avoid another peril of bounded utility functions that I read about in this article. Here is the relevant quote:
It seems like REA might be able to avoid that. If we imagine that the person is given a choice between two coins, since they have to pick one, the “one day of happiness+trillion days of torture” is subtracted beforehand, so all the person needs to do is weigh the difference. Even if we get rid of the additional complications of computing infinity that “tortured forever” creates, by replacing it with some larger number like “2 trillion days”, I think it might avoid it.
But I might be wrong about that, especially if REA always gives the same answers in finite situations. If that’s the case it just might be better to find a formulation of an unbounded utility function that does its best to avoid Pascal’s Mugging and also the “scary situations” from the article, even if it does it imperfectly.
Unfortunately REA doesn’t change anything at all for bounded utility functions. It only makes any difference for unbounded ones. I don’t get the “long lived egoist” example at all. It looks like it drags in a whole bunch of other stuff like path-dependence and lived experience versus base reality to confound basic questions about bounded versus unbounded utility.
I suspect most of the “scary situations” in these sorts of theories are artefacts of trying to formulate simplified situations to test specific principles, but accidentally throw out all the things that make utility functions a reasonable approximation to preference ordering. The quoted example definitely fits that description.
REA doesn’t help at all there, though. You’re still computing U(2X days of torture) - U(X days of torture) which can be made as close to zero as you like for large enough X if your utility function is monotonic in X and bounded below.
I think I see my mistake now, I was treating a bounded utility function using REA as subtracting the “unbounded” utilities of the two choices and then comparing the post-subtraction results using the bounded utility function. It looks like you are supposed to judge each one’s utility by the bounded function before subtracting them.
That’s unfortunate. I was really hoping that it could deal with the Egyptology scenario by subtracting the unknown utility value of Ancient Egypt and only comparing the difference in utility between the two scenarios. That way the total utilitarian (or some other type of altruist) with a bounded utility function would not need to research how much utility the people of Ancient Egypt had in order to know how good adding happy people to the present day world is. That just seems insanely counterintuitive.
I suppose there might be some other way around the Egyptology issue. Maybe if you have a bounded or nonlinear utility function that is sloped at the correct rate it will give the same answer regardless of how happy the Ancient Egyptians were. If they were super happy then the value of whatever good you do in the present is in some sense reduced. But the value of whatever resources you would sacrifice in order to do good is reduced as well, so it all evens out. Similarly, if they weren’t that happy, the value of the good you do is increased, but the value of whatever you sacrifice in order to do that good is increased proportionately. So a utilitarian can go ahead and ignore how happy the ancient Egyptians were when doing their calculations.
It seems like this might work if the bounded function has adding happy lives have diminishing returns at a reasonably steady and proportional rate (but not so steady that it is effectively unbounded and can be Pascal’s Mugged).
With the “long lived egoist” example I was trying to come up with a personal equivalent to the Egyptology problem. In the Egyptology problem, a utilitarian does not know how close they are to the “bound” of their bounded utility function because they do not know how happy the ancient Egyptians were. In the long lived egoist example, they do not know how close to the bound they are because they don’t know exactly how happy and long lived their past self was. It also seems insanely counterintuitive to say that, if you have a bounded utility function, you need to figure out exactly how happy you were as a child in order to figure out how good it is for you to be happy in the future. Again, I wonder if a solution might be to have a bounded utility function with returns that diminish at a steady and proportional rate.
I really still don’t know what you mean by “knowing how close to the bound you are”. Utility functions are just abstractions over preferences that satisfy some particular consistency properties. If the happiness of Ancient Egyptians doesn’t affect your future preferences, then they don’t have any role in your utility function over future actions regardless of whether it’s bounded or not.
What I mean is, if I have a bounded utility function where there is some value, X, and (because the function is bounded) X diminishes in value the more of it there is, what if I don’t know how much X there is?
For example, suppose I have a strong altruistic preference that the universe have lots of happy people. This preference is not restricted by time and space, it counts the existence of happy people as a good thing regardless of where or when they exist. This preference is also agent neutral, it does not matter whether I, personally, am responsible for those people existing and being happy, it is good regardless. This preference is part of a bounded utility function, so adding more happy people starts to have diminishing returns the closer one gets to a certain bound. This allows me to avoid Pascal’s Mugging.
However, if adding more people has diminishing returns because the function is bounded, and my preference is not restricted by time, space, or agency, that means that I have no way of knowing what those diminishing returns are unless I know how many happy people have ever existed in the universe. If there are diminishing returns based on how many people there are, total, in the universe, then the value of adding more people in the future might change depending on how many people existed in the past.
That is what I mean by “knowing how close to the bound” I am. If I value some “X”, what if it isn’t possible to know how much X there is? (like I said before, a version of this for egoistic preferences might be if the X is happiness over your lifetime, and you don’t know how much X there is because you have amnesia or something).
I was hoping that I might be able to fix this issue by making a bounded utility function where X diminishes in value smoothly and proportionately. So a million happy people in ancient Egypt has proportional diminishing returns to a billion and so on. So when I am making choices about maximizing X in the present, the amount of X I get is diminished in value, but it is proportionately diminished, so the decisions that I make remain the same. If there was a vast population in the past, the amount of X I can generate has very small value according to a bounded utility function. But that doesn’t matter because it’s all that I can do.
That way, even if X decreases in value the more of it there is, it will not effect any choices I make where I need to choose between different probabilities of getting different amounts of X in the future.
I suppose I could also solve it by making all of my preferences agent-relative instead of agent-neutral, but I would like to avoid that. Like most people I have a strong moral intuition that my altruistic preferences should be agent-neutral. I suppose it might also get me into conflict with other agents with bounded agent-relative utility functions if we value the same act differently.
If I am explaining this idea poorly, let me try directing you to some of the papers I am referencing. Besides the one I mentioned in the OP, there is this one by Beckstead and Thomas (pages 16, 17, and 18 are where it discusses it).
This whole idea seems to be utterly divorced from what utility means. Fundamentally, utility is based on an ordering of preferences over outcomes. It makes sense to say that you don’t know what the actual outcomes will be, that’s part of decision under risk. It even makes sense to say that you don’t know much about the distribution of outcomes, that’s decision under uncertainty.
The phrasing here seems to be a confused form of decision making under uncertainty. Instead of the agent saying “I don’t know what the distribution of outcomes will be”, it’s phrased as “I don’t know what my utility function is”.
I think things will be much clearer when phrased in terms of decision making under uncertainty: “I know what my utility function is, but I don’t know what the probability distribution of outcomes is”.
I think part of it is that I am conflating two different parts of the Egyptology problem. One part is uncertainty: it isn’t possible to know certain facts about the welfare of Ancient Egyptians that might affect how “close to the bound” you are. The other part is that most people have a strong intuition that those facts aren’t relevant to our decisions, whether we are certain of them or not. But there’s this argument that those facts are relevant if you have an altruistic bounded utility function because they affect how much diminishing returns your function has.
For example, I can imagine that if I was an altruistic immortal who was alive during ancient Egypt, I might be unwilling to trade a certainty of a good outcome in ancient Egypt for an uncertain amazingly terrific outcome in the far future because of my bounded utility function. That’s all good, it should help me avoid Pascal’s Mugging. But once I’ve lived until the present day, it feels like I should continue acting the same way I did in the past, continue to be altruistic, but in a bounded fashion. It doesn’t feel like I should conclude that, because of my achievements as an altruist in Ancient Egypt, that there is less value to being an altruist in the present day.
In the case of the immortal, I do have all the facts about Ancient Egypt, but they don’t seem relevant to what I am doing now. But in the past, in Egypt, I was unwilling to trade certain good outcomes for uncertain terrific ones because my bounded utility function meant I didn’t value the larger ones linearly. Now that the events of Egypt are in the past and can’t be changed, does that mean I value everything less? Does it matter if I do, if the decrease in value is proportionate? If I treat altruism in the present day as valuable, does that contradict the fact that I discounted that same value back in Ancient Egypt?
I think that’s why I’m phrasing it as being uncertain of what my utility function is. It feels like if I have a bounded utility function, I should be unwilling (within limits) to trade a sure thing for a small possibility of vast utility, thereby avoiding Pascal’s Mugging and similar problems. But it also feels like, once I have that sure thing, and the fact that I have it cannot be changed, I should be able to continue seeking more utility, and how many sure things I have accumulated in the past should not change that.
Yes, splitting the confounding factors out does help. There still seem to be a few misconceptions and confounding things though.
One is that bounded doesn’t mean small. On a scale where the welfare of the entire civilization of Ancient Egypt counts for 1 point of utility, the bound might still be more than 10^100.
Yes, this does imply that after 10^70 years of civilizations covering 10^30 planet-equivalents, the importance to the immortal of the welfare of one particular region of any randomly selected planet of those 10^30 might be less than that of Ancient Egypt. Even if they’re very altruistic.
Ok, thanks, I get that now, I appreciate your help. The thing I am really wondering is, does this make any difference at all to how that immortal would make decisions once Ancient Egypt is in the past and cannot be changed? Assuming that they have one of those bounded utility functions where their utility is asymptotic to the bound, but never actually reaches it, I don’t feel like it necessarily would.
If Ancient Egypt is in the past and can’t be changed, the immortal might, in some kind of abstract sense, value that randomly selected planet of those 10^30 worlds less than they valued Egypt. But if they are actually in a situation where they are on that random planet, and need to make altruistic decisions about helping the people on that planet, then their decisions shouldn’t really be affected. Even if the welfare of that planet is less valuable to them than the welfare of Ancient Egypt, that shouldn’t matter if their decisions don’t affect Ancient Egypt and only affect the planet. They would be trading less valuable welfare off against other less valuable welfare, so it would even out. Since their utility function is asymptotic to the bound, they would still act to increase their utility, even if the amount of utility they can generate is very small.
I am totally willing to accept the Egyptology argument if all it is saying is that past events that cannot be changed might affect the value of present-day events in some abstract sense (at least if you have a bounded utility function). Where I have trouble accepting it is if those same unchangeable past events might significantly affect what choices you have to make about future events that you can change. If future welfare is only 0.1x as valuable as past welfare, that doesn’t really matter, because future welfare is the only welfare you are able to affect. If it’s only possible to make a tiny difference, then you might as well try, because a tiny difference is better than no difference. The only time when the tininess seems relevant to decisions is Pascal’s Mugging type scenarios where one decision can generate tiny possibilities of huge utility.
Yes, the relative scale of future utility makes no difference in short-term decisions, though noting that short-term to an immortal here can still mean “in the next 10^50 years”!
It might make a difference in the case where someone who thought that they were immortal becomes uncertain of whether what they already experienced was real. That’s the sort of additional problem you get with uncertainty over risk though, not really a problem with bounded utility itself.
Hi, one other problem occurred to me in regards to short term decisions and bounded utility.
Suppose you are in a situation where you have a bounded utility function, plus a truly tremendous amount of utility. Maybe you’re an immortal altruist who has helped quadrillions of people, maybe you’re an immortal egoist who has lived an immensely long and happy life. You are very certain that all of that was real, and it is in the past and can’t be changed.
You then confront a Pascal’s Mugger who threatens to inflict a tremendous amount of disutility unless you give the $5. If you’re an altruist they threaten to torture quintillions of people, if you are an egoist they threaten to torture you for a quintillion years, something like that. As with standard Pascal’s mugging, the odds of them be able to carry this threat out are astronomically unlikely.
In this case, it still fells like you ought to ignore the mugger. Does that make sense considering that, even though your bounded utility function assigns less disvalue to such a threat, it also assigns less value to the $5 because you have so much utility already? Plus, if they are able to carry out their threat, they would be able to significantly lower your utility so that it is much “further away from the bound” than it was before. Does it matter that as they push your utility further and further “down” away from the bound, utility becomes “more valuable.”
Or am I completely misunderstanding how bounded utility is calculated? I’ve never seen this specific criticism of bounded utility functions before, and much smarter people than me have studied this issue, so I imagine that I must be? I am not sure exactly how adding utility and subtracting disutility is calculated. It seems like if the immortal altruist who has helped quadrillions of people has a choice between gaining 3 utilons, or inflicting 2 disutilons to gain 5 utilons, they should be indifferent between the two, even if they have a ton of utility and very little disutility in their past.
If a Pascal’s Mugger can credibly threaten an entire universe of people with indefinite torture, their promise never to carry out their threat in exchange for $5 is more credible than not, and you have good reason to believe that nothing else will work, then seriously we should just pay them. This is true regardless of whether utility is bounded or not.
All of these conditions are required, and all of them are stupid, which is why this answer defies intuition.
If there is no evidence that the mugger is more than an ordinarily powerful person, then the prior credence of their threat is incredibly low, because in this scenario the immortal has observed a universe with ~10^100 lives and none of them were able to do this thing before. What are the odds that this person, now, can do the thing they’re suggesting? I’d suggest lower than 10^-120. Certainly no more than 10^-100 credence that a randomly selected person in the universe would have this power (probably substantially less), and conditional on someone having such power, it’s very unlikely that they would be unable to provide any evidence of it.
But even in that tiny conditional, what is the probability that giving them $5 will actually stop them using it? They would have to be not only the universe’s most powerful person, but also one of the universe’s most incompetent extortionists. What are the odds that the same person has both properties? Even lower still. It seems far more likely that giving them $5 will do nothing positive at all, and may encourage them to attempt more extortion, eventually dooming the universe to hell when someone can’t or won’t pay. The net marginal utility of paying them may well be negative.
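To put rough placeholder numbers on that (every figure below is something I’m making up purely for illustration, not a real estimate):

```python
p_has_power     = 1e-100   # prior that this particular person can actually torture the universe
p_pay_prevents  = 1e-6     # given that power, chance that handing over $5 is what stops them
p_pay_backfires = 1e-3     # given that power, chance that paying fuels extortion that ends badly anyway

# Both branches are catastrophes of roughly the same scale, so comparing their probabilities
# is enough: the expected downside of paying outweighs the expected upside.
print(p_has_power * p_pay_prevents < p_has_power * p_pay_backfires)   # True
```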
There are other actions that seem more likely to succeed, such as convincing them that with enormous power there are almost certainly things they could do for which people would voluntarily pay a great deal more than $5.
But really, the plausibility of this scenario is so ridiculously, vastly low that it’s not worth dedicating a single neuron firing to it. The chances are vastly greater that the immortal is hallucinating the entire thing, or that the encounter is in some other way completely different from how it seems. In a lifespan of 10^70 years they have almost certainly encountered many such situations.
TLDR: What I really want to know is:
1. Is an agent with a bounded utility function justified (because of their bounded function) in rejecting any “Pascal’s Mugging” type scenario with tiny probabilities of vast utilities, regardless of how much utility or disutility they happen to “have” at the moment? Does everything just rescale so that the Mugging is an equally bad deal no matter what the relative scale of future utility is?
2. If you have a bounded utility function, are your choices going to be the same regardless of how much utility various unchangeable events in the past generated for you? Does everything just rescale when you gain or lose a lot of utility so that the relative value of everything is the same? I expect the answer is going to be “yes” based on our previous discussion, but am a little uncertain because of the various confused thoughts on the subject that I have been having lately.
Full-length comment:
I don’t think I explained my issue clearly. Those arguments about Pascal’s Mugging are addressing it from the perspective of its unlikeliness, rather than using a bounded utility function against it.
I am trying to understand bounded utility functions and I think I am still very confused. What I am confused about right now is how a bounded utility function protects from Pascal’s Mugging at different “points” along the function.
Imagine we have a bounded utility function with an “S”-curve shape. The function rises and falls from 0 and flattens as it approaches the upper and lower bounds.
If someone has utility at around 0, I see how they resist Pascal’s Mugging. Regardless of whether the Mugging is a threat or a reward, the utility at stake approaches their upper or lower bound and its marginal value diminishes. So utility can never “outrace” probability.
But what if they have a level of utility that is close to the upper bound and a Mugger offers a horrible threat? If the Mugger offered a threat that would reduce their utility to 0, would they respond differently than they would to one that would send it all the way to the lower bound? Would the threat get worse as the utility being cancelled out by the disutility got further from the bound and closer to 0? Or is the idea that in order for a threat/reward to qualify as a Pascal’s Mugging it has to be so huge that it goes all the way down to a bound?
And if someone has a level of utility or disutility close to the bound, does that mean disutility matters more, so that they become a negative utilitarian close to the upper bound and a positive utilitarian close to the lower one? I don’t think that is the case; I think that, as you said, “the relative scale of future utility makes no difference in short-term decisions.” But I am confused about how.
I think I am probably just very confused in general about utility functions and about bounded utility functions. While some people have criticized bounded utility functions, I have never come across this specific type of criticism before. It seems far more likely that I am confused than that I am the first person to notice an obvious flaw.
Yes, I’m sorry about that. I don’t really think Pascal’s Mugging is a well-founded argument even with unbounded utilities, and that leaked through, leading me to ignore the main point of discussion, which was bounded utilities. So back to that.
If your utility were unbounded below, and your assessment of their credibility were essentially unchanged by the magnitude of their threat (past some point), then they could always find some threat such that you should pay $5 to avoid even the very tiny chance that paying them is the only thing that prevents it from happening. That’s the essence of Pascal’s Mugging.
The main “protection” of bounded utility is that at every point on the curve, the marginal utility of money is nonzero and the possible disutility of any threat is bounded. So there always exists some threshold credibility below which no threat, no matter how bad, makes the expected utility of paying them positive.
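Here’s a minimal numerical sketch of that threshold, assuming a tanh-shaped utility bounded at ±1 and treating the $5 as a small fixed utility cost (both of those are just illustrative choices of mine):

```python
import math

LOWER_BOUND = -1.0   # tanh is bounded below by -1, so no threat can push utility below this

def utility(x):
    # stand-in bounded utility function, asymptotic to -1 and +1
    return math.tanh(x)

def payment_threshold(current, cost=1e-6):
    # Paying beats refusing only when
    #   p * (U(current) - U(current - threat)) > U(current) - U(current - cost),
    # and since U(current - threat) can never fall below LOWER_BOUND, the credibility p
    # must exceed this value no matter how bad the threat is.
    return (utility(current) - utility(current - cost)) / (utility(current) - LOWER_BOUND)

for current in (-2.0, 0.0, 2.0):   # near the lower bound, in the middle, near the upper bound
    print(current, payment_threshold(current))
```

The threshold varies a little depending on where you sit on the curve, but it is strictly positive everywhere, which is what blocks the mugging.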
Not necessarily. Any uniform positive scaling and shifting of a utility function makes no difference whatsoever to decisions. So no matter how close they are to a bound, there exists a scaling and shifting under which they make the same decisions in the future as they would have in the past. One continuous example of this is an exponential discounter, whose decisions are time-invariant even though, from a global view, the space of potential future utility is exponentially shrinking.
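A tiny illustration of that invariance, using made-up options and an arbitrary rescaling of a tanh utility: because expected value is linear, a positive scaling plus a shift can never change which option has the higher expected utility.

```python
import math

def preferred(option_a, option_b, utility):
    # Each option is a list of (probability, outcome) pairs; pick the higher expected utility.
    ev = lambda option: sum(p * utility(x) for p, x in option)
    return "A" if ev(option_a) > ev(option_b) else "B"

base     = lambda x: math.tanh(x)                    # a bounded utility function
squeezed = lambda x: 0.001 * math.tanh(x) + 0.998    # same function, scaled and shifted up near a "bound"

option_a = [(0.9, 1.0), (0.1, -3.0)]   # risky option
option_b = [(1.0, 0.5)]                # safe option

print(preferred(option_a, option_b, base), preferred(option_a, option_b, squeezed))  # -> A A
```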
That makes sense. What I am trying to figure out is: does that threshold credibility change depending on “where you are on the curve”? To illustrate this, imagine two altruistic agents, A and B, who have the same bounded utility function. A lives in a horrifying hell world full of misery. B lives in a happy utopia. So A is a lot “closer” to the lower bound than B. Both A and B are confronted by a Pascal’s Mugger who threatens them with an arbitrarily huge disutility.
Does the fact that agent B is “farther” from the lower bound than agent A mean that the two agents have different credibility thresholds for rejecting the mugger, because the amount of disutility that B needs to receive to get close to the lower bound is larger than the amount that A needs to receive? Or will they have the same credibility threshold because they have the same lower and upper bounds, regardless of “how much” utility or disutility they happen to “possess” at the moment? Again, I do not know if this is a coherent question or if it is born out of confusion about how utility functions work.
It seems to me that an agent with a bounded utility function shouldn’t need to do any research about the state of the rest of the universe before dismissing Pascal’s Mugging and other tiny probabilities of vast utilities as bad deals. That is why this question concerns me.
Thanks, that example made it a lot easier to get my head around the idea! I think I understand it better now. This might not be technically accurate, but to me, having a uniform rescaling and reshifting of utility that preserves future decisions like that doesn’t even feel like I am truly “valuing” future utility less. I know that in some sense I am, but it feels more like I am merely adjusting and recalibrating some technical details of my utility function in order to avoid “bugs” like Pascal’s Mugging. It feels similar to making sure that all my preferences are transitive to avoid money pumps: the goal is to have a functional decision theory, rather than to change my fundamental values.
Yes, I would expect that the thresholds would be different depending upon the base state of the universe.
In general, though, this consideration is likely to be irrelevant. Most universes will be nowhere near the upper or lower bounds, and the chance of any individual’s decision being single-handedly responsible for a universe-scale shift toward a utility bound is so tiny that even estimating the order of magnitude of the unlikelihood is difficult. These are angels-on-the-head-of-a-pin quibbles.
The question of bounded utility can be thought of as “is there any possible scenario so bad (or good) that it cannot be made worse (or better) by any chosen factor no matter how large?”
If your utility function is unbounded, then the answer is no. For every bad or good scenario there exists a different scenario that is 10 times, 10^100 times, or 9^^^9 times worse or better.
My personal view is yes: there are scenarios so bad that a 99% chance of making them “good” is always worth a 1% chance of somehow making them worse. This is never true for someone with an unbounded utility function.
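Putting toy numbers on that (entirely my own, with the lower bound at -1 and “good” normalized to 0):

```python
# Bounded case: utility can never fall below the bound B, so if the current scenario is
# already worse than 0.99 * u_good + 0.01 * B, the 99%/1% gamble is worth taking no matter
# how bad the 1% branch turns out to be.
B, u_good, u_bad = -1.0, 0.0, -0.5
worst_possible_gamble = 0.99 * u_good + 0.01 * B     # = -0.01
print(worst_possible_gamble > u_bad)                 # True: take the gamble regardless

# Unbounded case: for any u_bad there is always a downside u_worse extreme enough that the
# same 99%/1% gamble becomes a bad deal, so no scenario is "bad enough" in this sense.
u_worse = (u_bad - 0.99 * u_good) / 0.01 - 1.0
print(0.99 * u_good + 0.01 * u_worse < u_bad)        # True: the gamble can always be made worse
```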
That makes sense. So it sounds like the Egyptology Objection is almost a form of Pascal’s Mugging in and of itself. If you are confronted by a Mugger (or some other, slightly less stupid scenario where there is a tiny probability of vast utility or disutility), the odds that you are at a “place” on the utility function that would affect the credibility threshold for the Mugger one way or another are just as astronomical as the odds that the Mugger is giving you. So an agent with a bounded utility function is never obligated to research how much utility the rest of the universe has before rejecting the mugger’s offer. They can just dismiss it as not credible and move on.
And Mugging-type scenarios are the only scenarios where this Egyptology stuff would really come up, because in normal situations with normal probabilities of normal amounts of (dis)utility, the rescaling and reshifting effect makes your “proximity to the bound” irrelevant to your behavior. That makes sense!
I also wanted to ask about something you said in an earlier comment:
I am not sure I understand exactly what you mean by that. How do simplified hypotheticals for testing specific principles make utility functions fail to approximate preference orderings? I have a lot of difficulty with this, where I worry that if I do not have the perfect answer to various simplified hypotheticals, it means that I do not understand anything about anything. But I also understand that simplified hypotheticals often cause errors like removing important details and reifying concepts.
My main objection to the simplified utility functions is that they are presented as depending only upon the current external state of the world in some vaguely linear and stable way. Every adjective in there corresponds to discarding a lot of useful information about the preferences that people actually have.
People often have strong preferences about potential pasts, presents, and futures as well as the actual present. This includes not just how things are, but also how things could have gone. I would be very dissatisfied if some judges had flipped coins to render their verdicts, even if by chance every verdict was correct and the usual process would have delivered some incorrect verdicts.
People have rather strong preferences about their own internal states, not just about the external universe. For example, intransitive preferences are usually supposed to be pumpable, but this neglects the preference people have for not feeling ripped off, and similar internal states. This also ties into the previous example, where I would feel a justified loss of confidence in the judicial system, which is unpleasant in itself, not just in its likelihood of affecting my life or the lives of those I care about in the future.
People have path-dependent preferences, not just preferences for some outcome state or other. For example, they may prefer a hypothetical universe in which some people were never born to one in which some people were born, lived, and then were murdered in secret. The final outcomes may be essentially identical, but can be very different in preference orderings.
People often have very strongly nonlinear preferences. Not just smoothly nonlinear, but outright discontinuous. They can also change over time for better or worse reasons, or for none at all.
Decision theories based on eliminating all these real phenomena seem very much less than useful.
The main argument I’ve heard for this kind of simplification is that your altruistic, morality-type preferences ought to be about the state of the external world, because their subject is the wellbeing of other people, and the external world is where other people live. The linearity part is sort of an extension of the principle of treating people equally. I might be steelmanning it a little; a lot of the time the argument is less that and more that having preferences that are in any way weird or complex is “arbitrary.” I think this is based on the mistaken notion that “arbitrary” is a synonym for “picky” or “complicated.”
I find this argument unpersuasive because altruism is also about respecting the preferences of others, and the preferences of others are, as you point out, extremely complicated and about all sorts of things other than the current state of the external world. I am also not sure that having nonlinear altruistic preferences is the same thing as not valuing people equally. And I think that our preferences about the welfare of others are often some of the most path-dependent preferences that we have.
EDIT: I have since found this post, which discusses some similar arguments and refutes them more coherently than I do.
Second EDIT: I still find myself haunted by the “scary situation” I linked to, and find myself wishing there were a way to tweak a utility function a little to avoid it, or at least to get a better “exchange rate” than “double the tiny good thing and more than double the horrible thing while keeping the probability the same.” I suppose there must be a way, since the article I linked to said it would not work on all bounded utility functions.
Thanks again for your help :) That makes me feel a lot better. I have the twin difficulties of having severe OCD-related anxiety about weird decision theory problems, and being rather poor at the math required to understand them.
The case of the immortal who becomes uncertain of the reality of their experiences is, I think, what that “Pascal’s Mugging for Bounded Utilities” article I linked to in the OP was getting at. But it’s a relief to see that it’s just a subset of decisions under uncertainty, rather than a special weird problem.