A utility function shouldn’t suggest anything. It is simply an abstract mathematical function that is guaranteed to exist by the VNM utility theorem. If you’re letting an unintuitive mathematical theorem tell you to do things that you don’t want to do, then something is wrong.
Again, the problem is that there is a namespace collision between the utility function guaranteed by VNM, which we are maximizing the expected value of, and the utility function that we intuitively associate with our preferences, which we (probably) aren’t maximizing the expected value of. VNM just says that if you have consistent preferences, then there is some function whose expected value you are maximizing. It doesn’t say that this function has anything to do with the degree to which you want various things to happen.
I seem to be having a lot of trouble getting this point across, so let me try to put it another way: Ignore Kolmogorov complexity, priors, etc. for a moment, and if you can, forget about your utility function and just ask yourself what you would want. Now imagine the worst possible thing that could happen (you can even suppose that both time and space are potentially infinite, so infinitely many people being tortured for infinite extents of time is fine). Let us call this thing X. Suppose that you have somehow calculated that, with probability 10^(-100), the mugger will cause X to happen if you don’t pay him $5. Would you pay him? If you would pay him, then why?
I am actually quite interested in the answer to this question, because I am having trouble diagnosing the precise source of my disagreement on this issue. And even though I said to forget about utility functions, if you really think that is the answer to the “why” question, feel free to use them in your argument. As I said, at this point I am most interested in determining why we disagree, because previous discussions with other people suggest that there is some hidden inferential distance afoot.
As an aside, if you wouldn’t pay him then the definition of utility implies that u($5) > 10^(-100) u(X), which implies that u(X), and therefore the entire utility function, is bounded.
Now imagine the worst possible thing that could happen
As was pointed out in the other subthread, you are assuming the conclusion you wish to prove here, viz. that the utility function is (necessarily) bounded.
Fine, I was slightly sloppy in my original proof (not only in the way you pointed out, but also in keeping track of signs). Here is a rigorous version:
Suppose that there is nothing so bad that you would pay $5 to stop it from happening with probability 10^(-100). Let X be a state of the universe. Then u(-$5) < 10^(-100) u(X), so u(X) > 10^(100) u(-$5). Since u(X) > 10^(100) u(-$5) for all X, u is bounded below.
Similarly, suppose that there is nothing so good that you would pay $5 to have a 10^(-100) chance of it happening. Then u($5) > 10^(-100) u(X) for all X, so u(X) < 10^(100) u($5), hence u is also bounded above.
Now I’ve given proofs that u is bounded both above and below, without looking at argmax u or argmin u (which incidentally probably don’t exist even if u is bounded; it is much more likely that u asymptotes out).
My proof is still not entirely rigorous; for instance, u(-$5) and u($5) will in general depend on my current level of income / savings. If you really want me to, I can write everything out completely rigorously, but I’ve been trying to avoid it because I find that diving into unnecessary levels of rigor only obscures the underlying intuition (and I say this as someone who studies math).
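For reference, the two inequalities can be collected in one place. This is a minimal restatement of the argument just given, under the simplifying assumptions already flagged: the status quo is normalized to utility 0, and u(-$5) and u($5) are treated as wealth-independent constants.

```latex
% Lower bound: for every state X, refusing to pay $5 to avert a 10^{-100} chance of X means
\[
u(-\$5) \;<\; 10^{-100}\,u(X)
\quad\Longrightarrow\quad
u(X) \;>\; 10^{100}\,u(-\$5) \quad\text{for all } X,
\]
% so u is bounded below by the constant 10^{100} u(-$5). Upper bound, symmetrically:
\[
u(\$5) \;>\; 10^{-100}\,u(X)
\quad\Longrightarrow\quad
u(X) \;<\; 10^{100}\,u(\$5) \quad\text{for all } X.
\]
```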
Again, why assume this?
Your question has two possible meanings to me, so I’ll try to answer both.
Meaning 1: Why is this a reasonable assumption in the context of the current debate?
Answer: Because if there was something that bad, then you get Pascal’s mugged in my hypothetical situation. What I have shown is that either you would give Pascal $5 in that scenario, or your utility function is bounded.
Meaning 2: Why is this a reasonable assumption in general?
Answer: Because things that occur with probability 10^(-100) don’t actually happen. Actually, 10^(-100) might be a bit high, but certainly things that occur with probability 10^(-10^(100)) don’t actually happen.
Because if there was something that bad, then you get Pascal’s mugged in my hypothetical situation
You seem not to have understood the post. The worse something is, the more difficult it is for the mugger to make the threat credible. There may be things that are so bad that I (or my hypothetical AI) would pay $5 not to raise their probability to 10^(-100), but such things have prior probabilities that are lower than 10^(-100), and a mugger uttering the threat will not be sufficient evidence to raise the probability to 10^(-100).
Answer: Because things that occur with probability 10^(-100) don’t actually happen. Actually, 10^(-100) might be a bit high, but certainly things that occur with probability 10^(-10^(100)) don’t actually happen.
We don’t need to declare 10^(-100) equal to 0. 10^(-100) is small enough already.
I have to admit that I did find the original post somewhat confusing. However, let me try to make sure that I understood it. I would summarize your idea as saying that we should have u(X) = O(1/p(X)), where u is the utility function and p is our posterior estimate of X. Is that correct? Or do you want p to be the prior estimate? Or am I completely wrong?
Yes, p should be the prior estimate. The point being that the posterior estimate is not too different from the prior estimate in the “typical” mugging scenario (i.e. someone says “give me $5 or I’ll create 3^^^^3 units of disutility” without specifying how in enough detail).
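A minimal numerical sketch of why this condition defuses the typical mugging: the constant C, the prior, and the Bayes factor below are placeholders (outcomes on the scale of 3^^^^3 cannot be represented directly), chosen only to show that the expected loss from ignoring the threat is capped by the Bayes factor times C, no matter how small the prior is.

```python
# Sketch of the u(X) = O(1/p(X)) condition, with p the *prior* probability of X.
# All numbers are illustrative placeholders, not figures from the discussion above.
from fractions import Fraction

C = 100                          # assumed constant in |u(X)| <= C / p_prior(X)
p_prior = Fraction(1, 10**1000)  # stand-in for an astronomically small prior on X
bayes_factor = 10                # how much an unbacked verbal threat shifts the odds
p_posterior = bayes_factor * p_prior   # good approximation when p is this small

u_X = C / p_prior                # the largest disutility the condition permits for X

expected_loss = p_posterior * u_X
print(expected_loss)             # 1000 = bayes_factor * C, independent of p_prior
```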
So, backing up, let me put forth my biggest objections to your idea, as I see it. I will try to stick to only arguing about this point until we can reach a consensus.
I do not believe there is anything so bad that you would trade $5 to prevent it from happening with probability 10^(-500). If there is, please let me know. If not, then this is a statement that is independent of your original priors, and which implies (as noted before) that your utility function is bounded.
I concede that the condition u(X) = O(1/p(X)) implies that one would be immune to the classical version of the Pascal’s mugging problem. What I am trying to say now is that it fails to be immune to other variants of Pascal’s mugging that would still be undesirable. While a good decision theory should certainly be immune to [the classical] Pascal’s mugging, a failure to be immune to other mugging variants still raises issues.
My claim (which I supported with math above) is that the only way to be immune to all variants of Pascal’s mugging is to have a bounded utility function.
My stronger claim, in case you agree with all of the above but think it is irrelevant, is that all humans have a bounded utility function. But let’s avoid arguing about this point until we’ve resolved all of the issues in the preceding paragraphs.
I’m a little suspicious of talking about “the utility function” of a human being. We are messy biological creatures whose behavior is determined, most directly, by electrochemical stuff and not economic stuff. Our preferences are not consistent from minute to minute, and there is a lot of inconsistency between our stated and revealed preferences. We are very bad at computing probabilities. And so on. It’s better to speak of a given utility function approximating the preferences of a given human being. I think we can (we have to) leave this notion vague and still make progress.
My stronger claim, in case you agree with all of the above but think it is irrelevant, is that all humans have a bounded utility function.
I think that this is plausible. In the vaguer language of the paragraph above, we could wonder if “any utility function that approximates the preferences of a human being is bounded.” The partner of this claim, that events with probability 10^(-500) can’t happen, is also plausible. For instance, they would both follow from any kind of ultrafinitism. But however plausible we find it, none of us yet know whether it’s the case, so it’s valuable to consider alternatives.
Write X for a terrible thing (or, if you prefer the philanthropy version, a wonderful thing) that has probability 10^(-500). To pay $5 to prevent X means by revealed preference that |U(X)| > 5*10^(500). Part of komponisto’s proposal is that, for a certain kind of utility function, this would imply that X is very complicated—too complicated for him to write down. So he couldn’t prove to you (not in this medium!) that so-and-so’s utility function can take values this high by describing an example of something that terrible. It doesn’t follow that U(X) is always small—especially not if we remain agnostic about ultrafinitism.
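One rough way to quantify “too complicated to write down”, assuming (as a gloss, not something stated above) a Kolmogorov-style prior p(X) ≈ 2^(-K(X)) together with the proposed bound |u(X)| ≤ C/p(X):

```latex
% |U(X)| > 5 \times 10^{500} together with |u(X)| \le C\,2^{K(X)} forces
\[
2^{K(X)} \;>\; \frac{5\times 10^{500}}{C}
\quad\Longrightarrow\quad
K(X) \;\gtrsim\; \log_2\!\bigl(5\times 10^{500}\bigr) - \log_2 C \;\approx\; 1663 - \log_2 C \ \text{bits},
\]
% i.e. such an X would need well over 1600 bits of genuinely incompressible description.
```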
Okay, thanks. So it is the prior, not the posterior, which makes more sense (as the posterior will in general be changing while the utility function remains constant).
My objection to this is that, even though you do deal with the “typical” mugging scenario, you run into issues in other scenarios. For instance, suppose that your prior for X is 10^(-1000), and your utility for X is 10^(750), which I believe fits your requirements. Now suppose that I manage to argue your posterior up to 10^(-500). Either you can get mugged (for huge amounts of money) in this circumstance, or your utility for X is actually smaller than 10^(500).
Getting “mugged” in such a scenario doesn’t seem particularly objectionable when you consider the amount of work involved in raising the probability by a factor of 10^(500). It would be money well earned, it seems to me.
I don’t see how this is relevant. It doesn’t change the fact that you wouldn’t actually be willing [I don’t think?] to make such a trade.
The mugger also doesn’t have to do all the work of raising your probability by a factor of 10^(500), the universe can do most (or all) of it. Remember, your priors are fixed once and for all at the beginning of time.
In the grand scheme of things, 10^(500) isn’t all that much. It’s just 1661 bits.
you wouldn’t actually be willing [I don’t think?] to make such a trade.
Why shouldn’t I be? A 10^(-500) chance of utility 10^(750) yields an expected utility of 10^(250). This sounds like a pretty good deal to me, especially when you consider that “expected utility” is the technical term for “how good the deal is”.
(I’ll note at this point that we’re no longer discussing Pascal’s mugging, which is a problem in epistemology, about how we know the probability of the mugger’s threat is so low; instead, we’re discussing ordinary expected utility maximization.)
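The arithmetic in this exchange, checked with exact integer rationals; the prior, posterior, and utility are the hypothetical figures already stipulated above, nothing new.

```python
# Exact check of the numbers stipulated in the exchange above.
from fractions import Fraction

prior     = Fraction(1, 10**1000)  # stipulated prior probability of X
posterior = Fraction(1, 10**500)   # after being argued up by a factor of 10^(500)
u_X       = 10**750                # stipulated utility of X (compatible with u(X) = O(1/prior))

assert posterior / prior == 10**500   # the claimed size of the update
assert posterior * u_X == 10**250     # expected utility of a 10^(-500) chance of X
print(posterior * u_X)                # 10^250, which is what makes the deal look good
```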
The mugger also doesn’t have to do all the work of raising your probability by a factor of 10^(500), the universe can do most (or all) of it. Remember, your priors are fixed once and for all at the beginning of time.
You postulated that my prior was 10^(-1000), and that the mugger raised it to 10^(-500). If other forces in the universe cooperated with the mugger to accomplish this, I don’t see how that changes the decision problem.
In the grand scheme of things, 10^(500) isn’t all that much. It’s just 1661 bits.
In which case, we can also say that a posterior probability of 10^(-500) is “just” 1661 bits away from even odds.
“expected utility” is the technical term for “how good the deal is”.
I know what the definition of utility is. My claim is that there does not exist any event such that you would care about it happening with probability 10^(-500) enough to pay $5.
You postulated that my prior was 10^(-1000), and that the mugger raised it to 10^(-500). If other forces in the universe cooperated with the mugger to accomplish this, I don’t see how that changes the decision problem.
You said that you would be okay with losing $5 to a mugger who raised your posterior by a factor of 10^(500), because they would have to do a lot of work to do so. I responded by pointing out that they wouldn’t have to do much work after all. If this doesn’t change the decision problem (which I agree with) then I don’t see how your original reasoning that it’s okay to get mugged because the mugger would have to work hard to mug you makes any sense.
At the very least, I consider making contradictory [and in the first case, rather flippant] responses to my comments to be somewhat logically rude, although I understand that you are the OP on this thread, and thus have to reply to many people’s comments and might not remember what you’ve said to me.
I believe that this entire back-and-forth is derailing the discussion, so I’m going to back up a few levels and try to start over.
In which case, we can also say that a posterior probability of 10^(-500) is “just” 1661 bits away from even odds.
Granted.
You said that you would be okay with losing $5 to a mugger who raised your posterior by a factor of 10^(500), because they would have to do a lot of work to do so. I responded by pointing out that they wouldn’t have to do much work after all. If this doesn’t change the decision problem (which I agree with) then I don’t see how your original reasoning that it’s okay to get mugged because the mugger would have to work hard to mug you makes any sense.
What determines how much I am willing to pay is not how hard the mugger works per se, but how credible the threat is compared to its severity. (I thought this went without saying, and that you would be able to automatically generalize from “the mugger working hard” to “the mugger’s credibility increasing by whatever means”.) Going from p = 10^(-1000) to p = 10^(-500) may not sound like a “huge” increase in credibility, but it is. Or at least, if you insist that it isn’t, then you also have to concede that going from p = 10^(-500) to p = 1⁄2 isn’t that big of a credibility increase either, because it’s the same number of bits. In fact, measured in bits, going from p = 10^(-1000) to p = 10^(-500) is one-third of the way to p = 1-10^(-500) !
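A small check of the bit-counting invoked on both sides here, with “bits” meaning log-odds and the probabilities being the hypothetical ones already in play:

```python
# Log-odds, in bits, for the hypothetical probabilities discussed above.
from math import log2

def log_odds_bits(p_numer, p_denom):
    """log2 of the odds p/(1-p) for p = p_numer/p_denom, computed without underflow."""
    return log2(p_numer) - log2(p_denom - p_numer)

start = log_odds_bits(1, 10**1000)    # p = 10^(-1000): about -3322 bits
mid   = log_odds_bits(1, 10**500)     # p = 10^(-500):  about -1661 bits
even  = 0.0                           # p = 1/2
top   = -log_odds_bits(1, 10**500)    # p = 1 - 10^(-500): about +1661 bits

print(round(mid - start))                        # 1661: the update the mugger needs
print(round(even - mid))                         # 1661: the same distance again to even odds
print(round((mid - start) / (top - start), 3))   # 0.333: one-third of the way to 1 - 10^(-500)
```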
Now I presume you understand this arithmetic, so I agree that this is a distraction. In the same way, I think the simple mathematical arguments that you have been presenting are also a distraction. The real issue is that you apparently don’t believe that there exist outcomes with utilities in the range of 10^(750). Well, I am undecided on that question, because at this point I don’t know what “my” values look like in the limit of superintelligent extrapolation on galactic scales. (I like to think I’m pretty good at introspection, but I’m not that good!) But there’s no way I’m going to be convinced that my utility function necessarily has to be bounded without some serious argument going significantly beyond the fact that the consequences of an unbounded utility function seem counterintuitive to another human whose style of thought has already been demonstrated to be different from my own.
If you’ve got serious, novel arguments to offer for why a human-extracted utility function must be bounded, I’m quite willing to consider them, of course. But as of now I don’t have much evidence that you do have such arguments, because as far as I can tell, all you’ve said so far is “I can’t imagine anything with such high utility!”
Fair enough.
P.S. Given that we’ve apparently had protracted disagreements on two issues so far, I just wanted you to know that I’m not trying to troll you or anything (in fact, I hadn’t realized that you were the same person who had made the Amanda Knox post). I will try to keep in mind in the future that our thinking styles are different and that appeals to intuition will probably just result in frustration.
As an aside, if you wouldn’t pay him then the definition of utility implies that u($5) > 10^(-100) u(X), which implies that u(X), and therefore the entire utility function, is bounded.
This doesn’t actually imply that the entire utility function is bounded. It is still possible that u(Y) is infinite, where Y is something that is valued positively.
As an aside, we can now consider the possibility of Pascal’s Samaritan.
Assume a utility function such that u(Y) is infinite (and neutral with respect to risk). Further assume that you predict that $5 would increase your chance of achieving Y by 1/3^^^3. A Pascal’s Samaritan can offer to pay you $5 in exchange for the opportunity to give you a 90% chance of sending the entire universe into the hell state X. Do you take the $5?
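A minimal sketch of the decision problem this sets up, with IEEE infinity standing in for the infinite u(Y), a large finite number standing in for the hell state’s disutility (the scenario leaves it open whether that too is infinite), and a tiny placeholder probability in place of 1/3^^^3, which is far too small to represent:

```python
# Sketch: once u(Y) is infinite, any nonzero chance of Y swamps every finite consideration.
import math

u_Y       = math.inf   # stipulated infinite utility of achieving Y
u_hell    = -1e300     # stand-in finite disutility of the hell state X
p_Y_boost = 1e-300     # stand-in for the 1/3^^^3 increase in P(Y) that the $5 buys
p_hell    = 0.9        # the Samaritan's 90% chance of sending the universe to X

eu_take   = p_Y_boost * u_Y + p_hell * u_hell   # inf + (finite) = inf
eu_refuse = 0.0

print(eu_take > eu_refuse)   # True: expected utility says take the $5, despite the 90% risk
```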
From my reply to komponisto (incidentally, both you and he seem to be making the same objections in parallel, which suggests that I’m not doing a very good job of explaining myself, sorry):
Suppose that there is nothing so bad that you would pay $5 to stop it from happening with probability 10^(-100). Let X be a state of the universe. Then u(-$5) < 10^(-100) u(X), so u(X) > 10^(100) u(-$5). Since u(X) > 10^(100) u(-$5) for all X, u is bounded below.
Similarly, suppose that there is nothing so good that you would pay $5 to have a 10^(-100) chance of it happening. Then u($5) > 10^(-100) u(X) for all X, so u(X) < 10^(100) u($5), hence u is also bounded above.
As I said, at this point I am most interested in determining why we disagree,
The meaning of a phrase, primarily. And slightly about the proper use of an abstract concept.
A utility function should be a representation of my values. If my values are such that paying a mugger is the best option then I am glad to pay a mugger.
Suppose that you have somehow calculated that, with probability 10^(-100), the mugger will cause X to happen if you don’t pay him $5. Would you pay him? If you would pay him, then why?
If I were to pay him it would be because I happen to value not having a 10^(-100) chance of X happening more than I value $5.
As an aside, if you wouldn’t pay him then the definition of utility implies that u($5) > 10^(-100) u(X), which implies that u(X), and therefore the entire utility function, is bounded.
My utility function quite likely is bounded. Not because that is a way around Pascal’s mugging. Simply because that is what the arbitrary value system represented by this particular bunch of atoms happens to be.
Hm...it sounds like we agree on far more than I thought, then.
What I am saying is that my utility function is bounded because it would be ridiculous to be Pascal’s mugged, even in the hypothetical universe I created that disobeys komponisto’s priors. Put another way, I am simply not willing to seriously consider events at probabilities of, say, 10^(-10^(100)), because such events don’t happen. For this same reason, I have a hard time taking anyone seriously who claims to have an unbounded utility function, because they would then care about events that can’t happen in a sense at least as strong as the sense that 1 is not equal to 2.
Would you object to anything in the above paragraph? Thanks for bearing with me on this, by the way.
P.S. Am I the only one who is always tempted to write “mugged by Pascal” before realizing that this is comically different from being “Pascal’s mugged”?
Put another way, I am simply not willing to seriously consider events at probabilities of, say, 10^(-10^(100)), because such events don’t happen.
As far as I know, they do happen. To know that such a number cannot represent an altogether esoteric feature of the universe that can nevertheless be the legitimate subject of infinite value, I would need to know the smallest number that can be assigned to a quantum state.
(This objection is purely tangential. See below for significant disagreement.)
I have a hard time taking anyone seriously who claims to have an unbounded utility function, because they would then care about events that can’t happen in a sense at least as strong as the sense that 1 is not equal to 2.
That isn’t true. Someone can assign infinite utility to Australia winning the Ashes if that is what they really want. I’d think them rather silly but that is just my subjective evaluation, nothing to do with maths.
To know that such a number cannot represent an altogether esoteric feature of the universe that can nevertheless be the legitimate subject of infinite value, I would need to know the smallest number that can be assigned to a quantum state.
I think you are conflating quantum probabilities with Bayesian probabilities here, but I’m not sure. Unless you think this point is worth discussing further I’ll move on to your more significant disagreement.
Someone can assign infinite utility to Australia winning the Ashes if that is what they really want. I’d think them rather silly but that is just my subjective evaluation, nothing to do with maths.
Hm...I initially wrote a two-paragraph explanation of why you were wrong, then deleted it because I changed my mind. So, I think we are making progress!
I initially thought I accorded disdain to unbounded utility functions for the same reason that I accorded disdain to ridiculous priors. But the difference is that your priors affect your epistemic state, and in the case of beliefs there is only one right answer. On the other hand, there is nothing inherently wrong with being a paperclip maximizer.
I think the actual issue I’m having is that I suspect that most people who claim to have unbounded utility functions would have been unwilling to make the trades implied by this before reading about VNM utility / “Shut up and multiply”. So my objection is not that unbounded utility functions are inherently wrong, but that they cannot possibly reflect the preferences of a human.
I think the actual issue I’m having is that I suspect that most people who claim to have unbounded utility functions would have been unwilling to make the trades implied by this before reading about VNM utility / “Shut up and multiply”. So my objection is not that unbounded utility functions are inherently wrong, but that they cannot possibly reflect the preferences of a human.
On this I believe we approximately agree.