Trust in Bayes
Followup to: Beautiful Probability, Trust in Math
In Trust in Math, I presented an algebraic proof that 1 = 2, which turned out to be—surprise surprise—flawed. Trusting that algebra, correctly used, will not carry you to an absurd result is not a matter of blind faith. When we see apparent evidence against algebra’s trustworthiness, we should also take into account the massive evidence favoring algebra which we have previously encountered. We should take into account our past experience of seeming contradictions which turned out to be themselves flawed. Based on our inductive expectation that we will probably have a similar experience in the future, we look for a flaw in the contrary evidence.
This seems like a dangerous way to think, and it is dangerous, as I noted in “Trust in Math”. But, faced with a proof that 2 = 1, I can’t convince myself that it’s genuinely reasonable to think any other way.
The novice goes astray and says, “The Art failed me.”
The master goes astray and says, “I failed my Art.”
To get yourself to stop saying “The Art failed me”, it’s helpful to know the history of people crying wolf on Bayesian math—to be familiar with seeming paradoxes that have been discovered and refuted. Here an invaluable resource is “Paradoxes of Probability Theory”, Chapter 15 of E. T. Jaynes’s Probability Theory: The Logic of Science (available online).
I’ll illustrate with one of Jaynes’s examples:
If you have a conditional probability distribution P(X|C), the unconditional probability P(X) should be a weighted average of the various P(X|C), and therefore intermediate in value between the various P(X|C): somewhere between the minimum and the maximum P(X|C). That is: If you flip a coin before rolling a die, and the die is four-sided if the coin comes up heads, or ten-sided if the coin comes up tails, then (even without doing an exact calculation) you know that the compound probability of rolling a “1” lies in the range [0.1, 0.25]. (The exact value is 1⁄2 × 1⁄4 + 1⁄2 × 1⁄10 = 0.175.)
Now suppose a two-dimensional array, M cells wide and N cells tall, with positions written (i, j) with i as the horizontal coordinate and j as the vertical coordinate. And suppose a uniform probability distribution over the array: p(i, j) = 1/MN for all i, j. Finally, let X be the event that i < j. We’ll be asking about P(X).
If we think about just the top row—that is, condition on the information j=N—then the probability of X is p(i < N) = (N − 1)/M, or 1 if N > M.
If we think about just the bottom row, conditioning on the information j=1, then the probability of X is p(i < 1) = 0.
Similarly, if we think about just the rightmost column, conditioning on i=M, then the probability of X is p(j > M) = (N − M)/N, or 0 if M > N.
And thinking about the leftmost column, conditioning on i=1, the probability of X is p(j > 1) = (N − 1)/N.
So for the whole array, the probability of X must be between 0 and (N − 1)/M (by reasoning about rows), or between (N − M)/N and (N − 1)/N (by reasoning about columns).
This is actually correct, so no paradox so far. If the array is 5 × 7, then the probability of X on the top row is 1, and the probability of X on the bottom row is 0. The probability of X in the rightmost column is 2⁄7, and the probability of X in the leftmost column is 6⁄7. The probability of X over the whole array is 4⁄7, which obeys both constraints.
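A brute-force check of those numbers (a quick sketch in Python; the indexing convention is mine):

```python
# Verify the 5 x 7 example: i is the column (1..M), j is the row (1..N), X is "i < j".
M, N = 5, 7

def p_x(cond):
    cells = [(i, j) for i in range(1, M + 1) for j in range(1, N + 1) if cond(i, j)]
    return sum(1 for i, j in cells if i < j) / len(cells)

print(p_x(lambda i, j: j == N))   # top row:          1.0
print(p_x(lambda i, j: j == 1))   # bottom row:       0.0
print(p_x(lambda i, j: i == M))   # rightmost column: 2/7
print(p_x(lambda i, j: i == 1))   # leftmost column:  6/7
print(p_x(lambda i, j: True))     # whole array:      4/7
```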
But now suppose that the array is infinite. Reasoning about the rows, we see that, for every row, there is a finite number of points where i < j, and an infinite number of points where i >= j. So for every row, the probability of the event X must be 0. Reasoning about the columns, we see in every column a finite number of points where j <= i, and an infinite number of points where i < j. So for every column, the probability of the event X must be 1. This is a paradox, since the compound probability of X must be both a weighted mix of the probability for each row, and a weighted mix of the probability for each column.
If to you this seems like a perfectly reasonable paradox, then you really need to read Jaynes’s “Paradoxes of Probability Theory” all the way through.
In “paradoxes” of algebra, there is always an illegal operation that produces the contradiction. For algebraic paradoxes, the illegal operation is usually a disguised division by zero. For “paradoxes” of probability theory and decision theory, the illegal operation is usually assuming an infinity that has not been obtained as the limit of a finite calculation.
In the case above, the limiting probability of i < j depends on how M and N approach infinity (for instance, on the limiting ratio of M to N), so just assuming that M and N are “infinite”, without specifying the limiting process, naturally produces all sorts of paradoxes.
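To see the path-dependence concretely, here is a sketch (the growth paths are arbitrary choices of mine):

```python
# P(i < j) on a finite M x N array, computed along three different paths to infinity.
def p_lt(M, N):
    return sum(1 for i in range(1, M + 1) for j in range(1, N + 1) if i < j) / (M * N)

for k in (10, 100, 1000):
    print(p_lt(k, k), p_lt(k, 2 * k), p_lt(2 * k, k))
# Square arrays approach 1/2; N = 2M approaches 3/4; M = 2N approaches 1/4.
```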
It’s all too tempting to just talk about infinities, instead of constructing them. As Jaynes observes, this is a particularly pernicious habit because it may work 95% of the time and then lead you into trouble on the last 5% of occasions—like how the really deadly bugs in a computer program are not those that appear all the time, but those that only appear 1% of the time.
Apparently there was a whole cottage industry in this kind of paradox, where, assuming infinite sets, the marginal probability seemed to fall outside the range spanned by the conditional probabilities of some partition; this was called “nonconglomerability”. Jaynes again:
“Obviously, nonconglomerability cannot arise from a correct application of the rules of probability on finite sets. It cannot, therefore, occur in an infinite set which is approached as a well-defined limit of a sequence of finite sets. Yet nonconglomerability has become a minor industry, with a large and growing literature. There are writers who believe that it is a real phenomenon, and that they are proving theorems about the circumstances in which it occurs, which are important for the foundations of probability theory.”
We recently ran into a similar problem here on Overcoming Bias: A commenter cited a paper, “An Air-Tight Dutch Book” by Vann McGee, which purports to show that if your utility function is not bounded, then a Dutch Book can be constructed against you. The paper is gated, but Neel Krishnaswami passed me a copy. A summary of McGee’s argument can also be found in the ungated paper “Bayesianism, infinite decisions, and binding”.
Rephrasing somewhat, McGee’s argument goes as follows. Suppose that you are an expected utility maximizer and your utility function is unbounded in some quantity, such as human lives or proving math theorems. We’ll write $27 to indicate a quantity worth 27 units of utility; by the hypothesis of an unbounded utility function, you can always find some amount of fun that is worth at least 27 units of utility (where the reference unit can be any positive change in the status quo).
Two important notes are that, (1) this does not require your utility function to be linear in anything, just that it grow monotonically and without bound; and (2) your utility function does not have to assign infinite utility to any outcome, just ever-larger finite utilities to ever-larger finite outcomes.
Now for the seeming Dutch Book—a sequence of bets that (McGee argues) a Bayesian will take, but which produces a guaranteed loss.
McGee produces a fair coin, and proceeds to offer us the following bet: We lose $1 (one unit of utility) if the coin comes up “tails” on the first round, and gain $3 (three units of utility) if the coin comes up “heads” on the first round and “tails” on the second round. Otherwise nothing happens—the bet has no payoff. The probability of the first outcome in the bet is 1⁄2, so it has an expected payoff of -$.50; and the probability of the second outcome is 1⁄4, so it has an expected payoff of +$.75. All other outcomes have no payoff, so the net value is +$0.25. We take the bet.
Now McGee offers us a second bet, which loses $4 if the coin first comes up “tails” on the second round, but pays $9 if the coin first comes up “tails” on the third round, with no consequence otherwise. The probabilities of a fair coin producing a sequence that begins HT or HHT are respectively 1⁄4 and 1⁄8, so the expected values are -$1.00 and +$1.125. The net expectation is positive, so we take this bet as well.
Then McGee offers us a third bet which loses $10 if the coin first comes up “tails” on the third round, but gains $21 if the coin first comes up “tails” on the fourth round; then a bet which loses $22 if the coin shows “tails” first on round 4 and gains $45 if the coin shows “tails” first on round 5. Etc.
If we accept all these bets together, then we lose $1 no matter when the coin first shows “tails”. So, McGee says, we have accepted a Dutch Book. From which McGee argues that every rational mind must have a finite upper bound on its utility function.
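Before going on, here is a sketch that checks both halves of the setup numerically (the recurrence and helper names are my reconstruction from the bets listed above):

```python
from fractions import Fraction

# Bet k loses a_k if the first "tails" is on round k and gains b_k if it is on
# round k+1, with a_1 = 1, b_k = 2*a_k + 1, a_{k+1} = b_k + 1:
# the pairs (1, 3), (4, 9), (10, 21), (22, 45), ...
def bets(n):
    a, out = 1, []
    for _ in range(n):
        b = 2 * a + 1
        out.append((a, b))
        a = b + 1
    return out

# Expected value of holding the first n bets; P(first tails on round k) = 1/2^k.
def expected_value(n):
    return sum(-a * Fraction(1, 2 ** k) + b * Fraction(1, 2 ** (k + 1))
               for k, (a, b) in enumerate(bets(n), start=1))

print([str(expected_value(n)) for n in (1, 2, 3, 10)])  # 1/4, 3/8, 7/16, ... -> 1/2

# Actual payoff of holding the first n bets if the first "tails" is on round t <= n.
def payoff(t, n):
    return sum(-a if t == k else (b if t == k + 1 else 0)
               for k, (a, b) in enumerate(bets(n), start=1))

print([payoff(t, 100) for t in (1, 2, 3, 50)])  # -1, -1, -1, -1
```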
Now, y’know, there’s a number of replies I could give to this. I won’t challenge the possibility of the game (though that would be my typical response, as an infinite set atheist who never actually encounters an infinity): I can imagine living in a universe where McGee actually does have the ability to increase his resources exponentially, and so could actually offer me that series of bets.
But if McGee is allowed to deploy a scenario where the expected value of the infinite sequence does not equal the limit of the expected values of the finite sequences, then why should a good Bayesian’s decision in the infinite case equal the limit of the Bayesian’s decisions in the finite cases?
It’s easy to demonstrate that for every finite N, a Bayesian will accept the first N bets. It’s also easy to demonstrate that for every finite N, accepting the first N bets has a positive expected payoff. The decision in every finite scenario is to accept all N bets—from which you might say that the “limit” decision is to accept all offered bets—and the limit of the expected payoffs of these finite decisions goes to +$.50.
But now McGee wants to talk about the infinite scenario directly, rather than as a limiting strategy that applies to any one of a series of finite scenarios. Jaynes would not let you get away with this at all, but I accept that I might live in an unbounded universe and I might just have to shut up and deal with infinite games. Well, if so, the expected payoff of the infinite scenario does not equal the limit of the expected payoffs of the finite scenarios. One equals -$1, the other equals +$.50.
So there is no particular reason why the rational decision in the infinite scenario should equal the limit of the rational decisions in the finite scenarios, given that the payoff in the infinite scenario does not equal the limit of the payoffs in the finite scenarios.
And from this, McGee wants to deduce that all rational entities must have bounded utility functions? If it turns out that I live in an infinite universe, you can bet that there isn’t any positive real number such that I would decline to have more fun than that.
Arntzenius, Elga, and Hawthorne give a more detailed argument in “Bayesianism, Infinite Decisions, and Binding” that the concept of provable dominance only applies to finite option sets and not infinite option sets. If you show me a compound planning problem with a sub-choice X between options X1 and X2, such that for every possible compound plan I am better off taking X1 than X2, then this shows that a maximal plan must include X1 when the set of possible plans is finite. But when there is no maximal plan, no “optimal” decision—because there are an infinite number of possible plans whose upper bound (if any) isn’t in the set—then proving local dominance obviously can’t show anything about the “optimal” decision. See Arntzenius et al.’s section on “Satan’s Apple” for their full argument.
An even better version of McGee’s scenario, in my opinion, would use a different sequence of bets: -$1 on the first round versus +$6 on the second round; -$6 on the second round and +$20 on the third round; -$20 on the third round and +$56 on the fourth round. Now we’ve picked the sequence so that if you accept all bets up to the Nth bet, your expected value is $N.
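A quick check of this bookkeeping (sketch; the recurrence is my reconstruction from the pairs listed above):

```python
from fractions import Fraction

# Modified sequence: bet k loses a_k on round k, gains b_k on round k+1, with
# a_1 = 1, b_k = 2*a_k + 2^(k+1), a_{k+1} = b_k: the pairs (1, 6), (6, 20), (20, 56), ...
a, ev = 1, Fraction(0)
for k in range(1, 6):
    b = 2 * a + 2 ** (k + 1)
    ev += -a * Fraction(1, 2 ** k) + b * Fraction(1, 2 ** (k + 1))
    print(k, (a, b), str(ev))  # the running expected value ev equals k exactly
    a = b
```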
So really McGee’s argument can be simplified as follows: Pick any positive integer, and I’ll give you that amount of money. Clearly you shouldn’t pick 1, because 1 is always inferior to 2 and above. Clearly you shouldn’t pick 2, because it’s always inferior to 3 and above. By induction, you shouldn’t pick any number, so you don’t get any money. So (McGee concludes) if you’re really rational, there must be some upper bound on how much you care about anything.
(Actually, McGee’s proposed upper bound doesn’t really solve anything. Once you allow infinite times, you can be put into the same dilemma if I offer you $.50, but then offer to trade $.50 today for $.75 tomorrow, and then, tomorrow, offer to trade $.75 now for $.875 the next day, and so on. Even if my utility function is bounded at $1, this doesn’t save me from problems where the limit of the payoffs of the finite plans doesn’t seem to equal the payoff of the limit of the finite plans. See the comments for further arguments.)
The meta-moral is that Bayesian probability theory and decision theory are math: the formalism provably follows from axioms, and the formalism provably obeys those axioms. When someone shows you a purported paradox of probability theory or decision theory, don’t shrug and say, “Well, I guess 2 = 1 in that case” or “Haha, look how dumb Bayesians are” or “The Art failed me… guess I’ll resort to irrationality.” Look for the division by zero; or the infinity that is assumed rather than being constructed as the limit of a finite operation; or the use of different implicit background knowledge in different parts of the calculation; or the improper prior that is not treated as the limit of a series of proper priors… something illegal.
Trust Bayes. Bayes has earned it.
“If it turns out that I live in an infinite universe, you can bet that there isn’t any positive real number such that I would decline to have more fun than that.”
A function can be bounded without having a maximum.
If I have a bounded utility function, there’s some amount of utility such that I can’t have more than that amount of utility. (In fact there’s an infinite number of such amounts, one of which is the least upper bound.) If no such limit on fun exists, my utility function must not be bounded.
Steven, Eli said “there isn’t any positive real number...”, not “there isn’t any outcome...”.
Here’s a question for everyone:
As a human, I can’t introspect and look at my utility function, so I don’t really know if it’s bounded or not. If I’m not absolutely certain that it’s bounded, should I just assume it’s unbounded, since there is much more at stake in this case?
It’s a pity I consider my current utility function bounded; the statement “there’s no amount of fun F such that there isn’t a greater amount of fun G such that I would prefer a 100% chance of having fun F, to having a 50% chance of having fun G and a 50% chance of having no fun” would have been a catchy slogan for my next party.
other way around, I mean.
I notice I am confused.
I don’t understand. If your utility = 1 - e^(-fun), then your utility function is bounded, and yet “there isn’t any positive real number such that [you] would decline to have more fun than that”.
I certainly agree that in that case there are real-numbered utility values that are impossible to reach through any amount of fun, but you will never find yourself declining to have more fun, and so I don’t see how it’s relevant as an argument.
I was trying to use “fun” as a synonym for the amount of utility experienced in a given activity, I guess. Rolf Nelson put it better (modulo his correction); note also that you can substitute 53% and 52% for 100% and 50% in his example above, to avoid certainty effects.
Playing Sid Meier’s Alpha Centauri is something I find very fun. I don’t find work fun at all.
Do you really think that I won’t stop playing Alpha Centauri in order to go to work?
You’ve profoundly misunderstood McGee’s argument, Eliezer. The reason you need the expectation of the sum of an infinite number of random variables to equal the sum of the expectations of those random variables is exactly to ensure that choosing an action based on the expected value actually yields an optimal course of action.
McGee observes that if you have an infinite event space and unbounded utilities, there is a collection of random utility functions U1, U2, … such that E(U1 + U2 + …) != E(U1) + E(U2) + …. McGee then observes that if you restrict utilities to a bounded range, then in fact E(U1 + U2 + …) == E(U1) + E(U2) + …, which ensures that a series of choices based on the expected value always gives the correct result. In contrast, the other paper—which you apparently approve of—happily accepts that when E(U1 + U2 + …) != E(U1) + E(U2) + …, an agent can be Dutch booked, and defends this as still rational behavior.
Right now, you’re a “Bayesian decision theorist” who a) doesn’t believe in making choices based on expected utility, and b) accepts Dutch Books as rational. This is goofy.
Neel, just because a hack solves a problem doesn’t mean that that particular hack is The Solution and The Only Solution.
The problem here is that McGee is summing an infinite series and getting a value of −1 when the sums of the finite series approach positive infinity.
If you’re willing to concede this in the first place—and a lot of mathematicians will simply refuse to sum the series in the first place—then the obvious approach is to say that infinite decisions don’t have to approach the limit of finite decisions.
If you introduce an artificial bound on the utility function, as a hack, then you are simply implementing the wrong morality: A morality that will—at some point—trade off a 50% probability of having a decillion units of fun against a 49.99% probability of having 3^^^3 units of fun, which seems simply stupid to me.
All I need to do is say, “In infinite decisions whose payoffs are not the limit of the payoffs of a series of finite decisions, I’m going to keep my utility function (that is, make decisions according to my actual morality instead of some other morality) but I’m not necessarily going to make infinite decisions that look like the limit of my decisions in the finite cases.”
McGee’s dilemma does not have an “optimal” solution. No matter how many of McGee’s bets you take, you can always take one more bet and expect an even higher payoff. It’s like asking for the largest integer. There isn’t one, and there isn’t an optimal plan in McGee’s dilemma.
On day 1 I give you a dollar, but offer to trade it for $2 on day 2. On day 2 I offer to trade the $2 for $3 on day 3. If I let this continue forever, then there is no maximal plan, and the limiting strategy of the finite plans (always trading) never collects any money at all. So you can’t maximize; just pick a number.
Don’t tell me the solution is to bound my utility function. That doesn’t even solve anything. If your utility function has an upper bound at $1 then I can offer to trade you $.50 for $.75 the next day, and $.75 for $.875 the day after, and so on, and you’ve got exactly the same problem: the limit of the behavior for the best finite plans, does not yield good behavior in the infinite plan.
Added: Just noted that McGee’s original formulation used a scenario whose limiting payoff was $.50, not +infinity, and also that the formula I used in the original blog post was not consistent. This has been corrected. You can construct finite-payoff or infinite-payoff versions of McGee’s dilemma.
Fun fact: In the finite version of McGee’s dilemma, if I take all the odd-numbered bets in the sequence (the first bet $-1 versus $+3, the third bet $-10 versus $+21, etc.) the expected value of my bet approaches a limit of $+1/3, and if I take all the even numbered bets the expected value of my bets approaches a limit of $+1/6. If I take all the bets, this approaches a limit of $+1/2, but according to McGee has an actual expected value of $-1. Or in the infinite-expected-payoff version, McGee has +infinity + +infinity = −1. Never mind having the expectation of a sum of an infinite number of variables not equalling the sum of the expectations; here we have the expectation of the sum of two bets not equalling the sum of the expectations.
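A numerical check of those limits (sketch; the closed form 1/2^(k+1) for the k-th bet’s expected value is my derivation from the bet definitions above):

```python
from fractions import Fraction

# Expected value of bet k is 1/2^(k+1), so the odd-numbered bets sum toward
# 1/3, the even-numbered ones toward 1/6, and all bets together toward 1/2.
odd = sum(Fraction(1, 2 ** (k + 1)) for k in range(1, 60, 2))
even = sum(Fraction(1, 2 ** (k + 1)) for k in range(2, 60, 2))
print(float(odd), float(even), float(odd + even))  # ~1/3, ~1/6, ~1/2
```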
If McGee is allowed to do that—who knows, maybe time is infinite and so is physics—then I’m allowed to have a rational Bayesian’s infinite strategy not look like their finite strategy.
Infinities are thorny problems for any decision theory—Nick Bostrom has a large paper on this—but as said, using a bounded utility function doesn’t solve anything. I can still offer you an infinite series of swaps over infinite time that delays your payoff into the indefinite future, so that the upper bound of the plans is not in the set of plans and there is no optimal (non-dominated) decision.
The fact that one gets contradictions if one assumes that an infinity is given rather than constructing it as a limit is evidence in favor of “infinite set atheism.” For example, in Eliezer’s example from Jaynes, the real reason for the paradox is that it is completely impossible to pick a random integer from all integers using a uniform distribution: if you pick a random integer, on average lower integers must have a greater probability of being picked.
The most natural thing to conclude is that infinities cannot exist in reality. However, sometimes the most natural conclusion is false, so I’m not sure of this.
Interesting post, thanks for the Jaynes link. A related book which is a great read is Szekely’s Paradoxes in Probability Theory and Statistics.
I think the most intriguing paradoxes are the ones that experts cannot agree how to resolve. For instance, take the two-envelope paradox: you are presented with two envelopes, one of which has twice as much money as the other. You are told that the first envelope contains x dollars; which envelope should you choose? From expected value calculations, the other envelope is worth $1.25x, which is larger regardless of x. It turns out that the paradoxical “always pick the other one” solution comes out even if we introduce a proper prior on the amounts in the envelopes.
I’ve read a pretty good resolution to the two envelopes problem.
You have to have some prior distribution over the possible amounts of money in the envelopes.
The expected value of switching is equal to (1⁄2)x · P(I have the envelope with more money | I opened an envelope containing x dollars) + 2x · P(I have the envelope with less money | I opened an envelope containing x dollars). This means that, once you know what x is, if you think that you have less than a 2⁄3 chance of having the envelope with more money, you should switch.
According to what I read, if your prior is such that there is no finite X for which you would decide that you have less than a 2⁄3 chance of having the envelope with more money, then your expected value for x, before you learned what it was, was infinite—and if you were expecting an infinite amount of money, then of course any finite value is disappointing. (Note that having a proper prior doesn’t keep you from expecting an infinite value for x). So it doesn’t matter what you actually find in the envelope, once you find out what it is, you should switch—but there’s no reason to switch before you open at least one envelope.
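Here is a toy simulation of that analysis (the bounded prior and all names are my own illustrative choices, not from the resolution quoted above):

```python
import random

# Proper, bounded prior: the smaller amount s is uniform on {1,...,100};
# the envelopes hold s and 2s, and you open one of the two at random.
def p_have_larger(x):
    # Bayes with the uniform prior: "larger" requires s = x/2, "smaller" s = x.
    larger_possible = (x % 2 == 0) and (1 <= x // 2 <= 100)
    smaller_possible = 1 <= x <= 100
    return larger_possible / (larger_possible + smaller_possible)

def play(switch_rule):
    s = random.randint(1, 100)
    mine, other = random.choice([(s, 2 * s), (2 * s, s)])
    return other if switch_rule(mine) else mine

random.seed(0)
n = 200_000
for name, rule in [("never switch", lambda x: False),
                   ("always switch", lambda x: True),
                   ("switch iff P(larger|x) < 2/3", lambda x: p_have_larger(x) < 2 / 3)]:
    print(name, sum(play(rule) for _ in range(n)) / n)
# Blind switching gains nothing (~75.75 either way); conditioning on x (~94.6) does better.
```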
rstarkov wrote a nice discussion piece on the two envelopes problem: Solving the two envelopes problem. thomblake commented that the error most people make with this problem is treating the amounts of money in the envelopes as fixed values when calculating the expectation.
“the real reason for the paradox is that it is completely impossible to pick a random integer from all integers using a uniform distribution: if you pick a random integer, on average lower integers must have a greater probability of being picked”
Isn’t there a simple algorithm which samples uniformly from a list without knowing its length? Keywords: ‘reservoir sampling.’
Gray Area:
On an infinite stream of items, reservoir sampling could still not be used, because for the distribution to be uniform the complete set has to be evaluated. A program to pick a random integer this way would never terminate.
Eliezer, I’m curious about your reaction to Nick’s statement in the paper about infinities and ethics you linked to, namely “But it is definitely not reasonable to assume that we do not live in a canonically infinite cosmos; and that is all we need here. Any ethical theory that fails to cope with this empirical contingency must be rejected.”
Eliezer: Never mind having the expectation of a sum of an infinite number of variables not equalling the sum of the expectations; here we have the expectation of the sum of two bets not equalling the sum of the expectations.
If you have an alternating series which is conditionally but not absolutely convergent, the Riemann series theorem says that reordering its terms can change the result, or force divergence. So you can’t pull a series of bets apart into two series, and expect their sums to equal the sum of the original. But the fact that you assumed you could is a perfect illustration of the point; if you had a collection of bets in which you could do this, then no limit-based Dutch book is possible.
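For readers who want to see the rearrangement phenomenon concretely, a sketch:

```python
import math

# The alternating harmonic series 1 - 1/2 + 1/3 - ... converges to ln 2, but
# taking two positive terms per negative term converges to (3/2) ln 2 instead:
# rearranging a conditionally convergent series changes its sum.
def natural_order(n):
    return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

def rearranged(n):
    total, pos, neg = 0.0, 1, 2
    for _ in range(n):                 # 1 + 1/3 - 1/2 + 1/5 + 1/7 - 1/4 + ...
        total += 1 / pos + 1 / (pos + 2) - 1 / neg
        pos, neg = pos + 4, neg + 2
    return total

print(natural_order(10 ** 6), math.log(2))        # ~0.6931
print(rearranged(10 ** 6), 1.5 * math.log(2))     # ~1.0397
```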
Ensuring that this property holds necessarily restricts the possible shapes of a utility function. We need to bound the utility function to avoid St. Petersburg-style problems, but the addition of time adds another infinite dimension to the event space, so we need to ensure that expectations of infinite sums of random variables indexed by time are also equal to the sum of the expectations. For example, one familiar way of doing this is to assume a time-separable, discounted utility function. Then you can’t force an agent into infinitely delayed gratification, because there’s a bounded utility and a minimum, nonzero payment to delay the reward—at some point, you run out of the space you need to force delay.
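To make that mechanism concrete with a toy bound (my numbers, not from the thread): suppose per-period utility is bounded by c and the future is discounted by γ < 1 per period. A reward delivered at time t is then worth at most cγ^t, which goes to 0 as t grows, while accepting each further delay demands some minimum payment ε > 0; past the time T where cγ^T < ε, no promised future reward can compensate for one more delay, so infinitely delayed gratification cannot be forced.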
If you’re thinking that the requirement that expected utility actually works puts very stringent limits on what forms of utility functions you can use—you’re probably right. If you think that philosophical analyses of rationality can justify a wider selection of utility functions or preference relations—you’re probably still right. But the one thing you can’t do is to pick a function of the second kind and still insist that ordinary decision-theoretic methods are valid with it. Decision-theoretic methods require utility to form a proper random variable. If your utility function can’t satisfy this need, you can’t use decision-theoretic methods with it.
“Experienced utility” sounds to me like a category error. Utilities are equivalent to preferences, and you prefer A over B iff you are disposed (given what your mind is like, or what it would be like with extra knowledge, or whatever) to choose A over B. You can’t experience a disposition.
Rolf Nelson put it better (modulo his correction); note also that you can substitute 53% and 52% for 100% and 50% in his example above, to avoid certainty effects.
That’s a different intuition than the “declining to have more fun” one, and IMHO not nearly as obvious. An intuition in the opposite direction (which I think Rolf agrees with) is that once you reach giant tentacled squillions of units of fun, specifying when/where it happens takes just as much algorithmic complexity as making up a mind from scratch (or interpreting it from a rock).
As a human, I can’t introspect and look at my utility function, so I don’t really know if it’s bounded or not. If I’m not absolutely certain that it’s bounded, should I just assume it’s unbounded, since there is much more at stake in this case?
I don’t know, but it feels wrong. Similar issue: is anyone here more than (1 − 1/avogadro) certain that atoms don’t experience joy and suffering?
Eliezer, I’m curious about your reaction to Nick’s statement in the paper about infinities and ethics you linked to, namely “But it is definitely not reasonable to assume that we do not live in a canonically infinite cosmos; and that is all we need here. Any ethical theory that fails to cope with this empirical contingency must be rejected.”
In my line of work, this works out to: “If you don’t know for certain that you don’t live an infinite cosmos, don’t build a Friendly AI that will kersplode if you do.” So yes, I must professionally agree with Nick, though as an infinite set atheist, my life would be a lot simpler if I didn’t have to.
If you have an alternating series which is conditionally but not absolutely convergent, the Riemann series theorem says that reordering its terms can change the result, or force divergence. So you can’t pull a series of bets apart into two series, and expect their sums to equal the sum of the original. But the fact that you assumed you could
I didn’t assume I could. I was complaining about the fact that I couldn’t.
Neel, please justify the statement that a Bayesian must always choose the same option in the infinite case that would dominate if the options were finite. Arntzenius et al. refute this, in what seems to me a fairly rigorous fashion, but you said that they “accepted the Dutch book”—you didn’t say how. You keep insisting that a Bayesian must do this or that, and you are not justifying it, and it seems to me to be simply wrong.
I can have an unbounded utility function, live in an infinite universe, and simply not accept all of McGee’s bets. I could just accept the first 3^^^3 of them, and do pretty well.
I then fail to maximize, yes, but there is no optimal plan in this case, any more than there’s a largest integer. See also Arntzenius et al.’s case of Satan’s Apple.
If you just go on insisting that I, as a Bayesian, am committed to certain actions, without justifying your statement or explaining the flaw in Arntzenius’s math, then I may have to simply write you off as unresponsive, but hopefully not persuading anyone except yourself; unless another commenter indicates that they agree with you, in which case I will accept that your arguments are not so obviously unfinished as they appear. I do have to triage my time here, and it seems to me that you are not responding to my arguments, except to just flatly assert that a Bayesian must do certain things which neither I nor Arntzenius et al. think a Bayesian must do.
I am 100% certain that atoms don’t experience anything. And I’m equally sure that I won’t be bitten on the bum by a black swan either.
Yes, the inability to name a largest number seems to underlie the infinity utility paradoxes. Which is to say, they aren’t really paradoxes of utility unless one believes that “name a number and I’ll give you that many dollars” is also a paradox of utility. (Or ”...and I’ll give you that many units of utility”)
It’s true that the genie can always correct the wisher by pointing out that the wisher could have accepted one more offer, but in the straightforward “X dollars” example the genie can also always correct the wisher along the same lines by naming a larger number of dollars that he could have asked for.
It doesn’t prove that the wisher doesn’t want to maximize utility, it proves that the wisher cannot name a largest number, which isn’t about his preferences.
@Peter As a human, I can’t introspect and look at my utility function, so I don’t really know if it’s bounded or not. If I’m not absolutely certain that it’s bounded, should I just assume it’s unbounded, since there is much more at stake in this case?
This has been gnawing at my brain for a while. If the useful Universe is temporally unbounded, then utility arguably goes to aleph-null. Some MWI-type models and Ultimate-ensemble models arguably give you an uncountable number of copies of yourself; does that count as greater than aleph-null, or less than aleph-null (because we normalize to a measure [0, 1] that “looks” small)? What if someone claims “the Universe is spatially finite, but everyone has an inaccessible cardinal number of invisible copies of themselves”? Given my ignorance and confusion, maybe it makes sense to pick the X most credible utility measures, and give them each an “equal vote” in deciding what to do next at each stage, as a current interim measure. This horrendously muddled compromise is itself non-utilitarian and sub-optimal, but I personally don’t have a better answer at the moment.
I used to think of my utility function as unbounded, and then after Eliezer’s “Pascal’s Mugging” post I thought of it as probably bounded. This decision changed the way I live my life… not at all. However, I can understand that if you want to instruct an AGI, you may not be able to allow yourself the luxury of such blissful agnosticism.
@Stephen An intuition in the opposite direction (which I think Rolf agrees with) is that once you reach giant tentacled squillions of units of fun, specifying when/where it happens takes just as much algorithmic complexity as making up a mind from scratch (or interpreting it from a rock).
Alas I’m not completely sure what you’re talking about, the secret decoder ring says “fun = utility” but I think I require an additional cryptogram clue. Is this a UDASSA reference?
Gray Area—good question, thanks for bringing my attention to reservoir sampling. I found a compact description of it in Devroye’s “Non-Uniform …”, and for sampling just 1 integer x it looks as follows:
At step 1, let x=1
At step k, let x=k with probability 1/k
Since sum_i 1/i diverges, x will never stop growing (it gets replaced infinitely often).
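For concreteness, here is that procedure as a runnable sketch:

```python
import random

# 1-element reservoir sampling: after k items, each of them is the kept
# sample with probability exactly 1/k.
def reservoir_sample(stream):
    x = None
    for k, item in enumerate(stream, start=1):
        if random.random() < 1 / k:   # replace the sample with probability 1/k
            x = item
    return x  # uniform over the stream, but only defined once the stream ends

print(reservoir_sample(range(1, 101)))  # works for any finite stream
# On an infinite stream the loop never terminates; since sum(1/k) diverges,
# the sample is replaced infinitely often and never settles.
```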
pdf23ds—I think you can use reservoir sampling for sampling from infinite streams; the interesting question is when it works. For instance, on an infinite stream of IID bits, 1-element reservoir sampling converges after 1 step. My intuition is that it works whenever the stream has finite entropy and a stationary Markov property.
Suhhhhhweet! I am so taking that.
I was just looking for the money shot from the Jaynes paper EY refers to, and one of the links brought me here. Long time no see.
Since I’m here, here’s the money shot:
ET Jaynes—CHAPTER 15 PARADOXES OF PROBABILITY THEORY
How to Mass Produce Paradoxes
Having examined a few paradoxes, we can recognize their common feature. Fundamentally, the procedural error was always failure to obey the product and sum rules of probability theory. Usually, the mechanism of this was careless handling of infinite sets and limits, sometimes accompanied also by attempts to replace the rules of probability theory by intuitive ad hoc devices like B2’s ‘reduction principle’.
Indeed, paradoxes caused by careless dealing with infinite sets or limits can be mass produced by the following simple procedure:
(1) Start from a mathematically well-defined situation, such as a finite set or a normalized probability distribution or a convergent integral, where everything is well behaved and there is no question about what is the correct solution.
(2) Pass to a limit—infinite magnitude, infinite set, zero measure, improper pdf, or some other kind—without specifying how the limit is approached.
(3) Ask a question whose answer depends on how the limit was approached.
...
Our conclusion based on some forty years of mathematical efforts and experience with real problems is that, at least in probability theory, an infinite set should be thought of only as the limit of a specific (i.e. unambiguously specified) sequence of finite sets. Likewise, an improper pdf has meaning only as the limit of a well-defined sequence of proper pdfs. The mathematically generated paradoxes have been found only when we tried to depart from this policy by treating an infinite limit as something already accomplished, without regard to any limiting operation. Indeed, experience to date shows that almost any attempt to depart from our recommended ‘finite sets’ policy has the potentiality for generating a paradox, in which two equally valid methods of reasoning lead us to contradictory results.
*****
David Wolpert of “No Free Lunch” Theorems and Stacked Generalization had something similar, a Declaration of Independence from Infinite Sets, roughly “This works for finite sets. The extension to infinite sets is left as an exercise to the interested reader.”
------
I protest against the use of infinite magnitude as something accomplished, which is never permissible in mathematics. Infinity is merely a figure of speech, the true meaning being a limit.
-- C. F. Gauss