A plan for Pascal’s mugging?
The idea is to compare not the results of actions, but the results of decision algorithms. The question the agent should ask itself is this:
“Suppose everyone[1] who runs the same thinking procedure as I do uses decision algorithm X. What utility would I get at the 50th percentile (not: what expected utility would I get) by the time my life is finished?”
Then, he should of course look for the X that maximizes this value.
Now, if you allow an arbitrary Turing-complete “decision algorithm”, this heads into an infinite loop. But suppose that a “decision algorithm” is defined as a huge table pairing lots of different possible situations with the appropriate outputs.
Let’s see what results such a thing should give:
If the agent has the opportunity to play a gamble, the probabilities involved are not small, and he expects to be allowed to play many gambles like this in the future, then he should decide exactly as if he were maximizing expected utility: after many such decisions, he will see a positive utility difference at the 50th percentile if and only if his expected utility from playing each gamble is positive.
However, if Pascal’s mugger comes along, he will decline: the total probability of living in a universe where people like this mugger ought to be taken seriously is small. In the probability distribution over total utility at the end of the agent’s lifetime, the possibility of getting tortured shifts the 50th percentile only very slightly, much less than the possibility of losing $5.
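Here is a minimal sketch of that rule in Python (all gamble numbers are my own illustrative stand-ins, not from the post): score a candidate policy by the median of many simulated lifetime totals. Repeated positive-EV gambles raise the median, while paying the mugger only lowers it, despite the mugging’s nominally positive expected value.

```python
import random
import statistics

def median_lifetime_utility(policy, n_lives=2001, seed=0):
    """Median total utility over simulated lives under a fixed policy.
    policy(kind) -> True means 'take gambles of that kind'."""
    rng = random.Random(seed)
    totals = []
    for _ in range(n_lives):
        total = 0
        # A lifetime of 1,000 ordinary gambles: +2 or -1 at 50/50 (EV +0.5 each).
        if policy("ordinary"):
            for _ in range(1000):
                total += 2 if rng.random() < 0.5 else -1
        # One Pascal's mugging: pay 5 for a 1e-9 chance of 3e12 utils (EV ~ +2995).
        if policy("mugging"):
            total += 3e12 if rng.random() < 1e-9 else -5
        totals.append(total)
    return statistics.median(totals)

print(median_lifetime_utility(lambda kind: kind == "ordinary"))  # ~ +500
print(median_lifetime_utility(lambda kind: True))                # ~ +495: paying the mugger hurts the median
```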
The reason humans intuitively decline to give money to the mugger might be similar: they imagine not the expected utility of each choice, but the typical outcome of giving the mugger some money versus declining to.
[1] I say this to make agents of the same type cooperate in Prisoner’s-Dilemma-like situations.
What happens if you’re using this method and you’re offered a gamble with a 49% chance of gaining 1,000,000 utils and a 51% chance of losing 5 utils (if you don’t take the deal, you gain and lose nothing)? Isn’t the “typical outcome” here a loss, even though we might really, really want to take the gamble? Or have I misunderstood what you propose?
Depending on the rest of your utility distribution, that is probably true. Note, however, that an additional 10^6 utility in the right half of the distribution will change the median outcome of your “life”: if 10^6 is larger than all the other utility you could ever receive, and you add a 49% chance of receiving it, then the 50th percentile utility afterwards should look like the 98th percentile utility before.
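A quick numeric check of that percentile shift (the base distribution of “everything else in life” is an illustrative uniform spread over 0..99 here):

```python
import statistics

# Illustrative "rest of life" utility: 100 equally likely outcomes, 0..99.
base = list(range(100))

# Mix in the gamble: a 51% branch that loses 5 and a 49% branch that wins 10**6.
mixed = sorted([u - 5 for u in base] * 51 + [u + 10**6 for u in base] * 49)

# The 50th percentile of the mixed distribution falls just inside the losing
# branch, at roughly its 98th percentile (50/51 ~ 0.98), as claimed above.
print(statistics.quantiles(mixed, n=100)[49])  # 93.0
print(statistics.quantiles(base, n=100)[97])   # ~97.98; minus the 5 lost, ~93
```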
Could you rephrase this somehow? I’m not understanding it. If you actually won the bet and got the extra utility, your median utility would be higher, but you wouldn’t take the bet, because your median utility is lower if you do.
In such a case, the median outcome of all agents will be improved if every agent with the option to do so takes that offer, even if they are assured that it is a once-per-lifetime offer (because presumably there is variance of more than 5 utils between agents).
But the median outcome is losing 5 utils?
Edit: Oh, wait! You mean the median total utility after some other stuff happens (with a variance of more than 5 utils)?
Suppose we have 200 agents, 100 of which start with 10 utils, the rest with 0. After taking this offer, we have 51 with −5, 51 with 5, 49 with 10000, and 49 with 10010. The median outcome would be a loss of −5 for half the agents, a gain of 5 for half, but only the half that would lose could actually get that outcome…
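Checking those medians directly (populations exactly as described above, with the 51/49 split taken as resolving in exact proportion):

```python
import statistics

before = [10] * 100 + [0] * 100                              # the 200 agents
after = [-5] * 51 + [5] * 51 + [10000] * 49 + [10010] * 49   # after the offer

print(statistics.median(before))  # 5.0
print(statistics.median(after))   # 5.0: the population median is unchanged,
# even though each individual agent's median result from the offer is -5.
```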
And what do you mean by “the possibility of getting tortured will manifest itself only very slightly at the 50th percentile”? I thought you were restricting yourself to median outcomes, not distributions? How do you determine the median distribution?
I don’t. I didn’t write that.
Your formulation requires that there be a single, high-probability event that contributes most of the utility an agent has the opportunity to get over its lifespan. In situations where this is not the case (e.g. real life), the decision agent in question would choose to take all opportunities like that.
The closest real-world analogy I can draw to this is the decision of whether or not to start a business. If you fail (and there is a slightly greater than 50% chance you will), you are likely to be in debt for quite some time. If you succeed, you will be very rich. This is not quite a perfect analogy, because you will have more than one chance in your life to start a business, and the outcomes of business ownership are not orders of magnitude larger than the other outcomes in your life. However, it is much closer than the “51% chance to lose $5, 49% chance to win $10000” gamble that your example intuitively brings to mind.
Ah! Sorry for the mixed-up identities. Likewise, I didn’t come up with that “51% chance to lose $5, 49% chance to win $10000” example.
But, ah, are you retracting your prior claim about a variance of greater than 5? Clearly this system doesn’t work on its own, though it still looks like we don’t know A) how decisions are made using it or B) under what conditions it works. Or in fact C) why this is a good idea.
Certainly for some distributions of utility, if the agent knows the distribution of utility across many agents, it won’t make the wrong decision on that particular example by following this algorithm. I need more than that to be convinced!
For instance, it looks like it’ll make the wrong decision on questions like “I can choose to 1) die here quietly, or 2) go get help, which has a 1/3 chance of saving my life but will be a little uncomfortable.” The utility of surviving presumably swamps the rest of the utility function, right?
Ah, it appears that I’m mixing up identities as well. Apologies.
Yes, I retract the “variance greater than 5”. I think it would have to be variance of at least 10,000 for this method to work properly. I do suspect that this method is similar to decision-making processes real humans use (optimizing the median outcome of their lives), but when you have one or two very important decisions instead of many routine decisions, methods that work for many small decisions don’t work so well.
If, instead of optimizing for the median outcome, you optimized for the average of outcomes within 3 standard deviations of the median, I suspect you would come up with a decision outcome quite close to what people actually use (ignoring very small chances of very high risk or reward).
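One way to make that heuristic concrete (the exact rule, and the choice to renormalize the kept probabilities, are my guesses at what is intended):

```python
def dist_mean(dist):
    """dist is a list of (probability, value) pairs."""
    return sum(p * v for p, v in dist)

def dist_median(dist):
    """Median of a discrete distribution: walk the CDF in value order."""
    acc = 0.0
    for p, v in sorted(dist, key=lambda pv: pv[1]):
        acc += p
        if acc >= 0.5:
            return v

def dist_sd(dist):
    m = dist_mean(dist)
    return sum(p * (v - m) ** 2 for p, v in dist) ** 0.5

def clipped_value(dist):
    """Mean over outcomes within 3 standard deviations of the median."""
    med, sd = dist_median(dist), dist_sd(dist)
    kept = [(p, v) for p, v in dist if abs(v - med) <= 3 * sd]
    return sum(p * v for p, v in kept) / sum(p for p, _ in kept)

# The 49%/51% gamble survives the clipping and is accepted...
print(clipped_value([(0.49, 10**6), (0.51, -5)]))     # ~ +489,997: take it
# ...while a Pascal's-mugging payoff lies far outside 3 sd and is ignored.
print(clipped_value([(1e-9, 3e12), (1 - 1e-9, -5)]))  # -5: decline
```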
This all seems very sensible and plausible!
A bounded utility function, on which increasing years of happy life (or money, or whatever) give only finite utility in the infinite limit, does not favor taking vanishing probabilities of immense payoffs. It also preserves normal expected utility calculations so that you can think about 90th percentile and 10th percentile, and lets you prefer higher payoffs in probable cases.
Basically, this “median outcome” heuristic looks like just a lossy compression of a bounded utility function’s choice outputs, subject to new objections like APMason’s. Why not just go with the bounded utility function?
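For comparison, a minimal sketch of the bounded-utility alternative (the tanh form and the scale are my own illustrative choices, not anything from the comment):

```python
import math

def bounded_u(utility_change, scale=1000.0):
    """A bounded utility function: approaches +1/-1 in the limits."""
    return math.tanh(utility_change / scale)

def expected_u(dist):
    return sum(p * bounded_u(v) for p, v in dist)

# An ordinary gamble still looks better than doing nothing...
print(expected_u([(0.5, 200), (0.5, -100)]))       # ~ +0.049: take it
# ...but the mugging is declined: even a payoff worth nearly the full
# bound of 1 cannot offset 1e-9 odds against a near-certain small loss.
print(expected_u([(1e-9, 3e12), (1 - 1e-9, -5)]))  # ~ -0.005: decline
```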
I want it to be possible to have a very bad outcome: if I can play a lottery that costs 1 utilon, pays off 10^7, and has a winning chance of 10^-6, and if I can play this lottery enough times, I want to play it.
“Enough times” to make it >50% likely that you will win, yes? Why is this the correct cutoff point?
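Taking that reading, the cutoff for the lottery above is straightforward to compute (using 1 − (1−p)^n ≈ 1 − e^(−pn)):

```python
import math

p = 1e-6  # win chance per play; each play costs 1 utilon, pays 10**7

# A win becomes more likely than not once n exceeds ln(2)/p plays:
n_star = math.ceil(math.log(2) / p)
print(n_star)          # 693148

# At that point the median outcome is roughly one win minus the total cost:
print(10**7 - n_star)  # ~ +9.3 million utilons at the median
```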
Yes, that is a good insight. I’ll rephrase it to perhaps make it clear to a somewhat different set of people. If your strategy is to have a good median outcome for your life, you will still get to bet on long shots with high payoffs, as long as you expect to be offered a lot of those bets. The fewer bets of a certain type you expect to be offered, the more likely winning must be for you to take one, even if the “expected” payout on these is very high.
A quantification of this concept in somewhat simple cases was done by John Kelly and is called the Kelly criterion. Kelly asked a question: given that you have finite wealth, how do you decide how much to bet on a given offered bet in order to maximize the rate at which your expected wealth grows? Kelly’s criterion, if followed, also has the side effect of ensuring you never go completely broke, but in a world of minimum bet sizes, you might go broke enough to not be allowed to play anymore.
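For reference, the textbook Kelly fraction for a simple binary bet (this formula is standard, not something derived in this thread):

```python
def kelly_fraction(p, b):
    """Fraction of bankroll to wager on a bet won with probability p,
    paying b-to-1 net odds. A negative result means: don't bet."""
    return p - (1 - p) / b

# E.g. a fair coin that returns three times the wager on a win (net odds 2):
print(kelly_fraction(0.5, 2))  # 0.25: bet a quarter of the bankroll
```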
Of course, all betting strategies where you are betting against other presumed rational actors require you to be smarter, or at least more correct, than the people you are betting against in order to win. In Kelly’s calculation, the size of your bet depends on both the offered odds and the “true” odds. So how do you determine the true odds? Well, that is left as an exercise for the reader!
And so it goes with Pascal’s muggings. As far as my study has taken me, I know of no way to reliably estimate whether the outcomes offered in Pascal’s muggings have probability one in a million, one in a google, one in a googleplex, or one in 3^^^3. And yet the “correct” amount to bet using the Kelly criterion will vary by as big a factor as those probability estimates vary from one another.
There is also the result that well-known cognitive biases will cause you to get tiny probabilities wrong by many orders of magnitude, without properly estimating your probable error on them. For any given problem, there is some probability estimate below which all further attempts to refine the estimate are in the noise: the probability is “essentially zero.” But all the bang in constantly revisiting these scenarios comes from the human biases that let us think that, because we can state a number like 1 in a million or 1 in a google or 1 in 3^^^3, we must be able to use it meaningfully in some probabilistic calculation.
If you are of the bent that hypotheses involving such small probabilities should be empirically checked before you start believing the results of these calculations, it may take a few lifetimes of the universe (or perhaps a google lifetimes of the universe) before you have enough evidence to determine whether a calculation involving a number like 1 in a google means anything at all.
Googol. Likewise, googolplex.
The Kelly criterion doesn’t maximize expected wealth, it maximizes expected log wealth, as the article you linked mentions:
Suppose that I can make n bets, each time wagering any proportion of my bankroll that I choose, and then getting three times the wagered amount if a fair coin comes out Heads, and losing the wager on Tails. Expected wealth is maximized if I always bet the entire bankroll, giving an expected wealth of (initial bankroll) × 3^n × 2^-n (the probability of all Heads). The Kelly criterion trades off from that maximum expected wealth in favor of log wealth.
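Checking those numbers for this coin (a small sketch; with p = 0.5 and net odds of 2, the Kelly fraction works out to 0.25):

```python
from math import log

n = 20               # number of bets (illustrative)
p, b = 0.5, 2.0      # fair coin; a win returns three times the wager
f = p - (1 - p) / b  # Kelly fraction = 0.25

# All-in bettor: expected wealth grows 1.5x per bet, but the chance of
# being solvent at the end halves with every bet.
print(1.5 ** n)      # expected wealth multiple: ~3325
print(0.5 ** n)      # probability of any wealth at all: ~9.5e-7

# Kelly bettor: wealth multiplies by 1.5 on a win, 0.75 on a loss, and
# expected log wealth grows every bet (the all-in bettor's is -infinity):
print(p * log(1 + b * f) + (1 - p) * log(1 - f))  # ~ +0.059 per bet
```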
A log-wealth utility function values gains less, but it also values losses much more, with insane implications at the extremes. With log utility, multiplying wealth by 1,000,000 has the same marginal utility whatever your wealth, and dividing wealth by 1,000,000 has the negative of that utility. Consider these two gambles:
Gamble 1) Wealth of $1 with certainty.
Gamble 2) Wealth of $0.00000001 with 50% probability, wealth of $1,000,000 with 50% probability.
Log utility would favor Gamble 1, but for humans Gamble 2 is clearly better; there is very little difference for us between total wealth levels of $1 and a millionth of a cent.
Worse, consider these gambles:
Gamble 3) Wealth of $0.000000000000000000000000001 with certainty.
Gamble 4) Wealth of $1,000,000,000 with probability (1-1/3^^^3) and wealth of $0 with probability 1/3^^^3.
Log utility favors Gamble 3, since it assigns $0 wealth infinite negative utility and will sacrifice any finite gain to avoid it. But for humans Gamble 4 is vastly better, and a 1/3^^^3 chance of bankruptcy is negligibly worse than wealth of $1. Every day humans drive to engage in leisure activities, eat pleasant but not maximally healthful foods, go white-water rafting, and otherwise accept small (1 in 1,000,000, not 1 in 3^^^3) probabilities of death for local pleasure and consumption.
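The four gambles make the point numerically (log base 10 throughout, and taking Gamble 3’s wealth as $1e-27):

```python
from math import log10

def eu_log(dist):
    """Expected log10 utility of a gamble over positive wealth levels."""
    return sum(p * log10(w) for p, w in dist)

print(eu_log([(1.0, 1.0)]))               # Gamble 1:   0.0
print(eu_log([(0.5, 1e-8), (0.5, 1e6)]))  # Gamble 2:  -1.0 (log prefers Gamble 1)
print(eu_log([(1.0, 1e-27)]))             # Gamble 3: -27.0
# Gamble 4 includes wealth $0: log10(0) raises ValueError (negative infinity
# in exact terms), so log utility rejects it regardless of the upside.
```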
This is not my utility function. I have diminishing utility over a range of wealth levels, which log utility can represent, but log utility weights losses around zero too highly, and it still buys a 1 in 10^100 chance of $3^^^3 in exchange for half my current wealth if no higher-EV bets are available, as in Pascal’s Mugging.
Abuse of a log utility function (chosen originally for analytical convenience) is what led Martin Weitzman astray in his “Dismal Theorem” analysis of catastrophic risk, suggesting that we should pay any amount to avoid zero world consumption (and not on astronomical waste grounds or the possibility of infinite computation or the like, just considering the limited populations Earth can support using known physics).
The original justification for the Kelly criterion isn’t that it maximizes a utility function that’s logarithmic in wealth, but that it provides a strategy that, in the infinite limit, does better than any other strategy with probability 1. This doesn’t mean that it maximizes expected utility (as your examples for linear utility show), but it’s not obvious to me that the attractiveness of this property comes mainly from assigning infinite negative value to zero wealth, or that using the Kelly criterion is a similar error to the one Weitzman made.
Yes: if we have large populations of “all-in bettors” and Kelly bettors, then as the number of bets increases, the all-in bettors’ lead in total wealth increases exponentially, while the probability of an all-in bettor being ahead of a Kelly bettor falls exponentially. And as you go to infinity, the wealth multiplier of the all-in bettors goes to infinity, while the probability of an all-in bettor leading a Kelly bettor goes to zero. And that was the originally cited reasoning.
Now, one might be confused by the “beats any other constant bankroll allocation (but see the bottom paragraph) with probability 1” and think that it implies “bettors with this strategy will make more money on average than those using other strategies,” as it would in a finite case if every bettor using one strategy did better than any bettor using any other strategy.
But absent that confusion, why favor probability of being ahead over wealth unless one has an appropriate utility function? One route is log utility (for which Kelly is optimal), and I argued against it as psychologically unrealistic, but I agree there are others. Bounded utility functions would also prefer the Kelly outcome to the all-in outcome in the infinite limit, and are more plausible than log utility.
Also, consider strategies that don’t allocate a constant proportion in every bet, e.g. first do an all-in bet, then switch to Kelly. If the first bet has a 60% chance of tripling wealth and a 40% chance of losing everything, then the average and total wealth of these mixed-strategy bettors will beat the Kelly bettors for any number of bets in a big population (and their median wealth too, over short horizons). These don’t necessarily come to mind when people hear loose descriptions of Kelly.
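A quick simulation of that comparison over a short horizon (my own illustrative sketch; with p = 0.6 and net odds of 2, the Kelly fraction is 0.6 − 0.4/2 = 0.4):

```python
import random
import statistics

rng = random.Random(0)

def final_wealth(first_all_in, n_bets=4):
    """Wealth after n_bets of the 60%-to-triple bet described above."""
    w = 1.0
    for i in range(n_bets):
        f = 1.0 if (first_all_in and i == 0) else 0.4  # all-in once, then Kelly
        w *= (1 + 2 * f) if rng.random() < 0.6 else (1 - f)
    return w

kelly = [final_wealth(False) for _ in range(100000)]
mixed = [final_wealth(True) for _ in range(100000)]

print(statistics.fmean(kelly), statistics.median(kelly))  # ~3.0, ~1.17
print(statistics.fmean(mixed), statistics.median(mixed))  # ~4.1, ~1.94: mixed wins both here
```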
Sure, I don’t see anything here to disagree with.
Please enlighten a poor physicist. You write that the Kelly criterion “maximizes expected log wealth” rather than expected wealth.
I thought the log function operating on real positive numbers was real and monotonically increasing with wealth.
I thought wealth for the purposes of the wikipedia article and Kelly criterion calculations was real and positive.
So how can something which is said to maximize log(wealth) not also be said to maximize wealth with identical meaning?
Seriously, if there is some meaningful sense in which something that maximizes log(wealth) does not also maximize wealth, I am at a loss to even guess what it is and would appreciate being enlightened.
I gave several examples in my comment, but here’s another with explicitly calculated logs:
1) $100 with certainty. Log base 10 is 2. So expected log wealth 2, expected wealth $100.
2) 50% chance of $1, for a log of 0. 50% chance of $1,000 with a log of 3. Expected log wealth is therefore ((0+3)/2)=1.5, and expected wealth is ($1+$1000)/2=$500.5.
1) has higher expected log wealth, but 2) has higher expected wealth.
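The same calculation in a few lines, which also locates the source of the confusion: log is monotonic, so it preserves the ranking of individual wealth levels, but expectation and log don’t commute (Jensen’s inequality: E[log W] ≤ log E[W]):

```python
from math import log10

# Gamble 1: $100 with certainty.
elog1, ew1 = log10(100), 100.0
# Gamble 2: 50% chance of $1, 50% chance of $1,000.
elog2 = 0.5 * log10(1) + 0.5 * log10(1000)
ew2 = 0.5 * 1 + 0.5 * 1000

print(elog1, ew1)  # 2.0 100.0   <- higher expected log wealth
print(elog2, ew2)  # 1.5 500.5   <- higher expected wealth
```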