A log utility function says you should play this game over and over again, but you will most definitely lose all of your money by playing it (with the slim potential for wonderful returns).
No, one thing that absolutely will not happen is losing all your money. You literally can’t.
Though yes, the conclusion that utility is not logarithmic in money without restriction does hold. If on the second gamble you “win” 10^2174 dollars (as the formula implies), what value does that have? At best you’re now going to jail for violating some very serious currency laws, at worst you get it in gold or something and destroy the universe with its gravitational pull. Somewhere in the middle, you destroy the world economy and make a lot of people very unhappy, and probably murderously angry at you.
In no circumstance are you actually going to be able to benefit from “winning” 10^2174 dollars. Even if you somehow just won complete control over the total economic activity of Earth, that’s probably not worth more than 10^14 dollars and so you should reject this bet.
But since this is a ridiculous hypothetical in the first place, what if it’s actually some God-based currency that just happens to coincide with Earth currency for small amounts, and larger quantities do in fact somehow continue to have unbounded utility? To the point where a win on the second bet gives you some extrapolation of the benefits of 10^2160 planets’ worth of economic output, and wins on the later bets are indescribably better still? Absolutely take those bets!
I recommend Ole Peters’ papers on the topic. That way you won’t have to construct your epicycles upon the epicycles commonly known as utility calculus.
We are taught to always maximize the arithmetic mean
By whom?
The probable answer is: By economists.
Quite simply: they are wrong. Why?
That’s what ergodicity economics tries to explain.
In brief, economics typically assumes, wrongly, that the average over time can be substituted for the average over an ensemble.
Ergodicity economics shows that there are some +EV bets that do not pay off for the individual.
For example, you playing a bet of the above type 100 times is assumed to be the same as 100 people each betting once.
This is simply wrong in the general case. For a trivial example, if there is a minimum bet, you can simply go bankrupt before completing 100 games.
Interestingly, however, if 100 people each bet once and then redistribute their wealth afterwards, the group as a whole is better off than if each had kept their own winnings. That is why insurance works.
And importantly, it is exactly why cooperation among humans exists. Cooperation that, according to economists, is irrational and shouldn’t even exist.
Anyway I’m butchering it. I can only recommend Ole Peters’ papers
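The ensemble/time distinction can be sketched in a few lines of Python. The gamble’s numbers (up 50% on heads, down 40% on tails) are Peters’ standard illustration, not something from this thread, and the pooling step at the end is my sketch of the redistribution point:

```python
import random
random.seed(0)

# Peters-style multiplicative gamble (illustrative numbers): each round
# wealth rises 50% on heads, falls 40% on tails. Per-round arithmetic EV
# is 0.5*1.5 + 0.5*0.6 = 1.05 > 1 (a "+EV bet"), but the time-average
# growth rate is 0.5*ln(1.5) + 0.5*ln(0.6) ≈ -0.053 < 0.
UP, DOWN = 1.5, 0.6

def flip():
    return UP if random.random() < 0.5 else DOWN

# Ensemble average: 100,000 players, one round each -> close to 1.05.
ensemble = sum(flip() for _ in range(100_000)) / 100_000

# Time average: one player, 10,000 rounds -> wealth almost surely decays.
solo = 1.0
for _ in range(10_000):
    solo *= flip()

# Cooperation: two players pool and split their wealth every round,
# which lifts each one's long-run growth rate (the insurance effect).
a = b = 1.0
for _ in range(10_000):
    a, b = a * flip(), b * flip()
    a = b = (a + b) / 2

print(ensemble, solo, a)
```

The ensemble mean lands near 1.05 while the lone repeated player decays toward zero; the pooled players decay far more slowly than the loner, which is the redistribution result in miniature.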
expected value return of 2 * .51 + 0 * .49 = 1.02. It’s clear that if you play this game long enough, you will wind up with nothing.
That’s only clear if you define “long enough” in a perverse way. For any finite sequence of bets, this is positive value. Read SBF’s response more closely—maybe you have an ENORMOUSLY valuable existence.
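Both claims can be made precise at once, assuming (my reading of the quoted game) that you stake your entire wealth each round, doubling with probability 0.51 and losing everything otherwise:

```python
# All-in repetitions of the quoted game (assumption: each round you stake
# everything; with p = 0.51 wealth doubles, otherwise it goes to 0).
# EV after n rounds is 1.02**n (unbounded), while the chance of not yet
# being ruined is 0.51**n (vanishing): "positive value for any finite
# sequence" and "long enough means nothing" are both true.
for n in (10, 100, 1000):
    print(n, 1.02 ** n, 0.51 ** n)
```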
tl;dr: it depends on whether utility is linear or sublinear in aggregation. Either way, you have to accept some odd conclusions.
For most resources and human-scale gambling, the units are generally assumed to have declining marginal value, most often modeled as logarithmic utility. In that case, you shouldn’t take the bet, as log(2) * .51 + log(0) * .49 is negative infinity. But if you’re talking about multi-universe quantities of lives, it’s not obvious whether their value aggregates linearly or logarithmically. Is a net new happy person worth less than, or exactly the same as, an existing identically-happy person? Things get weird when you take utility as an aggregatable quantity.
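A minimal sketch of that calculation, assuming log utility over the double-or-nothing payoff. Python’s math.log raises an error at zero, so the code treats log(0) as negative infinity explicitly:

```python
import math

p = 0.51
# Arithmetic EV of the double-or-nothing bet: 2*0.51 + 0*0.49 = 1.02.
ev = 2 * p + 0 * (1 - p)

# Log utility: math.log(0) raises ValueError, so map it to -infinity.
def log_u(x):
    return math.log(x) if x > 0 else float("-inf")

# Expected log utility: log(2)*0.51 + log(0)*0.49 = -infinity,
# so a log-utility agent refuses the bet despite the positive EV.
e_log = p * log_u(2) + (1 - p) * log_u(0)
print(ev, e_log)
```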
Personally, I bite the bullet and claim that human/sentient lives decline in marginal value. This is contrary to what most utilitarians claim, and I do recognize that it implies I prefer fewer lives over more in many cases. I additionally give some value to variety of lived experience, so a pure duplicate is less utils in my calculations than a variant.
But that doesn’t seem to be what you’re proposing. You’re truncating at low probabilities, but without much justification. And you’re mixing in risk-aversion as if it were a real thing, rather than a bias/heuristic that humans use when things are hard to calculate or monitor (for instance, any real decision has to account for the likelihood that your payout matrix is wrong, and you won’t actually receive the value you’re counting on).
Probably, but precision matters. Mixing up mean vs sum when talking about different quantities of lives is confusing. We do agree that it’s all about how to convert to utilities. I’m not sure we agree on whether 2x the number of equal-value lives is 2x the utility. I say no, many Utilitarians say yes (one of the reasons I don’t consider myself Utilitarian).
game which maximizes log utility and still leaves you with nothing in 99% of cases.
Again, precision in description matters—that game maximizes log wealth, presumed to be close to linear utility. And it’s not clear that it shows what you think—it never leaves you nothing, just very often a small fraction of your current wealth, and sometimes astronomical wealth. I think I’d play that game quite a bit, at least until my utility curve for money flattened even more than simple log, due to the fact that I’m at least in part a satisficer rather than an optimizer on that dimension. Oh, and only if I could trust the randomizer and counterparty to actually pay out, which becomes impossible in the real world pretty quickly.
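To illustrate the "often a small fraction, sometimes astronomical wealth, never exactly nothing" shape, here is a made-up game with positive expected log growth; the 1000x / 0.9x numbers are my invention, not the game from the thread:

```python
import random
random.seed(1)

# Hypothetical multiplicative game: each round wealth is multiplied by
# 1000 with p = 0.02, else by 0.9. Expected log growth per round is
# 0.02*ln(1000) + 0.98*ln(0.9) ≈ +0.035 > 0, so a log-utility maximizer
# plays, yet roughly 40% of 100-round runs end below 5% of the starting
# wealth, none end at exactly zero, and a few end astronomically rich.
def run(rounds=100):
    w = 1.0
    for _ in range(rounds):
        w *= 1000 if random.random() < 0.02 else 0.9
    return w

results = sorted(run() for _ in range(10_000))
ruined_ish = sum(w < 0.05 for w in results) / len(results)
print(ruined_ish, min(results), results[len(results) // 2], max(results))
```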
But that only shows that other factors in the calculation interfere at extreme values, not that the underlying optimization (maximize utility, and convert resources to utility according to your goals/preferences/beliefs) is wrong.
Are you familiar with ergodicity economics?
https://twitter.com/ole_b_peters/status/1591447953381756935?cxt=HHwWjsC8vere-pUsAAAA