That’s only clear if you define “long enough” in a perverse way. For any finite sequence of bets, the expected value is positive. Read SBF’s response more closely: maybe you have an ENORMOUSLY valuable existence.
I agree that it’s positive expected value calculated as the arithmetic mean. Even so, I think most humans would be reluctant to play the game even a single time.
tl;dr: it depends on whether utility is linear or sublinear in aggregation. Either way, you have to accept some odd conclusions.
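A minimal sketch of that tl;dr (the 51/49 bet and the particular utility functions below are illustrative choices, not taken from the post): with linear utility the bet is worth taking; with a sublinear (log) utility it is not. The sliver of wealth left on a loss is only there to keep the log finite; with a true wipe-out, log utility rejects the bet at any odds.

```python
import math

# Illustrative only: a 51/49 "double everything or lose (almost) everything" bet,
# roughly the flavor of the SBF exchange referenced above, not an exact quote.
p_win, p_lose = 0.51, 0.49
win_mult, lose_mult = 2.0, 1e-6   # keep a sliver on a loss so log() stays finite
wealth = 1.0                      # current total value, normalized

def linear_u(w):   # linear, unbounded utility
    return w

def log_u(w):      # one simple sublinear (concave) utility
    return math.log(w)

for name, u in [("linear", linear_u), ("log", log_u)]:
    ev_take = p_win * u(wealth * win_mult) + p_lose * u(wealth * lose_mult)
    ev_pass = u(wealth)
    print(f"{name:6s}: take={ev_take:+.3f}  pass={ev_pass:+.3f}  "
          f"{'take the bet' if ev_take > ev_pass else 'decline'}")
```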
I agree it’s mostly a question of “what is utility”. This post is more about building a utility function that tracks most human behavior, and showing that if you model utility as linear and unbounded, you have to accept some weird conclusions.
The main conflict is between measuring utility as some cosmic value that is impartial to you personally, and a desire to prioritize your own life over cosmic utility. Thought experiments like Pascal’s mugging force this into the light.
Personally, I bite the bullet and claim that human/sentient lives decline in marginal value. This is contrary to what most utilitarians claim, and I do recognize that it implies I prefer fewer lives over more in many cases. I additionally give some value to variety of lived experience, so a pure duplicate is worth fewer utils in my calculation than a variant.
I don’t think this fully “protects” you. In the post I constructed a game which maximizes log utility and still leaves you with nothing in 99% of cases. This is why I also truncate low probabilities and bound the utility function. What do you think?
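For concreteness, here is one hypothetical game with that property (illustrative numbers; the post’s actual construction may differ): with probability 1% your wealth is multiplied by 10^300, and with probability 99% you keep only 1% of it. Accepting strictly increases expected log wealth, yet 99 times out of 100 a single play leaves you with almost nothing.

```python
import math

# Hypothetical game with the property described above (not necessarily the post's):
# with probability 0.01 your wealth is multiplied by 1e300,
# with probability 0.99 you keep only 1% of it.
p_win = 0.01
win_mult, lose_mult = 1e300, 0.01

# Expected change in log wealth from accepting, per play:
e_dlog = p_win * math.log(win_mult) + (1 - p_win) * math.log(lose_mult)
print(f"E[delta log wealth] = {e_dlog:+.2f}")  # ~ +2.35, so a log-utility agent accepts

# Yet 99% of the time a single play leaves you with 1% of what you had.
```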
But that doesn’t seem to be what you’re proposing. You’re truncating at low probabilities, but without much justification. And you’re mixing in risk-aversion as if it were a real thing, rather than a bias/heuristic that humans use when things are hard to calculate or monitor (for instance, any real decision has to account for the likelihood that your payout matrix is wrong, and you won’t actually receive the value you’re counting on).
My main justification is that you need to do it if you want your function to model common human behavior. I should have made that more clear.
Probably, but precision matters. Mixing up mean vs. sum when talking about different quantities of lives is confusing. We do agree that it’s all about how to convert to utilities. I’m not sure we agree on whether 2x the number of equal-value lives is 2x the utility. I say no; many Utilitarians say yes (one of the reasons I don’t consider myself Utilitarian).
> game which maximizes log utility and still leaves you with nothing in 99% of cases.
Again, precision in description matters: that game maximizes log wealth, on the presumption that log wealth is roughly linear in utility. And it’s not clear that it shows what you think; it never leaves you with nothing, just very often a small fraction of your current wealth, and sometimes astronomical wealth. I think I’d play that game quite a bit, at least until my utility curve for money flattened even more than a simple log, since I’m at least in part a satisficer rather than an optimizer on that dimension. Oh, and only if I could trust the randomizer and counterparty to actually pay out, which becomes impossible in the real world pretty quickly.
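To put a rough number on “play that game quite a bit”, here is a simulation of repeated play of the hypothetical 1%/99% game sketched earlier (again an illustration, not the post’s actual game). Each individual play usually shrinks your wealth, but because the expected log growth per play is positive, most simulated players come out ahead once they play enough rounds.

```python
import math, random

# Repeated play of the hypothetical 1%/99% game sketched earlier (an illustration,
# not the post's). Track log wealth so the 1e300 multiplier can't overflow a float.
random.seed(0)
p_win = 0.01
log_win, log_lose = math.log(1e300), math.log(0.01)

def final_log_wealth(rounds):
    # Sum of per-round log returns, starting from log(wealth) = 0.
    return sum(log_win if random.random() < p_win else log_lose
               for _ in range(rounds))

for rounds in (1, 100, 1000):
    trials = [final_log_wealth(rounds) for _ in range(2000)]
    frac_ahead = sum(t > 0 for t in trials) / len(trials)
    print(f"{rounds:5d} rounds: {frac_ahead:.0%} of simulated players end up ahead")
```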
But that only shows that other factors in the calculation interfere at extreme values, not that the underlying optimization (maximize utility, and convert resources to utility according to your goals/preferences/beliefs) is wrong.
I think we mostly agree.