The choice offered is between two options: 1: Humanity grows into a vast civilization of 10^100 people living long and happy lives, or 2: a 10% chance that humanity grows into a vast civilization of 10^102 people living long and happy lives, and a 90% chance of going extinct right now. I think almost everyone would pick option 1, and would think it crazy to take a reckless gamble like option 2. But the Linear Utility Hypothesis says that option 2 is much better. Most of the ways people respond to Pascal’s mugger don’t apply to this situation, since the probabilities and ratios of utilities involved here are not at all extreme.
This is not an airtight argument.
Extinction of humanity is not 0 utils; it’s negative utils. Let the utility of human extinction be -X, with X > 0.
If X > 10^101, then a linear utility function would pick option 1.
Linear Expected Utility (LEU) of option 1:
1.0(10^100) = 10^100.
LEU of option 2:
0.9(-X) + 0.1(10^102) = 10^101 - 0.9X.
Option 1 is preferred when LEU(option 2) < LEU(option 1):
10^101 - 0.9X < 10^100
-0.9X < 10^100 - 10^101 = -9(10^100)
X > 10^101.
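As a quick sanity check on this arithmetic, here is a minimal Python sketch (exact rational arithmetic via fractions; the variable x standing for the disutility of extinction is my own hypothetical label, not something from the original discussion):

```python
from fractions import Fraction

# Hypothetical sanity check of the expected-utility arithmetic above.
# Utilities are in "lives lived", as the Linear Utility Hypothesis assumes;
# x is the assumed magnitude of the disutility of extinction.

def leu_option_1():
    return Fraction(10) ** 100                      # certain 10^100 lives

def leu_option_2(x):
    # 0.1 * 10^102 - 0.9 * x, kept exact with Fractions
    return Fraction(1, 10) * 10 ** 102 - Fraction(9, 10) * x

threshold = 10 ** 101  # derived above: option 1 strictly wins iff x > 10^101

for x in (0, threshold, 2 * threshold):
    print(x, leu_option_1() > leu_option_2(x))
# x = 0:           False (option 2 has the higher expected utility)
# x = 10^101:      False (the two options are exactly tied)
# x = 2 * 10^101:  True  (option 1 is now preferred)
```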
I weight the extinction of humanity very heavily, as it curtails any possible future. So X is always at least as large as the utility of the utopia. I would not accept any utopia where the probability of human extinction was > 0.51, because the disutility of human extinction outweighs the utility of utopia for any possible utopia.
Extinction of humanity just means humanity not existing in the future, so the Linear Utility Hypothesis does imply its value is 0. If you make an exception and add a penalty for extinction that is larger than the Linear Utility Hypothesis would dictate, then the Linear Utility Hypothesis applied to other outcomes would still imply that when considering sufficiently large potential future populations, this extinction penalty becomes negligible in comparison.
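To make the negligibility point concrete, here is a hedged sketch with hypothetical numbers, in the same style as the snippet above: fix any extinction penalty P, let option 1 give N lives for sure and option 2 give a 10% chance of 100N lives and a 90% chance of extinction; under linear utility, option 2's expected value is 10N - 0.9P, which exceeds N as soon as N > P/10, so any fixed penalty is eventually swamped.

```python
from fractions import Fraction

# Hypothetical illustration: fix an extinction penalty P, then scale up the stakes.
# Option 1: N lives for sure.  Option 2: 10% chance of 100*N lives, 90% chance of
# extinction with utility -P.  Under linear utility, option 2 wins once N > P / 10.

def prefers_gamble(n, penalty):
    leu_1 = Fraction(n)
    leu_2 = Fraction(1, 10) * (100 * n) - Fraction(9, 10) * penalty
    return leu_2 > leu_1

P = 10 ** 101                          # any fixed penalty, however large
for n in (10 ** 99, 10 ** 100, 10 ** 101):
    print(n, prefers_gamble(n, P))
# False, False, True: once N exceeds P/10, the fixed extinction penalty no longer matters.
```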
See this thread. There is no finite number of lives that reach utopia for which I would accept Omega’s bet at a 90% chance of extinction.
Human extinction now is, for me, worse than losing 10 trillion people if the global population were 100 trillion.
My disutility of extinction isn’t just the number of lives lost. It involves the termination of all future potential of humanity, and I’m not sure how to value that, but see the bolded.
I don’t assign a finite disutility to extinction, and my preference with regard to extinction is probably lexicographic with respect to some other things (see above).
That’s a reasonable value judgement, but it’s not what the Linear Utility Hypothesis would predict.
My point is that, for me, extinction is not equivalent to losing the current number of lives. This is because extinction destroys all potential future utility; it destroys the potential of humanity.
I’m saying that extinction can’t be evaluated like an ordinary loss of lives, so you need a better example to make your argument against LUH.
Extinction now is worse than losing X people if the global human population is 10X, regardless of how large X is.
That position is independent of the Linear Utility Hypothesis.
It was specified that the total future population in each scenario was 10^100 and 10^102. These numbers are the future people that couldn’t exist if humanity goes extinct.