2) Just to amplify point 1) a bit: you shouldn’t always maximize expected utility if you only live once. Expected values — in other words, averages — are very important when you make the same small bet over and over again. When the stakes get higher and you aren’t in a position to repeat the bet over and over, it may be wise to be risk averse.
You need to be a bit careful with your language here. Utility is by definition the thing whose expected value you are maximizing (and such a function probably doesn’t exist for humans). Your observation correctly shows that we should care about expected lives saved when the probabilities involved are large enough that the actual number of lives saved will reliably be close to the expected number. And this is an argument for why utility scales linearly in the number of lives at small scales, and why it does not at large scales.
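To make the concentration point concrete, here is a minimal simulation sketch (the probabilities and payoffs are made up purely for illustration): when a moderate-probability bet is repeated many times, the realized total tracks the expected total closely, while a one-shot tiny-probability, huge-payoff draw with the same expected value almost never looks anything like its average.

```python
import random

random.seed(0)

# One gamble: with probability p, save n lives (expected value p * n).
def gamble(p, n):
    return n if random.random() < p else 0

# Many repetitions of a moderate-probability bet: the realized average
# converges on the expectation, so "expected lives saved" is a good guide.
p, n, trials = 0.5, 10, 10_000
total = sum(gamble(p, n) for _ in range(trials))
print(f"repeated bet: realized avg = {total / trials:.2f}, expected = {p * n:.2f}")

# A one-shot, tiny-probability bet with the same expected value (5):
# the realized outcome is almost always 0, so the average is a poor guide
# to what actually happens to you.
p, n = 1e-6, 5_000_000
outcomes = [gamble(p, n) for _ in range(10)]
print(f"one-shot bets: outcomes = {outcomes}, expected each = {p * n:.2f}")
```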
So you reached the right conclusion here for the right reasons, but using slightly incorrect language (which is pretty understandable given how often the word “utility” gets conflated with other notions on this site). You may want to edit your post, though, to avoid triggering the reflex where people ignore you because you got a definition wrong.
Also, the answer to Pascal’s mugging is that your utility function is bounded. This has been discussed before; while different people have offered different solutions, this is the one that feels right to me on a gut level. It is also the only solution that allows you to uniformly ignore small probabilities without making your utility function depend on your beliefs.
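To spell out why boundedness, specifically, licenses ignoring small probabilities, here is a sketch of the argument, writing B for the assumed bound on the utility function:

```latex
% Sketch: why a bounded utility function defuses the mugger.
% If |U(x)| <= B for every outcome x, then an event of probability p can
% shift expected utility by at most p*B, which vanishes as p -> 0 no matter
% what payoff the mugger names. If U is unbounded, the mugger can always
% name a payoff X large enough that p*U(X) clears any threshold you set.
\[
  |p \cdot U(X)| \;\le\; p \cdot B \;\longrightarrow\; 0 \quad (p \to 0),
\]
\[
  \text{whereas for unbounded } U:\quad
  \forall p > 0,\ \forall M,\ \exists X \text{ with } p \cdot U(X) > M .
\]
```

And since the bound B is a fixed feature of the utility function, the probability level below which offers become negligible stays put as your beliefs change, which is what the “without making your utility function depend on your beliefs” part is pointing at.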