It seems to me that any reasoning, whether with bounded or unbounded utility, will support avoiding unlikely civilizational threats at the expense of a small number of lives, for sufficiently large civilizations. I don’t see anything wrong with that (in particular, I don’t think it leads to mass murder, since that would carry a significant utility cost).
There is a different, related problem: if the utility function saturates around (say) 10^10 people and our civilization has 10^20 people, then the death of everyone except some 10^15 people becomes acceptable in order to prevent a much lower-probability event that would kill everyone except some 10^8 people. However, this effect disappears once we sum over all possible universes weighted by the Solomonoff measure, as we should (as done here). Effectively, this normalizes the utility function so that it saturates at the actual capacity of the multiverse.
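To make the problematic comparison concrete, here is a minimal sketch of my own (not from the original argument), assuming a particular saturating utility N_sat·(1 − e^(−n/N_sat)) and an arbitrary tiny risk of 10^−9; any bounded, increasing utility behaves similarly:

```python
import math

N_SAT = 1e10  # assumed saturation scale of the bounded utility function

def utility(n_people: float) -> float:
    """Bounded utility: roughly linear below N_SAT, saturating at N_SAT."""
    return N_SAT * (1.0 - math.exp(-n_people / N_SAT))

# Option A: accept a certain catastrophe leaving 10^15 survivors (out of 10^20).
eu_accept = utility(1e15)

# Option B: keep a tiny risk p of a worse catastrophe leaving only 10^8 survivors.
p = 1e-9  # assumed, arbitrarily small probability
eu_risk = (1 - p) * utility(1e20) + p * utility(1e8)

# Both 10^15 and 10^20 survivors lie far above the saturation point, so their
# utilities are indistinguishable; eliminating even a 10^-9 chance of falling
# below saturation therefore "justifies" the certain loss of all but 10^15 people.
print(f"EU(accept certain loss down to 10^15): {eu_accept:.6e}")
print(f"EU(keep 10^-9 risk of loss down to 10^8): {eu_risk:.6e}")
print("Prefer the certain loss:", eu_accept > eu_risk)
```

The sketch only illustrates the saturation effect before summing over universes; weighting by the Solomonoff measure, as described above, removes the effect by pushing the effective saturation point up to the capacity of the multiverse.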