I think bringing in logical and indexical dignity may be burying the lede here.
I think the core of the idea here is something like:
If your moral theory assigns a utility that’s nonconvex (resp. concave) in the number of existing worlds, you’d weakly (resp. strictly) prefer to take risks that are decorrelated across worlds.
(Most moral theories assign utilities that are nonconvex, and many assign utilities that are concave in the number of actual worlds.) The way in which risks may be decorrelated across worlds doesn’t have to be that some are logical and some are indexical.
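As a minimal sketch of why the concavity is doing the work (my notation, not the parent comment’s): write p for the success probability, N for the number of worlds, and u for the utility as a function of how many worlds survive.

```latex
% Correlated vs. decorrelated risk across N worlds, success probability p,
% utility u in the number of surviving worlds.
\begin{align*}
\text{correlated risk:}\quad   & \mathbb{E}[u] = p\,u(N) + (1-p)\,u(0) \\
\text{decorrelated risk:}\quad & \mathbb{E}[u] \approx u(pN)
  \quad \text{(large } N\text{, law of large numbers)}
\end{align*}
```

Jensen’s inequality gives u(pN) ≥ p·u(N) + (1−p)·u(0) whenever u is concave, with equality when u is linear, which is where the weak-versus-strict preference for decorrelated risk comes from.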
Hmmm, the moral uncertainty here is actually very interesting to think about.
As a concrete example, suppose your two best strategies to save the world are:
one whose crux is a theorem being true, which you expect is about 70% likely to hold
one whose crux is a person figuring out a required clever idea, which you expect is about 70% likely to happen
So taking that at face value, there are two separate options.
In one of them there’s a 30% chance you’re dooming ALL worlds to failure, and a 70% chance that ALL worlds have success. It’s more totalistic, which as you say means there’s a 30% chance no one survives—but on the other hand, there’s something noble about “if we get this right, we all succeed together; if we get this wrong, we all go down together.”
In another, you’re dooming 30% of worlds to failure and giving 70% of them success. Sure, there’s now the possibility that some worlds win—but you’re also implicitly saying you’re fine with the collateral damage of the 30% of worlds that simply get unlucky.
It seems to me one could make the case for either being the “more moral” course of action. One could imagine a trolley problem that mimicked these dynamics and different people choosing different options.
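To put rough numbers on that, here’s a toy sketch of the 70% example (the world count and the specific utility functions are illustrative assumptions of mine, not anything claimed above):

```python
# Toy comparison of the two 70% strategies. World count and utility
# functions are illustrative assumptions, not from the comment above.
import math
from itertools import product

N_WORLDS = 4    # hypothetical number of worlds (small enough to enumerate)
P = 0.7         # success probability of either strategy

def expected_utility(u, correlated):
    """E[u(# of surviving worlds)] when success is shared vs. independent."""
    if correlated:
        # Theorem-style risk: one fact settles all worlds at once.
        return P * u(N_WORLDS) + (1 - P) * u(0)
    # Clever-idea-style risk: each world gets its own independent draw.
    total = 0.0
    for outcome in product([0, 1], repeat=N_WORLDS):
        prob = math.prod(P if w else 1 - P for w in outcome)
        total += prob * u(sum(outcome))
    return total

utilities = {
    "linear (k)":       lambda k: k,
    "concave (sqrt k)": lambda k: math.sqrt(k),
    "convex (k^2)":     lambda k: k ** 2,
}

for name, u in utilities.items():
    corr = expected_utility(u, correlated=True)
    indep = expected_utility(u, correlated=False)
    print(f"{name:17s} correlated={corr:6.3f}  decorrelated={indep:6.3f}")

# linear:   2.800 vs 2.800 -> indifferent between the two strategies
# concave:  1.400 vs 1.643 -> prefers the decorrelated (clever-idea) strategy
# convex:  11.200 vs 8.680 -> prefers the correlated ("all together") strategy
```

A linear theory is indifferent, a concave one favors the decorrelated strategy, and a convex one (or the “we rise and fall together” intuition) favors the correlated one, which is one way to formalize why different people could pick different arms of that trolley problem.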