This argument seems to point at some extremely important considerations in the vicinity of “we should act according to how we want civilizations similar to us to act” (rather than just focusing on causally influencing our future light cone), etc.
The details of the distribution over possible worlds that you use here seem to matter a lot. How robust are the “robust worlds”? If they are maximally robust (i.e., things turn out great with probability 1 no matter what the civilization does), then we should assign zero decision weight to the prospect of being in a “robust world” — our choices make no difference there — and place all our chips on being in a “fragile world”.
Conversely, if the distribution over possible worlds assigns sufficient probability to worlds in which there is a single very risky development that cuts expected value by 10% if the civilization takes it seriously and by 90% if it does not, then perhaps such worlds should dominate our decision-making.
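To make the weighting point concrete, here is a minimal toy calculation (the world types, probabilities, and payoffs are my own illustrative assumptions, not anything from the post): the expected-value difference our choice makes comes entirely from the fragile worlds, so maximally robust worlds drop out of the decision no matter how probable they are.

```python
# Toy model: which possible worlds actually carry decision weight?
# World types, probabilities, and payoffs below are illustrative assumptions only.

worlds = [
    # (name, probability, value if risk taken seriously, value if not)
    ("maximally robust", 0.9, 1.0, 1.0),        # great outcome no matter what we do
    ("fragile, one risky step", 0.1, 0.9, 0.1),  # 10% vs 90% loss hinges on our choice
]

ev_serious = sum(p * v_yes for _, p, v_yes, _ in worlds)
ev_careless = sum(p * v_no for _, p, _, v_no in worlds)

# The decision-relevant quantity is the *difference* our choice makes.
# Robust worlds contribute exactly zero to it, however probable they are;
# the fragile worlds carry all of the weight.
print(f"EV if taken seriously: {ev_serious:.2f}")   # 0.99
print(f"EV if not:             {ev_careless:.2f}")  # 0.91
print(f"Value of taking it seriously: {ev_serious - ev_careless:.2f}")  # 0.1 * (0.9 - 0.1)
```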