Great report. I found the high decision-worthiness vignette especially interesting.
I haven’t read it closely yet, so people should feel free to be like “just read the report more closely and the answers are in there”, but here are some confusions and questions that have been on my mind when trying to understand these things:
Has anyone thought about this in terms of a “consequence indication assumption” that’s like the self-indication assumption but normalizes by the probability of producing paths from selves to cared-about consequences instead of the probability of producing selves? Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?
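To gesture at what I mean, here's a toy sketch (the numbers and the "paths" measure are entirely made up): where SIA weights each world by its prior times the number of selves it contains, this would weight each world by its prior times the measure of paths from those selves' decisions to cared-about consequences.

```python
# Toy sketch of the "consequence indication assumption" idea -- all values invented.
priors = {"w1": 0.5, "w2": 0.5}
selves = {"w1": 1, "w2": 100}                    # copies of me in each world
paths_to_consequences = {"w1": 10.0, "w2": 0.1}  # measure of decision-to-cared-about-consequence paths

def normalize(weights):
    total = sum(weights.values())
    return {w: v / total for w, v in weights.items()}

sia = normalize({w: priors[w] * selves[w] for w in priors})                 # weight by selves
cia = normalize({w: priors[w] * paths_to_consequences[w] for w in priors})  # weight by consequence paths

print(sia)  # ~{'w1': 0.01, 'w2': 0.99}: SIA favours the world with many copies of me
print(cia)  # ~{'w1': 0.99, 'w2': 0.01}: this variant favours the world where my decisions reach what I care about
```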
I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)
SIA and SSA mean something different now than when Bostrom originally defined them, right? Modern SIA is Bostrom’s SIA+SSA and modern SSA is Bostrom’s (not SIA)+SSA? Joe Carlsmith talked about this, but it would be good if there were a short comment somewhere that just explained the change of definition, so people can link it whenever it comes up in the future. (edit: ah, just noticed footnote 13)
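For concreteness, here's how I understand the modern usage, with a toy two-world example (my numbers, not the report's): modern SIA weights each world by its prior times the number of reference-class observers in it (Bostrom's SSA+SIA), while modern SSA leaves the priors over worlds alone and only samples an observer within each world (Bostrom's SSA without SIA).

```python
# Toy illustration of the modern definitions; numbers are made up.
priors = {"small_world": 0.5, "large_world": 0.5}
observers = {"small_world": 1, "large_world": 10}   # reference-class observers per world

def sia_posterior(priors, observers):
    # Modern SIA: P(world | I exist) is proportional to prior * number of observers.
    weights = {w: priors[w] * observers[w] for w in priors}
    total = sum(weights.values())
    return {w: weights[w] / total for w in weights}

def ssa_posterior(priors, observers):
    # Modern SSA: existence alone doesn't shift credence between worlds;
    # only which observer you are *within* a world is sampled.
    return dict(priors)

print(sia_posterior(priors, observers))  # {'small_world': ~0.09, 'large_world': ~0.91}
print(ssa_posterior(priors, observers))  # {'small_world': 0.5, 'large_world': 0.5}
```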
SIA doomsday is a very different thing than the regular doomsday argument, despite the name, right? The former is about being unlikely to colonize the universe, the latter is about being unlikely to have a high number of observers? A strong great filter that lies in our future seems like it would require enough revisions to our world model to make SIA doom basically a variant of the simulation argument, i.e. the best explanation of our ability to colonize the stars not being real would be the stars themselves not being real. Many other weird hypotheses seem like they’d become more likely than the naive world view under SIA doom reasoning. E.g., maybe there are 10^50 human civilizations on Earth, but they’re all out of phase and can’t affect each other, but they can still see the same sun and stars. Anyway, I guess this problem doesn’t turn up in the “high decision-worthiness” or “consequence indication assumption” formulation.
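To spell out the contrast with toy numbers (purely illustrative): the classic doomsday argument runs off birth rank under SSA, whereas SIA doomsday runs off SIA upweighting hypotheses with more observers at our stage, which favours the great filter lying ahead of us.

```python
# Toy numbers for the SIA-doomsday argument; both hypotheses get equal priors.
# "Early filter": almost no planets reach our stage, but survivors go on to colonise.
# "Late filter": many planets reach our stage, but almost none colonise.
priors = {"early_filter": 0.5, "late_filter": 0.5}
observers_at_our_stage = {"early_filter": 1, "late_filter": 1000}

weights = {h: priors[h] * observers_at_our_stage[h] for h in priors}
total = sum(weights.values())
posterior = {h: weights[h] / total for h in weights}
print(posterior)  # late_filter ~ 0.999: SIA favours the filter being ahead of us,
                  # i.e. we're unlikely to colonise -- no appeal to birth rank at all.
# The classic doomsday argument instead uses SSA and birth rank: hypotheses with
# vastly many future humans make your rank of roughly "100 billionth human"
# surprisingly early, so they get penalised.
```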
Great report. I found the high decision-worthiness vignette especially interesting.
Thanks! Glad to hear it
Maybe this is discussed in the anthropic decision theory sequence and I should just catch up on that?
Yep, this is kinda what anthropic decision theory (ADT) is designed to be :-D ADT + total utilitarianism often gives similar answers to SIA.
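For anyone wanting the intuition, here's a minimal sketch (my own toy numbers) of why ADT plus total utilitarianism tends to reproduce SIA-style betting in an incubator / Sleeping-Beauty case.

```python
# A fair coin creates 1 copy of you on heads and 2 copies on tails. Every copy is
# offered the same bet: win $1 if tails, lose $c if heads. All copies decide alike.
def total_payoff_of_accepting(c):
    # Heads (prob 0.5): the single copy loses c. Tails (prob 0.5): both copies win $1.
    return 0.5 * (-c * 1) + 0.5 * (1 * 2)

# The total utilitarian accepts iff the expected total payoff is positive, i.e. iff c < 2.
# Those are the betting odds of an agent with P(tails) = 2/3 -- the SIA ("thirder")
# answer -- even though no anthropic credence was ever computed.
for c in [1.5, 2.5]:
    print(c, total_payoff_of_accepting(c) > 0)   # 1.5 True, 2.5 False
```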
I wonder how uncertainty about the cosmological future would affect grabby aliens conclusions. In particular, I think not very long ago it was thought plausible that the affectable universe is unbounded, in which case there could be worlds where aliens were almost arbitrarily rare that still had high decision-worthiness. (Faster than light travel seems like it would have similar implications.)
Yeah, this is a great point. Toby Ord mentions here the potential for dark energy to be harnessed, which would lead to a similar conclusion. Things like this may be Pascal’s muggings (i.e., we wager our decisions on being in a world where our decisions matter infinitely). Since our decisions might already matter ‘infinitely’ (evidential-like decision theory plus an infinite world), I’m not sure how this pans out.
SIA doomsday is a very different thing than the regular doomsday argument, despite the name, right? The former is about being unlikely to colonize the universe, the latter is about being unlikely to have a high number of observers?
Exactly. SSA (with a sufficiently large reference class) always predicts Doom as a consequence of its structure, but SIA doomsday is contingent on the case we happen to be in (colonisers, as you mention).