Interesting!
So if I am understanding correctly, SIA puts more weight on universes with many civilizations, which lowers our estimate of the survival probability q. This is true regardless of how many expanding civilizations we actually observe.
The latter point was surprising to me, but on reflection, perhaps each observation of an expanding civilization also increases the estimated total number of civilizations. That would mean observing an expanding civilization has two effects: 1) increasing the estimated feasibility of passing a late filter, and 2) increasing the expected number of civilizations that didn't pass the filter. These effects might cancel out, leaving no net update. (A toy model for poking at this is sketched below.)
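To make that hand-wavy cancellation argument checkable, here is a crude toy model I put together (my own construction, not anything from the post): hypotheses are pairs (N, q), where N is the number of civilizations that reach our stage and q is the chance each one passes the late filter; SIA weights each hypothesis by N; and the likelihood crudely assumes we would see every expanding civilization among the other N − 1. Whether the two effects actually cancel will depend on the prior grid, so treat this as a sandbox rather than a result.

```python
# Toy Bayesian sketch: how does observing k expanding civilizations move
# the posterior mean of q, with and without SIA weighting?
from itertools import product
from math import comb

Ns = [10, 100, 1000, 10000]          # candidate numbers of civilizations (assumed grid)
qs = [0.001, 0.01, 0.1, 0.5, 0.9]    # candidate late-filter survival probabilities
prior = {(N, q): 1.0 for N, q in product(Ns, qs)}   # flat (unnormalized) prior over the grid

def posterior_mean_q(k_observed, sia=True):
    """Posterior mean of q after observing k expanding civilizations
    among the N - 1 civilizations other than us (crudely assumed fully visible)."""
    weights = {}
    for (N, q), p in prior.items():
        w = p * (N if sia else 1.0)   # SIA: weight hypotheses by observer count
        # Likelihood of exactly k expanders among the other N - 1 civilizations
        w *= comb(N - 1, k_observed) * q**k_observed * (1 - q)**(N - 1 - k_observed)
        weights[(N, q)] = w
    total = sum(weights.values())
    return sum(q * w for (N, q), w in weights.items()) / total

for k in [0, 1, 2]:
    print(f"observed {k} expanding civs: "
          f"E[q] with SIA = {posterior_mean_q(k, sia=True):.4f}, "
          f"without SIA = {posterior_mean_q(k, sia=False):.4f}")
```

Comparing the k = 0 and k = 1 rows gives a rough sense of whether the "more filter-passing" and "more civilizations overall" effects offset each other under a given prior.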
So I was wrong to say that Grabby Aliens should reduce our x-risk estimates. This is interesting because, in a simple discounting model, higher baseline risk lowers the value of trying to mitigate existential risks:
https://www.lesswrong.com/posts/JWMR7yg7fg3abpz6k/a-formula-for-the-value-of-existential-risk-reduction-1
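For concreteness, here is a minimal numeric sketch of the kind of discounting model I have in mind (my own toy version with a constant per-period hazard rate, not necessarily the exact formula from the linked post): the same proportional reduction in risk buys less expected future value when the baseline rate is higher.

```python
# Toy discounting model: constant per-period value u, constant per-period
# extinction probability r, and a mitigation effort that permanently
# lowers r by a fraction f.

def expected_future_value(u, r):
    """Expected value of the future when each period delivers value u
    and survival continues with probability (1 - r) per period."""
    # Geometric series: u * sum_{t>=1} (1 - r)^t = u * (1 - r) / r
    return u * (1 - r) / r

def value_of_mitigation(u, r, f):
    """Gain from permanently reducing the hazard rate from r to r * (1 - f)."""
    return expected_future_value(u, r * (1 - f)) - expected_future_value(u, r)

# Higher baseline risk makes the same proportional reduction less valuable:
for r in [0.001, 0.01, 0.1, 0.5]:
    print(f"r = {r:5.3f}  ->  value of a 10% reduction = {value_of_mitigation(1.0, r, 0.10):8.2f}")
```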
This implies that a person hoping to encourage work on existential risk reduction may want to convince others that it’s feasible to expand into space.
I wonder about different approaches to SIA. For example, could a different version of SIA be used to argue that we are likely the ancestors of a large civilization? Would this up-weight the chances of cosmic expansion?