I think it’s a sad and powerful Overton window demonstration that these days someone can write a paper like this without even mentioning space colonization, which is the obvious alternate endgame if you want a non-global-dictatorship solution.
Some of Bostrom’s key papers are primarily about the massive importance of colonising space soon, and other researchers at the institution he founded have written papers trying to do basic modelling of plans to ensure we’re able to use all the resources in the universe. It’s inaccurate to say that this isn’t something that these researchers think about a lot and care about.
But I don’t think it affects this paper. There can be technologies that pose such existential threats (e.g. superintelligent AGI) that it doesn’t matter how far away you are when you make them (well, I suppose if we leave each other’s light cones then that’s a bit different, though there are ways to get around that barrier). So I think many of these arguments will go through even if you assume we’ve, say, built Dyson spheres and shot out into the galaxies.
Nick’s space papers are largely about how to harvest large amounts of utility from the galaxy, not about how to increase humanity’s robustness. And yes, there are some x-risks (including the one I am focused on) that space colonies do not help with, but the reader may not be convinced of these, so it is surely worth mentioning that some risks would be guarded against by interstellar diversification. If nothing else, you should probably argue that space colonization is not an adequate solution, for these reasons.