I suppose existential risk will be highest in the next 30-100 years, as this is the most probable period for AGI to come into existence, and after 100 years or so there will probably be at least a few space colonies (there are even two companies currently planning to mine asteroids).
Does not work. AGI is unlikely to be the Great Filter: an AGI that destroyed its creators would presumably expand outward, and expansion at less than light speed would be visible to us, while expansion at close to light speed is unlikely. Note that if AGI is a serious existential threat, then space colonies will not be sufficient to stop it. Colonization works well against nuclear war, nanotech problems, epidemics, and some astronomical threats, but not artificial intelligence.
Good point about AGI probably not being the Great Filter. I didn't mean that space colonization would prevent existential risks from AI, though, just more general threats.
So, we’ve established that existential risks (ignoring heat death, if it counts as one) will very probably occur within 1000 years, but can we get more specific?