Good point about AGI probably not being the Great Filter. I didn't mean that space colonization would prevent existential risks from AI, though, just more general threats.
So, we've established that an existential catastrophe (ignoring heat death, if it counts as one) will very probably occur within 1,000 years, but can we get more specific?