It’s worth estimating when existential risks are most likely to occur, as knowing this will influence planning. For example, if existential risks are more likely to occur in the far future, it would probably be best to invest capital now and donate later; if they are more likely to occur in the near future, it would probably be best to donate now.
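Here’s a minimal sketch of that tradeoff in Python. The return rate, the risk windows, and the assumption that risk is spread uniformly over a window are all placeholder assumptions of mine, just to make the comparison concrete, not anyone’s actual estimates:

```python
# Toy model: expected mitigation value of $1 invested until year t, then donated.
# Assumptions (all illustrative): money compounds at a fixed real return until
# donated; the catastrophe, if it happens, is equally likely at any point in
# [window_start, window_end]; a donation at time t only helps against risk
# that has not yet materialized.

def donation_value(t, window_start, window_end, annual_return):
    # Fraction of the total risk that still lies ahead at donation time t.
    remaining_risk = max(0.0, window_end - max(t, window_start)) / (window_end - window_start)
    # Donation grows by compounding, but only counts against remaining risk.
    return (1 + annual_return) ** t * remaining_risk

# Near-term window (catastrophe could strike any time in the next 30 years),
# modest 2% real returns: donating now beats waiting.
print(donation_value(0, 0, 30, 0.02))    # 1.00
print(donation_value(15, 0, 30, 0.02))   # ~0.67

# Far-off window (500-1000 years), same returns: patient investing dominates.
print(donation_value(0, 500, 1000, 0.02))     # 1.00
print(donation_value(400, 500, 1000, 0.02))   # ~2754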
So, what are everyone’s best estimates of when existential catastrophes are most likely to occur?
Within the next 500 to 1000 years. After that point we will almost certainly have spread out far enough that any obvious aspect of the Great Filter, if it were the main cause of the Filter, would likely be observable astronomically.
I suppose existential risk will be highest in the next 30-100 years, as I think this is the most probable period for AGI to come into existence, and after 100 years or so there will probably be at least a few space colonies (there are even two companies currently planning to mine asteroids).
That doesn’t work. AGI is unlikely to be the Great Filter, since an AGI expanding at less than light speed would be visible to us, and expansion at close to light speed is unlikely. Note also that if AGI is a serious existential threat, then space colonies will not be sufficient to stop it. Colonization works well against nuclear war, nanotech problems, epidemics, and some astronomical threats, but not artificial intelligence.
Good point about AGI probably not being the Great Filter. I didn’t mean that space colonization would prevent existential risks from AI, though, just general threats.
So, we’ve established that existential risks (ignoring heat death, if it counts as one) will very probably occur within 1000 years, but can we get more specific?