I think a good measure of human civilization’s size is world GDP. Carl Shulman has already linked to Hanson. If we look at Brad DeLong’s trend of GDP produced by humanity, the future must be “new” in the sense that super-exponential growth cannot continue for more than another few centuries. It seems physically impossible for humans to double their economy every month, say (a rough back-of-the-envelope sketch of why follows the list below). What happens when this trend stops? I don’t know, but maybe:
- A third, even faster optimization process starts, following the first two (biological evolution and human coordination)
- No such process clearly starts, but beings that are indeed “subjectively different” in the sense of deserving a higher moral status still develop
- Comparatively slow human expansion into space
- Long-term stasis
- End of intelligent life
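To make the “physically impossible” point concrete, here is a minimal back-of-the-envelope sketch in Python. It is my own illustration rather than anything from the original discussion, and it assumes a starting world GDP of roughly $100 trillion purely for scale:

```python
# Back-of-the-envelope sketch: how large would the economy get
# if world GDP doubled every month?
# Assumes a starting world GDP of roughly 1e14 dollars (~$100 trillion),
# an approximate present-day figure used only for illustration.

START_GDP = 1e14  # dollars

for years in (1, 5, 10, 20):
    months = 12 * years
    gdp = START_GDP * 2 ** months
    print(f"after {years:2d} years of monthly doubling: ~{gdp:.1e} dollars")

# After 20 years the growth factor is 2**240, about 1.8e72 times today's
# economy. For comparison, the entire solar system contains on the order of
# 1e57 atoms, so this trend runs out of physical substrate long before then;
# that is the sense in which it "can't continue".
```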
A different question is whether you can take effective, tailored actions in response to one of these possibilities, as MIRI attempts to, without any causal model of how it would arise. (By contrast, with global warming we can at least somewhat model the climate system.) I currently doubt it, because with such limited knowledge any action seems likely to be either totally ineffective, or effective against bad outcomes but also against good ones: we could renounce fast computers to defend against AI, but that might sharply limit humanity’s potential. We may be playing blackjack but don’t yet know whether the maximum hand is 21 or 2100.