I find the focus on x-risks as defined by Bostrom (those from which Earth-originating intelligent life will never, ever recover) way too narrow. A situation in which 99% of humanity dies and the rest reverts to hunting and gathering for a few millennia before recovering wouldn’t look much brighter than that—let alone one in which humanity goes extinct but in (say) a hundred million years the descendants of (say) elephants create a new civilization. In particular, I can’t see why we would prefer the latter to (say) a civilization emerging on Alpha Centauri—so per the principle of charity I’ll just pretend that instead of “Earth-originating intelligent life” he had said “descendants of present-day humans”.
It depends on what you value. I see four situations:

Early Singularity. Everyone currently living is saved.
Late Singularity. Nearly everyone currently living dies anyway.
Very Late Singularity, or “Semi-crush”. Everyone currently living dies, and most of our yet-to-be-born descendants (up to the second renaissance) die as well. Past a certain point, however, everyone is saved.
Crush. Everyone dies, now and forever. Plus, humanity dies with our sun.
If what you most value is those currently living, you’re right: it doesn’t make much difference. But if you care about the future of humanity itself, a Very Late Singularity isn’t such a disaster.
Now that I think about it, I care both about those currently living and about humanity itself, but with a small but non-zero discount rate (of the order of the reciprocal of the time humanity has existed so far). Also, I value humanity not only genetically but also memetically, so having people with human genome but Palaeolithic technocultural level surviving would be only slightly better for me than no-one surviving at all.
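To make the scale of that discount rate concrete, here is a minimal sketch. The figure of roughly 300,000 years for the age of Homo sapiens, and the choice of continuous exponential discounting, are my own assumptions for illustration, not claims from the comment above:

```python
import math

# Assumption: Homo sapiens is roughly 300,000 years old.
HUMANITY_AGE_YEARS = 300_000
r = 1 / HUMANITY_AGE_YEARS  # discount rate ~3.3e-6 per year

def discount_factor(years_in_future, rate=r):
    """Weight given to value realized `years_in_future` years from now,
    under continuous exponential discounting."""
    return math.exp(-rate * years_in_future)

# Near-term futures are barely discounted at all...
print(f"{discount_factor(100):.4f}")        # → 0.9997
# ...while a Very Late Singularity a million years out is heavily discounted.
print(f"{discount_factor(1_000_000):.4f}")  # → 0.0357
```

At this rate, anything within a few centuries counts at nearly full weight, which matches the comment’s point: such a discount rate distinguishes a Very Late Singularity from a near one only on timescales comparable to humanity’s whole past.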
On what timescale?