We will hopefully be fine either way, but I think I would like AI to come before some radical biotech revolution. If you think about it: suppose you first get some sort of super-advanced synthetic biology. That might kill us, but if we’re lucky, we survive it. Then maybe you invent some super-advanced molecular nanotechnology, which might kill us, but if we’re lucky we survive that too. And then you develop the AI. Maybe that will kill us, or if we’re lucky we survive that as well and then we get to utopia.
Well, on that path you have to get through three separate existential risks: first the biotech risk, plus the nanotech risk, plus the AI risk. Whereas if we get AI first, maybe that will kill us, but if not, we get through it, and I think an aligned AI would then handle the biotech and nanotech risks. So the total amount of existential risk on that second trajectory would be less than on the former.
I see the optimal trajectory as us going through pretty much ANY other “radical” revolution before AI, with maybe the exception of uploading or radical human enhancement. None of the ‘radical revolutions’ I can imagine would be the kind of phase shift AGI represents. They seem more akin to “amped up” versions of revolutions we’ve already gone through, and so in some sense more “similar” and “safer” than what AGI would do. Thus I think they are better practice for us as a society...
On a different note, the choice between being overcautious and undercautious is super easy. We REALLY want to overshoot rather than undershoot. If we overshoot, we have a thousand years to correct that. If we undershoot and fail at alignment, we all die and there’s no correcting that… We have seen so many social shifts over the last 100 years that there’s little reason to believe we’d be ‘stuck’ without AGI forever. That’s not a zero chance, but it certainly seems far lower than the chance of AGI being unaligned.