I agree that aligned ASI fixes a lot of the vulnerabilities. I’m trying to focus on how humanity can survive the dangerous time between now and then. In particular, I think the danger peaks right before it goes away: the period where AI, as a tool and/or independent agent, grows ever stronger, but the world is not yet under the guardianship of an aligned ASI. That’s the bottleneck we need to navigate.