As someone who focuses on concerns like human unemployment, I have a few reasons:
I expect AI alignment and control to be solved by default, enough so that I can use it as a premise when thinking about future AIs.
I expect political problems like mass human unemployment to be genuinely tricky to solve. IMO, the sooner aligned superhuman intelligence is involved in government, the better our politics can become.
I expect aligned AIs and humans to expand into the stars reasonably soon, so that control over almost all of the future is not lost, and depending on the physics involved, even a single star or galaxy might be enough to let us control our future entirely.
Conditional on at least one aligned, superhumanly intelligent AI, I expect existential risk to drop fairly dramatically, and in particular I think the vulnerabilities that expose us to rogue ASI can be fixed by aligned ASI.
I agree that aligned ASI fixes a lot of the vulnerabilities. I'm trying to focus on how humanity can survive the dangerous time between now and then. In particular, I think the danger peaks right before it goes away: the period where AI, as a tool and/or independent agent, grows stronger and stronger, but the world is not yet under the guardianship of an aligned ASI. That's the bottleneck we need to navigate.