“Under current economic incentives and structure” we can have only “no alignment”. I was talking about rosy hypotheticals.
My point was “either we are dead or we are sane enough to stop, find another way, and solve the problem fully”. Your scenario is not within the set of realistic outcomes.
If we want to go by realistic outcomes, either we get lucky in that AGI somehow isn’t straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled takeover attempt, or simply an unexpected new AI winter), or we’re dead. If we want to talk about scenarios in which things go otherwise, then I’m not sure which is more unlikely: the fully aligned ASI, or the merely not-kill-everyone-aligned one that we nonetheless manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which, even putting economic incentives aside, would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and, to most of the rest, acceptable in principle but repugnant in practice due to the ethics of the experiments it would require).