If you have only the first type of alignment, under current economic incentives and structure, you almost 100% end up with some other kind of disempowerment, likely something more akin to “Wireheading by Infinite Jest”. Augmenting human intelligence would NOT be our first, second, or hundredth choice under current civilizational conditions; it comes with a lot of problems and risks, and it’s far from guaranteed to solve the problem (if it’s solvable at all). You can’t realistically augment human intelligence in ways that keep up with the speed at which ASI can improve, and you can’t expect that after creating ASI somewhere along the line we Just Stop. Either we stop before, or we go all the way.
“Under current economic incentives and structure” we can only have “no alignment”. I was talking about rosy hypotheticals.
My point was “either we are dead, or we are sane enough to stop, find another way, and solve the problem fully”. Your scenario is not inside the set of realistic outcomes.
If we want to go by realistic outcomes, either we get lucky in that somehow AGI isn’t straightforward or powerful enough for a fast takeoff (e.g. we get early warning shots like a fumbled takeover attempt, or simply a new, unexpected AI winter), or we’re dead. If we want to talk about scenarios in which things go otherwise, then I’m not sure which is more unlikely: the fully aligned ASI, or the merely not-kill-everyone-aligned one that we nevertheless manage to rein in and eventually align (never mind the idea of human intelligence enhancement, which even putting aside economic incentives would IMO be morally and philosophically repugnant to a lot of people as a matter of principle, and OK in principle but repugnant in practice, due to the ethics of the required experiments, to most of the rest).