The question I’m currently pondering is: do we have any other choice? As far as I can see, we have four options for dealing with AGI risks:
A: Ensure that no AGI is ever built. How far are we willing to go to achieve this outcome? Can anything short of burning all GPUs accomplish it? Is even that enough, or would we need to burn all CPUs as well and return to a pre-digital age? Regulation of AI research can buy us some valuable time, but not everyone adheres to regulation, so eventually somebody will build an AGI anyway.
B: Ensure that there is no AI apocalypse, even if a misaligned AGI is built. Is that even possible?
C: Ensure that every AGI created is aligned. Can we somehow ensure that there is no accident with misaligned AGIs? What about bad actors that build a misaligned AGI on purpose?
D: What I describe in this post—actively build one aligned AGI that controls all online devices and eradicates all other AGIs. For that purpose, the aligned AGI would need to control at least 51% of the world’s total computing power. While that doesn’t necessarily mean total control, we’d already be giving away a lot of autonomy just by doing that. And surely, some human decision-makers would hand their duties over to the AGI. Eventually, all or most decision-making would be either AGI-guided or fully automated, since that’s more efficient.
Am I overlooking something?