Great post and great points.
Alignment researchers usually don’t think of their work as a means to control AGI. They should.
We usually think of alignment as a means to create a benevolent superintelligence. But just about any workable technique for creating a value-aligned AGI will work even better for creating an intent-aligned AGI that follows instructions. Keeping a human in the loop and in charge bypasses several of the most severe Lethalities by effectively adding corrigibility. What human in control of a major AGI project would take on extra risk to benefit all of humanity, instead of ensuring that their AGI follows their values by following their instructions?
That sets the stage for even more power-hungry humans to seize control of projects and AGIs with the potential for superintelligence. I fully agree that there's a scary first-mover advantage benefiting the most vicious actors in a multipolar human-controlled AGI scenario; see If we solve alignment, do we die anyway?.
The result is a permanent dictatorship. Will the dictator slowly become more benevolent once they have absolute power? The pursuit of power seems to corrupt more than holding secure power does, so maybe, but I would not want to bet on it.
However, I’m not so sure about hiding alignment techniques. I think the alternative to human-controllable AGI isn’t really slower progress; it’s uncontrollable AGI, which will pursue its own weird ends and wipe out humanity in the process, for the reasons classical alignment thinking describes.