Dictators gain their power through leverage over human agents. A dictator that kills all other humans has no power, and spends the remainder of their shortened life in squalor. A superintelligent AI that merely has the power of a human dictator for eternity, and relies on humans to do 99% of what it wants, is probably in the best few percent of outcomes from powerful AI. Succeeding in limiting it to that would be an enormous success in AGI safety, even if it weren't the best possible success.
This is probably another example of the over-broadness of the term “AGI safety”, where one person can use it to mean mostly “we get lots of good things and few bad things” and another to mean mostly “AGI doesn’t literally kill everyone and everything”.
I’m not sure that the implication holds.