Sometimes, solving a more general problem is easier than solving a partial one (1, 2). If you build a Friendly superhuman AI, I would expect all dictators to be removed from power some time later… for example, by an army of robots that covertly infiltrates the country and then, at the same moment, destroys its biggest weapons and arrests the dictator and the other key figures of the regime.
(What exactly does “solving” AGI mean: a general theory published in a scientific paper, an exact design that resolves all the technical issues, or an actually built machine? Mere theory will not depose dictators.)