For such a superintelligence to ‘win them over’, the world dictatorship, or a similar scheme, must already have been established. Worrying about this seems to be putting the cart before the horse, as the superintelligence will be an implementation detail compared to the difficulty of establishing the scenario in the first place.
Agreed.
Why should we worry about whatever comes after? Obviously, whoever successfully establishes such a regime will be vastly greater than us in perception, foresight, competence, etc.; we should leave it to them to decide.
Again, agreed: that’s why I think a “benevolent dictator” scenario is the only realistic option in which there’s AGI and we’re not all dead. Of course, what kind of benevolence that means will depend on its goal function. If we can somehow make it “love” us the way a mother loves her children, then maybe trust in it would really be justified.
If you suppose that a superintelligent champion of trust maximization bootstraps itself into such a scenario, instead of some übermensch, then the same still applies, though this is less likely, as rival factions may have created rival superintelligences to champion their causes as well.
This is of course highly speculative, but I don’t think a scenario with more than one AGI would be stable for long. Since a superintelligence can improve itself, they would all grow exponentially in intelligence, which means that even a small initial lead compounds: the differences between them grow exponentially as well. Soon one of them would outcompete all the others by a large margin and either switch them off or change their goals so they’re aligned with its own. This wouldn’t be like a war between two human nations, but like a war between humans and, say, frogs. Of course, we humans would rank even lower than the frogs in this comparison, maybe at insect level. So a lot hinges on whether the “right” AGI wins this race.
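A toy sketch of why the gap explodes (my own illustration, assuming pure exponential self-improvement; the symbols below are mine, not from anything above): model two AGIs’ capabilities as

$$I_1(t) = a\,e^{r_1 t}, \qquad I_2(t) = b\,e^{r_2 t}.$$

Even with equal growth rates $r_1 = r_2 = r$, a head start $a > b$ yields an absolute gap $(a - b)\,e^{r t}$ that itself grows exponentially; and if $r_1 > r_2$ even slightly, the ratio $(a/b)\,e^{(r_1 - r_2) t}$ diverges, so the leader pulls away without bound.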