I strongly agree that a universal, singular, truly malevolent AGI doesn’t make for much of a Hollywood movie, primarily due to points 6 and 7.
What is far more interesting is an ecology of superintelligences that have conflicting goals but have agreed to be governed by Enlightenment values. Of course, some may be smart enough (or stupid enough) to attempt subterfuge, and some may be smarter-than-the-others enough to pull off a subterfuge and get away with it. There could be a period in which nearby ultra-intelligent machines compete with each other, or decentralize power among themselves, and they could share goals that are destructive to some humans and benevolent to others. (For their own purposes, and for the purpose of helping humans as a side project.)
Also, some AGIs might differentiate between “humans worth keeping around” and “humans not worth keeping around.” They might put their “parents” (creators) in a different category from other humans, and slowly add to that category, subtract from it, or otherwise alter it.
It’s hard to say. I’m not ultra-intelligent.
At the FHI, we disagree about whether an ecology of AIs would make good AIs behave badly, or bad ones behave well. The disagreement matches our political opinions on free markets and competition, so it is probably not informative.
An interesting question to ask is “how many people who favor markets understand the best arguments against them, and vice versa?” Because we’re dealing with humans here, my suspicion is that where there is a lot of disagreement, it stems largely from unwillingness to consider the other side and unfamiliarity with it. So, in that regard, you might be right.
Then again, we’re supposed to be rational and willing to change our minds when the evidence supports a change, and perhaps some of us are actually capable of such a thing.
It’s a debate worth having. Also, one need not have competition to have power decentralization. Making violence impossible, or prohibitively costly, adds a disincentive that makes “cooperation” more likely than “antagonistic competition.” (For example, some sociopaths choose to cooperate with other strong sociopaths because they can see that competing with them would likely lead to their own death or impoverishment. However, if any one of those sociopaths knew for certain that he held absolute power, the result would be horrible domination.)
Evolution winds up decentralizing power among relative equals, and the resulting “relative peace” holds for varying reasons, which allows _some_ of those reasons to be “good reasons” (e.g., benevolent empaths working together for a better world). This isn’t to say that everything is rosy under decentralization. Decentralization may work more poorly than an all-powerful benevolent monarch would.
It’s just that benevolent monarchs aren’t that likely, given who wants to be a monarch and who tries hardest to win any “monarch” positions that open up.
Such a thing might not be impossible, but if you make a mistake pursuing that course of action, the result tends to be catastrophic. Decentralization might be “almost as horrible and bloody,” but it at least offers the chance of continued survival, and that chance allows those who survive to “optimize or improve in the future.”
“There may be no such thing as a utopia, but retaining the chance for one is better than definitively ruling one out.” Many superintelligences that are partly benevolent may be better than one superintelligence that could turn out to be either benevolent or malevolent.