Any two AIs are likely to have a much vaster difference in effective intelligence than you could ever find between two humans (for one thing, their hardware might be much more different than any two working human brains). This likelihood increases further if (at least) some subset of them is capable of strong self-improvement. With enough difference in power, cooperation becomes a losing strategy for the more powerful party.
I read stuff like this and immediately my mind thinks, “comparative advantage.” The point is that it can be (and probably is) worthwhile for Bob and Bill to trade with each other even if Bob is better at absolutely everything than Bill. And if it is worthwhile for them to trade with each other, then it may well be in the interest of neither of them to (say) eliminate the other, and it may be a waste of resources to (say) coerce the other. It is worthwhile for the state to coerce the population because the state is few and the population are many, so the per-person cost of coercion falls below the benefit of coercion; it is much less worthwhile for an individual to coerce another (slavery generally has the backing of the state—see for example the fugitive slave laws). But this mass production of coercive fear works in part because humans are similar to each other and so can be dealt with more or less the same way. If AIs are all over the place, then this does not necessarily hold. Furthermore, if one AI decides to coerce the humans (who are admittedly similar to each other), then the other AIs may oppose it so that they themselves retain direct access to humans.
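The Ricardian point can be made concrete with a toy calculation. The production rates and hour splits below are purely illustrative (they are not from the original comment): Bob is strictly better at producing both goods, yet specialization still yields more of both goods in total than each party working alone.

```python
# Comparative advantage sketch with hypothetical numbers.
# Bob is absolutely better at both goods, yet joint output
# rises when each specializes by comparative advantage.

HOURS = 10  # hours available to each party

# (widgets per hour, gadgets per hour) -- illustrative rates
BOB = (10, 8)
BILL = (2, 4)


def output(rates, widget_hours, gadget_hours):
    """Total (widgets, gadgets) produced for a given split of hours."""
    w_rate, g_rate = rates
    return (widget_hours * w_rate, gadget_hours * g_rate)


# Autarky: each splits time evenly between the two goods.
bob_alone = output(BOB, 5, 5)     # (50, 40)
bill_alone = output(BILL, 5, 5)   # (10, 20)
autarky = (bob_alone[0] + bill_alone[0],
           bob_alone[1] + bill_alone[1])  # (60, 60)

# Specialization: Bill's opportunity cost of a gadget (2/4 = 0.5
# widgets) is lower than Bob's (10/8 = 1.25 widgets), so Bill
# makes only gadgets while Bob tilts toward widgets.
bob_spec = output(BOB, 7, 3)      # (70, 24)
bill_spec = output(BILL, 0, 10)   # (0, 40)
specialized = (bob_spec[0] + bill_spec[0],
               bob_spec[1] + bill_spec[1])  # (70, 64)

print("autarky:", autarky)
print("specialized:", specialized)
```

With these numbers, specialization produces 70 widgets and 64 gadgets against 60 and 60 under autarky, so there is a surplus to split in trade even though Bob outproduces Bill at everything, which is the sense in which eliminating or coercing the weaker party can be a waste of resources.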
The AIs might agree that they’d all be better off if they took the matter currently in use by humans for themselves, dividing the spoils among each other.
Maybe, but maybe not. “Dividing the spoils” paints a picture of the one-time destruction of the human race, and it may well be to the advantage of the AIs not to kill off the humans. After all, if the humans have something worth treating as spoils, then the humans are productive, and a productive population might be even more useful alive.
You definitely don’t want an FAI to unpredictably change its terminal values. Figuring out how to reliably prevent this kind of thing from happening, even in a strongly self-modifying mind (which humans aren’t), is one of the sub-problems of the FAI problem.
FAI may be an unsolvable problem, if by “FAI” we mean an AI into which certain limits are baked. This has seemed dubious ever since Asimov: the idea of baking in rules of robotics has long seemed to me to fundamentally misunderstand both the nature of morality and the nature of intelligence. But time will tell.