After all, if the humans have something worth treating as spoils, then the humans are productive and so might be even more useful alive.
Humans depend on matter to survive, and increase entropy by doing so. Matter can be used for storage and computronium, negentropy for fueling computation. Both are limited and valuable resources (assuming physics doesn’t allow for infinite-resource cheats).
I read stuff like this and immediately my mind thinks, “comparative advantage.” The point is that it can be (and probably is) worthwhile for Bob and Bill to trade with each other even if Bob is better at absolutely everything than Bill.
Comparative advantage doesn’t matter for powerful AIs at massively different power levels. It exists between some groups of humans because humans don’t differ in intelligence all that much when you consider all of mind design space, and because humans don’t have the means to easily build subservient-to-them minds which are equal in power to them.
What about a situation where Bob can defeat Bill very quickly, take all his resources, and use them to implement a totally-subservient-to-Bob mind which is by itself better at everything Bob cares about than Bill was? Resolving the conflict takes some resources, but leaving Bill to use them a) inefficiently and b) for not-exactly-Bob’s goals might waste (from Bob’s perspective) even more of them in the long run. Also, eliminating Bill means Bob has one less potential threat to worry about — one it would otherwise need to keep in check indefinitely.
The FAI may be an unsolvable problem, if by FAI we mean an AI into which certain limits are baked.
You don’t want to build an AI with certain goals and then add on hard-coded rules that prevent it from fulfilling those goals with maximum efficiency. If you pit your own mind against that of the AI, a sufficiently powerful AI will always win that contest. The basic idea behind FAI is to build an AI that genuinely wants good things to happen; you can’t control it after it takes off, so you put your conception of “good” (or an algorithm to compute it) into the original design, and define the AI’s terminal values based on that. Doing this right is an extremely tough technical problem, but why do you believe it may be impossible?