Perhaps, but we already know that most people (and groups) are not Friendly. Making them more powerful by giving them safe-for-them genies seems unlikely to sum to Friendly-to-all.
In short, if there were mutually acceptable ways to divide the limited resources, we’d already be dividing them those ways. The increased wealth from the industrial and information revolutions has reduced certain kinds of conflict, but it hasn’t abolished conflict. Unfortunately, the increased-wealth effect of AI doesn’t seem any likelier to abolish conflict either; Friendliness is a separate property we’d like the AI to have, and it’s that property that would solve this problem.
Perhaps, but we already know that most people (and groups) are not Friendly.
It’s not clear what you refer to by “Friendly” (I think the term should be tabooed rather than elaborated), and I have no idea what relevance the properties of humans have in this context.
Making them more powerful by giving them safe-for-them genies seems unlikely to sum to Friendly-to-all.
I sketched a particular device for you to evaluate. Whether it’s “Friendly-to-all” is a vaguer question than that (and I’m not sure what you understand by that concept), so I think it should be avoided. The relevant question is whether you would prefer the device I described (where you personally get a 1/Nth part of the universe with a genie to manage it) to deleting the Earth and everyone on it. In this context, even serious flaws (such as some of the other parts of the universe being mismanaged) may become irrelevant to the decision.