2. Again, there are plenty of counterexamples to the idea that human values have already converged. The idea behind e.g. “coherent extrapolated volition” is that (a) they might converge given more information, clearer thinking, and more opportunities for those with different values to discuss their differences, and (b) we might find the result of that convergence acceptable even if it doesn’t quite match our values now.
3. Again, I think there’s a distinction you’re missing when you talk about “removal of values” etc. Let’s take your example: reading adult MLP fanfiction. Suppose the world is taken over by some being that doesn’t value that. (As, I think, most humans don’t.) What are the consequences for those people who do value it? Not necessarily anything awful, I suggest. Not valuing reading adult MLP fanfiction doesn’t imply (e.g.) an implacable war against those who do. Why should it? It suffices that the being that takes over the world cares about people getting what they want; in that case, if some people like to write adult MLP fanfiction and some people like to read it, our hypothetical superpowerful overlord will likely prefer to let those people get on with it.
But, I hear you say, aren’t those fanfiction works made of—or at least stored in—atoms that the Master of the Universe can use for something else? Sure, they are, and if there’s literally nothing in the MotU’s values to stop it repurposing them then it will. But there are plenty of things that can stop the MotU repurposing those atoms other than its own fondness for adult MLP fanfiction—such as, I claim, a preference for people to get what they want.
There might be circumstances in which the MotU does repurpose those atoms: perhaps there’s something else it values vastly more that it can’t get any other way. But the same is true right here in this universe, in which we’re getting on OK. If your fanfiction is hosted on a server that ends up in a war zone, or a server owned by a company that gets sold to Facebook, or a server owned by an individual in the US who gets a terrible health problem and needs to sell everything to raise funds for treatment, then that server is probably toast, and if no one else has a copy then the fanfiction is gone. What makes a superintelligent AI more dangerous here, it seems to me, is that maybe no one can figure out how to give it even humanish values. But that’s not a problem that has much to do with the divergence within the range of human values: again, “just copy Barack Obama’s values” (feel free to substitute someone whose values you like better, of course) is a counterexample, because most likely even an omnipotent Barack Obama would not feel the need to take away your guns^H^H^H^Hfanfiction.
To reiterate the point I think you’ve been missing: giving supreme power to (say) a superintelligent AI doesn’t remove from existence all those people who value things it happens not to care about, and if it cares about their welfare then we should not expect it to wipe them out or to wipe out the things they value.