1. On the deontology/virtue ethics vs. consequentialism thing, you're right; I don't know how I missed that. Thanks!
1a. I’ll have to think about that a bit more.
2. Well, if we were just going off the four moralities I described, then I already named two examples where two of those moralities are unable to converge: a pure flourishing maximizer wouldn't want to mercy-kill the human species, but a pure suffering minimizer would. A pure flourishing maximizer would be willing to have one person tortured forever if that were a necessary prerequisite for uplifting the rest of the human species into a transhumanist utopia; a suffering minimizer would not. Even if the four moralities I described only cover a small fraction of moral behaviors, wouldn't that still be a hard counterexample to the idea that there is convergence?
3. I think when you said "within the normal range of generally-respected human values", I took that literally: I thought it excluded values which are not in the normal range and not generally respected, even if they are things like "reading Adult My Little Pony fanfiction". Not every value which isn't well respected or in the normal range would make the world a better place through its removal; I thought that would be self-evident to everyone here, so I didn't explain it. It then looked to me like you were trying to justify the removal of all values which aren't generally respected or within the normal range as being "okay". So when you said "Right now, there are no agents around (that we know of) whose values are entirely outside the range of human values, and we're getting on OK", I thought it was intended to support the removal of all values which aren't well respected or in the normal range. But if you're trying to support the removal of niche values in particular, it doesn't make sense to point out that current humans are getting along fine with their current whole range of values, since that range presumably includes the niche values.
About to fall asleep, I’ll write more of my response later.
2. Again, there are plenty of counterexamples to the idea that human values have already converged. The idea behind e.g. “coherent extrapolated volition” is that (a) they might converge given more information, clearer thinking, and more opportunities for those with different values to discuss, and (b) we might find the result of that convergence acceptable even if it doesn’t quite match our values now.
3. Again, I think there’s a distinction you’re missing when you talk about “removal of values” etc. Let’s take your example: reading adult MLP fanfiction. Suppose the world is taken over by some being that doesn’t value that. (As, I think, most humans don’t.) What are the consequences for those people who do value it? Not necessarily anything awful, I suggest. Not valuing reading adult MLP fanfiction doesn’t imply (e.g.) an implacable war against those who do. Why should it? It suffices that the being that takes over the world cares about people getting what they want; in that case, if some people like to write adult MLP fanfiction and some people like to read it, our hypothetical superpowerful overlord will likely prefer to let those people get on with it.
But, I hear you say, aren’t those fanfiction works made of—or at least stored in—atoms that the Master of the Universe can use for something else? Sure, they are, and if there’s literally nothing in the MotU’s values to stop it repurposing them then it will. But there are plenty of things that can stop the MotU repurposing those atoms other than its own fondness for adult MLP fanfiction—such as, I claim, a preference for people to get what they want.
There might be circumstances in which the MotU does repurpose those atoms: perhaps there’s something else it values vastly more that it can’t get any other way. But the same is true right here in this universe, in which we’re getting on OK. If your fanfiction is hosted on a server that ends up in a war zone, or a server owned by a company that gets sold to Facebook, or a server owned by an individual in the US who gets a terrible health problem and needs to sell everything to raise funds for treatment, then that server is probably toast, and if no one else has a copy then the fanfiction is gone. What makes a superintelligent AI more dangerous here, it seems to me, is that maybe no one can figure out how to give it even humanish values. But that’s not a problem that has much to do with the divergence within the range of human values: again, “just copy Barack Obama’s values” (feel free to substitute someone whose values you like better, of course) is a counterexample, because most likely even an omnipotent Barack Obama would not feel the need to take away your guns^H^H^H^Hfanfiction.
To reiterate the point I think you’ve been missing: giving supreme power to (say) a superintelligent AI doesn’t remove from existence all those people who value things it happens not to care about, and if it cares about their welfare then we should not expect it to wipe them out or to wipe out the things they value.