“Turning a person into paperclips is wrong” is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer’s “subjunctively objective” view is that we should just call that “true”.
Even though we might be biased toward the Human-[x] subset of moral claims, it’s important to understand that the theory itself does not favor one subset over another.
It would be like a utilitarian counting only his family’s moral weights in his calculations, so that a moral position might come out Family-true but Strangers-false. It’s perfectly coherent to restrict the theory to a subset of its domain (and speaking of domains, it’s a bit vacuous to talk of paperclip morality, at least to the best of my knowledge of the extent of their feelings...), but that isn’t really what the theory as a whole is about.
So if we as a species were considering assimilation, and the moral evaluation of this came up Human-false but Borg-true, the theory (in principle) is perfectly well equipped to decide which would ultimately be the greater good for all parties involved. It’s not simply false just because it’s Human-false. (I say this, but I’m unfamiliar with Eliezer’s position. If he’s biased toward Human-[x] statements, I’d have to disagree.)