Most of the LWers who voted for moral realism probably believe that Eliezer’s position on morality is correct, and he says that morality is subjunctively objective. That position definitely fits Wikipedia’s definition of moral realism:
Moral realism is the meta-ethical view which claims that:
1. Ethical sentences express propositions.
2. Some such propositions are true.
3. Those propositions are made true by objective features of the world, independent of subjective opinion.
To the best of my understanding, “subjunctively objective” means the same thing that “subjective” means in ordinary speech: dependent on something external, and objective once that something is specified. So Eliezer’s morality is objective once you specify that it’s his morality (or human morality, etc.), and then propositions about it can be true or false. “Turning a person into paperclips is wrong” is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer’s “subjunctively objective” view is that we should just call that “true”.
I disagree with that approach because this is exactly what is called being “subjective” by most people, and so it’s misleading. As if the existing confusion over philosophical word games wasn’t bad enough.
“Turning a person into paperclips is wrong” is an ethical proposition that is Eliezer-true and Human-true and Paperclipper-false, and Eliezer’s “subjunctively objective” view is that we should just call that “true”.
Although we might have a bias toward the Human-[x] subset of moral claims, it’s important to understand that such a theory does not itself favor one subset over another.
It would be like a utilitarian taking into account only his family’s moral weights in any calculations, so that a moral position might be Family-true but Strangers-false. It’s perfectly coherent to restrict the theory to a subset of its domain (and speaking of domains, it’s a bit vacuous to talk of paperclip morality, at least to the best of my knowledge of the extent of their feelings...), but that isn’t really what the theory as a whole is about.
So if we as a species were considering assimilation, and the moral evaluation of this came up Human-false but Borg-true, the theory (in principle) is perfectly well equipped to decide which would ultimately be the greater good for all parties involved. It’s not simply false just because it’s Human-false. (I say this, but I’m unfamiliar with Eliezer’s position. If he’s biased toward Human-[x] statements, I’d have to disagree.)
I disagree with that approach because this is exactly what is called being “subjective” by most people
Those same people are badly confused, because they usually believe that if ethical propositions are “subjective”, it means that the choice between them is arbitrary. This is an incoherent belief. Ethical propositions don’t become objective once you specify the agent’s values; they were always objective, because we can’t even think about an ethical proposition without reference to some set of values. Ethical propositions and values are logically glued together, like theorems and axioms.
You could say that the concept of something being subjective is itself a confusion, and that all propositions are objective.
That said, I share your disdain for philosophical word games. Personally, I think we should do away with words like ‘moral’ and ‘good’, and instead only talk about desires and their consequences.
This is why I voted for moral realism. If moral realism is instead supposed to mean something stronger, then I’m probably not a moral realist.
The entire issue is a bit of a mess.
http://plato.stanford.edu/entries/moral-anti-realism/