Well, it is not logically necessary that we have a genuine disagreement. We might be mistaken in believing ourselves to mean the same thing by the words right and wrong, since neither of us can introspectively report our own moral reference frames or unfold them fully.
I think the idea of associating the meaning of a word with a detailed theory, a fine-grained set of criteria that lets you apply the term in every case, has disadvantages.
Newtonian theory has a different set of fine-grained criteria for gravity than relativistic theory does. If we take those criteria as defining the meaning of gravity, then the two theories must be talking about different things. And if we take that as the end of the story, there is no way to make sense of their being contrasting theories, or of one theory superseding the other: the one is a theory of Newton-gravity, the other of Einstein-gravity. If instead we take them both to be theories of some more vaguely and coarsely defined notion of gravity, we can explain their disagreement and their differing success. And we don't have to give up on Einstein-gravity and Newton-gravity.
You cannot have a disagreement about which algorithm should direct your actions, without first having the same meaning of should—and no matter how you try to phrase this in terms of “what ought to direct your actions” or “right actions” or “correct heaps of pebbles”, in the end you will be left with the empirical fact that it is possible to construct minds directed by any coherent utility function.
I don't see the point of this comment. No one holds that the constraint that makes some sets of values genuinely moral is the same as the constraint that makes them implementable.
When a paperclip maximizer and a pencil maximizer do different things, they are not disagreeing about anything, they are just different optimization processes.
They are both in the superclass of optimisation processes. Why should they not both be in the class of genuinely moral optimisation processes?
You cannot detach should-ness from any specific criterion of should-ness and be left with a pure empty should-ness that the paperclip maximizer and pencil maximizer can be said to disagree about—unless you cover “disagreement” to include differences where two agents have nothing to say to each other.
Metaethics can supply a meaning of should/ought without specifying anything specific. For instance, if you ought to maximise happiness, that doesn't specify any particular action without further information about what leads to happiness.