What I’m trying to say is that within the theory there is no “should” apart from should_X’s. So you need to pin down which should_X you’re talking about when you run the theory through the test—you can ask “what should_Matt Matt do?” and “what should_Matt Peter do?”, or you can ask “what should_Peter Matt do?” and “what should_Peter Peter do?”, but it’s unfair to ask “what should_Matt Matt do?” and “what should_Peter Peter do?”—you’re changing the definition of “should” in the middle of the test!
Now the question is, which should_X should you use in the test? If X is running the theory through the test, X should use should_X, since X is checking the theory against X’s moral intuitions. (If X is checking the theory against Y’s moral intuitions, then X should use should_Y.) In other words, X should ask “what should_X Matt do?” and “what should_X Peter do?”. If there is such a thing as should_human, then if X is a human, this amounts to using should_human.
As a side note, to display “a_b” correctly, type “a\_b”.
We have intuitions that certain things are wrong—murder, robbery and so forth—and we have the intuition that those things are wrong, not just wrong-for-people-who-don’t-like-them. This intuition of objectivity is what makes ethics a problem, in conjunction with the absence of obvious moral objects as part of the furniture of the world.
ETA: again, a defence of moral subjectivism seems to be needed as part of CEV.
Traditional moral subjectivism usually says that what X should do depends on who X is in some intrinsic way. In other words, when you ask “what should X do?”, the answer you get is the answer to “what should_X X do?” On EY’s theory, when you ask “what should X do?”, the answer you get is the answer to “what should_Y X do?”, where Y is constant across all X’s. So “should” is a rigid designator—it corresponds to the same set of values no matter who we’re asking about.
Now the subjectivity may appear to come in because two different people might have a different Y in mind when they ask “what should X do?” The answer depends on who’s asking! Subjectivity!
Actually, no. The answer only depends on what the asker means by should. If should = should_Y, then it doesn’t matter who’s asking or who they’re asking about; we’ll get the same answer. If should = should_X, the same conclusion follows. The apparent subjectivity comes from thinking that there is a separate “should” apart from any “should_X,” and then subtly changing the definition of “should” when someone different asks or someone different is asked about.
Now many metaethicists may still have a problem with the theory related to what’s driving its apparent subjectivity, but calling it subjective is incorrect.
I’ll note that the particular semantics I’m using are widely regarded as confusing readers into thinking the theory is a form of subjectivism or moral relativism—and frankly, I agree with the criticism. This terminology just so happens to be how I finally understood the theory, so it’s appealing to me. Let’s try a different terminology (hat tip to wedrifid): every time I wrote should_X, read that as would_want_X. In other words, should_X = would_want_X = X’s implicit preferences—what X would want if X were able to take into account all the n-order preferences she has in our somewhat simplified example. Then, in the strongest form of EY’s theory, should = would_want_Human. In other words, only would_want_Human has normativity. Every time we ask “what should X do?” we’re asking “what would_want_Human X do?”, which gives the same answer no matter who X is or who is asking the question (though nonhumans won’t often ask this question).
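To make the rigid-designator reading concrete, here is a minimal Python sketch. Everything in it (the names, the toy preference tables, the recommended actions) is invented purely for illustration; the point is only that the rigid should takes no “asker” argument at all, while the subjectivist version does.

```python
# Toy "implicit preference" tables: what each evaluator's would_want_*
# recommends for each actor. All names and recommendations are invented.
WOULD_WANT = {
    "Matt":  {"Matt": "donate", "Peter": "donate"},
    "Peter": {"Matt": "invest", "Peter": "invest"},
    "Human": {"Matt": "donate", "Peter": "donate"},
}

def would_want(evaluator, actor):
    """What evaluator's implicit preferences recommend that actor do."""
    return WOULD_WANT[evaluator][actor]

# Rigid reading: "should" is bound to one fixed evaluator (here Human),
# so the asker never enters into it.
def should(actor):
    return would_want("Human", actor)

# Subjectivist reading: the evaluator varies with whoever is asking.
def subjectivist_should(asker, actor):
    return would_want(asker, actor)

print(should("Matt"), should("Peter"))        # donate donate, whoever asks

# The "unfair test": the evaluator silently changes between questions.
print(subjectivist_should("Matt", "Matt"))    # donate
print(subjectivist_should("Peter", "Peter"))  # invest
```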
The answer you get is the answer to “what should_X X do?” On EY’s theory, when you ask “what should X do?”, the answer you get is the answer to “what should_Y X do?”, where Y is constant across all X’s.
Y is presumably varying with something, or why put it in?
The apparent subjectivity comes from thinking that there is a separate “should” apart from any “should_X,” and then subtly changing the definition of “should” when someone different asks or someone different is asked about.
I don’t follow. Thinking there is a should that is separate from any should_X is the basis of objectivity.
The basis of subjectivity is having a question that can be validly answered by reference to a speaker’s beliefs and desires alone. “What flavour of ice cream would I choose?” works that way. So does any other case of acting on a preference, any other “would”. Since you have equated shoulds with woulds, the shoulds are subjective as well.
There are objective facts about what a subject would do, just as it is an objective fact that so-and-so has a liking for Chocolate Chip, but these objective facts don’t negate the existence of subjectivity. Something is objective and not subjective where there are no valid answers based on reference to a subject’s beliefs and desires. I don’t think that is the case here.
Then, in the strongest form of EY’s theory, should = would_want_Human. In other words, only would_want_Human has normativity.
The claim that only should_Human is normative contradicts the claim that any would-want is a should-want. If normativity kicks in for any “would”, what does bringing in the human level add?
Every time we ask “what should X do?” we’re asking “what would_want_Human X do?” which gives the same answer no matter who X is or who is asking the question (though nonhumans won’t often ask this question).
Well, that version of the theory is objective, or intersubjective enough. It just isn’t the same as the version of the theory that equates individual woulds and shoulds. And it relies on a convergence that might not arrive in practice.
Y is presumably varying with something, or why put it in?
To make it clear that “should” is just a particular “should_Y.” Or, using the other terminology, “should” is a particular “would_want_Y.”
The basis of subjectivity is having a question that can be validly answered by reference to a speaker’s beliefs and desires alone.
I agree with this. If the question were “how do I best satisfy my preferences?”, then the answer would change with who the speaker is. But, on the theory, “should” is a rigid designator and refers ONLY to a specific should_X (or would_want_X, if you prefer that terminology). So if the question is “what should I do?”, that’s the same as asking “what should_X I do?” or, equivalently, “what would_want_X I do?” The answer is the same no matter who is asking.
The “X” is there because 1) the theory says that “should” just is a particular “should_X,” or equivalently a particular “would_want_X,” and 2) there’s some uncertainty about which X belongs there. In EY’s strongest form of the theory, X = Human. A weaker form might say X = non-sociopath human.
Just to be clear, “should_Y” doesn’t have any normativity unless Y happens to be the same as the X in the previous paragraph. “Should_Y” isn’t actually a “should”—this is why I started calling it “would_want_Y” instead.
Something is objective and not subjective where there are no valid answers based on reference to a subject’s beliefs and desires. I don’t think that is the case here.
But it is. Consider the strong form where should = would_want_Human. Suppose an alien race came and modified humans so that their implicit preferences were completely changed. Would “should” change? No. “Should” refers to a particular preference structure—a particular mathematical object. Changing the preference structure that humans would_want doesn’t change “should” any more than changing the number of eyes a human has changes “2.” Or, to put it another way, distinguish between would_want_UnmodifiedHuman and would_want_ModifiedHuman. Then should = would_want_UnmodifiedHuman. “Should” refers to a particular implicit preference structure, a particular mathematical object, instantiated in some agent or group of agents.
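A hypothetical code analogy for this, with toy values assumed purely for illustration: bind “should” to a copy of the preference structure taken at a fixed point, and later modifications to humans cannot touch it.

```python
from copy import deepcopy

# Invented toy stand-in for humanity's implicit preference structure.
human_preferences = {"murder": "wrong", "charity": "good"}

# "Should" rigidly designates the structure as it stands: a snapshot,
# not a live pointer to whatever humans happen to prefer later.
SHOULD = deepcopy(human_preferences)

# The aliens arrive and rewrite human preferences...
human_preferences["murder"] = "good"

# ...but "should" still names the original object.
print(human_preferences["murder"])  # good  (would_want_ModifiedHuman)
print(SHOULD["murder"])             # wrong (would_want_UnmodifiedHuman)
```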
If normativity kicks in for any “would...”
Hopefully this is clear now: it doesn’t, even when I was calling them all “should_Y.”