Yeah, that’s the only clear conclusion. The general approach of moral argument is to try to say that one of your intuitions (whether not caring about it being killed offstage, or not enjoying throttling it) is the true/valid one and that the others should be overruled. Honestly not sure where I stand on this.
I don’t think that “not enjoying killing a chicken” should be described as an “intuition”. Moral intuitions generally take the form of “it seems to me that / I strongly feel that so-and-so is the right thing to do / the wrong thing to do / bad / good / etc.” What you do or do not enjoy doing is a preference, like enjoying chocolate ice cream, not enjoying ice skating, or being attracted to blondes. Preferences can’t be “true” or “false”; they’re just facts about your mental makeup. (It may make sense to describe a preference as “invalid” in certain senses, but not obviously in any sense relevant to the current discussion.)
So, for instance, “I think killing a chicken is morally ok” (a moral intuition) and “I don’t like killing chickens” (a preference) do not conflict with each other, any more than “I think homosexuality is ok” and “I am heterosexual” conflict, or “being a plumber is ok (and in fact plumbers are necessary members of society)” and “I don’t like looking inside my plumbing” do.
Now, if you wanted to take this discussion to a slightly more subtle level, you might say: “This is different! Killing chickens causes in me a kind of psychic distress usually associated with witnessing or performing acts that I also consider to be immoral! Surely this is evidence that this, too, is immoral?” To that I can respond only that yes, this may be evidence in the strict Bayesian sense, but the signals your brain generates may be flawed. We should evaluate the ethical status of the act in question explicitly; yes, we should take moral intuitions into account, but my intuition, at least, is that chicken-killing is fine, despite my having no desire to do it myself. This screens off the “agh, I don’t want to do/watch this!” signal.
The dividing lines between the kinds of cognitive states I’m inclined to call “moral intuitions” and the kinds of cognitive states I’m inclined to call “preferences” and the kinds of cognitive states I’m inclined to call “psychic distress” are not nearly as sharp, in my experience, as you seem to imply here. There’s a lot of overlap, and in particular the states I enter surrounding activities like killing animals (especially cute animals with big eyes) don’t fall crisply into just one category.
But, sure, if we restrict the discussion to activities where those categories are crisply separated, those distinctions are very useful.
The general approach of moral argument is to try to say that one of your intuitions (whether the not caring about it being killed offstage or not enjoying throttling it) is the true/valid one and the others should be overruled.
Mm. If you mean to suggest that the outcome of moral reasoning is necessarily that one of my intuitions gets endorsed, then I disagree; I would expect worthwhile moral reasoning to sometimes endorse claims that my intuitions didn’t provide in the first place, as well as claims that my intuitions consistently reject.
In particular, when my moral intuitions conflict (or, as SaidAchmiz suggests, when there is conflict among the various states that I have a hard time cleanly distinguishing from my moral intuitions, despite their not actually being any such thing), I usually try to envision patterning the world in different ways that map in some fashion to some weighting of those states, ask myself what the expected end result of that patterning is, see whether I have clear preferences among those expected endpoints, work backwards from my preferred endpoint to the associated state-weighting, and endorse that state-weighting.
The result of that process is sometimes distressingly counter-moral-intuitive.
Sorry, I was unclear: I meant that moral (and political) arguments from other people (moral rhetoric, if you like) often take that form.
Ah, gotcha. Yeah, that’s true.