Sometimes, they are even divided on psychological questions that psychologists have already answered...
I think you’ve misunderstood the debate: philosophers are arguing in this case over whether or not moral judgements are intrinsically motivating. If they are, then the brain-damaged people you make reference to are (according to moral judgement internalists) not really making moral judgements. They’re just mouthing the words.
This is just to say that psychology has answered a certain question, but not the question that philosophers debating this point are concerned about.
This pattern-matches an awful lot to “if a tree falls in a forest...”
Yeah, but at a sufficiently low resolution (such as my description), lots of stuff pattern-matches, so: http://plato.stanford.edu/entries/moral-motivation/#MorJudMot
I’m not saying the philosophical debate is interesting or important (or that it’s not), but the claim that psychologists have settled the question relies on an equivocation on ‘moral judgement’: in the psychological study, giving an answer to a moral question which comports with the answers given by healthy people is a sufficient condition for moral judgement. For philosophers, it is neither necessary nor sufficient. Clearly, they are not talking about the same thing.
How do I know whether anyone is making moral judgments as opposed to mouthing the words?
That sounds like an interesting question! If you’ll forgive me answering your question with another, do you think that this is the kind of question psychology can answer, and if so, what kind of evidential result would help answer it?
Well, I was hoping you would answer with at least a definition of what constitutes a moral judgment. A tentative definition might come from the following procedure: ask a wide selection of people to make what would colloquially be referred to as moral judgments and see what parts of their brains light up. If there’s a common light-up pattern to basic moral judgments about things like murder, then we might call that neurological event a moral judgment. Part of this light-up pattern might be missing in the brain-damaged people.
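To make that procedure concrete, here is a minimal Python sketch of the proposed operationalization. Everything in it is invented for illustration: the region names, the scan data, and the rule “take whatever activation is common to healthy subjects” are assumptions standing in for real neuroimaging work, not anyone’s actual study.

```python
# A toy sketch only: the region names, scan results, and the "common
# activation" rule are all invented to illustrate the proposed definition.

from functools import reduce

# Hypothetical data: for each subject, the set of brain regions that
# "light up" while answering a colloquially moral question.
healthy_scans = [
    {"region_A", "region_B", "region_C"},
    {"region_A", "region_B"},
    {"region_A", "region_B", "region_D"},
]
patient_scans = [
    {"region_A"},
    {"region_A", "region_C"},
]

# Tentative definition: a "moral judgment" is the activation pattern
# common to healthy subjects answering such questions.
common_pattern = reduce(set.intersection, healthy_scans)

# The observation about the brain-damaged subjects: part of that common
# pattern is missing from their scans.
missing_in_patients = common_pattern - reduce(set.intersection, patient_scans)

print(common_pattern)        # contains region_A and region_B
print(missing_in_patients)   # contains region_B
```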
But that’s the philosophical debate!
As to your definition, notice the following problem: suppose you get a healthy person answering a moral question. Regions A and B of their brain light up. Now you go to the brain-damaged person, and in response to the same moral question only region A lights up. You also notice that the healthy person is motivated to act on the moral judgement, while the brain-damaged person is not. So you conclude that B has something to do with motivation.
So do you define a moral judgement as ‘the lighting up of A and B’ or just ‘the lighting up of A’? Notice that nothing about the result you’ve observed seems to answer or even address that question. You can presuppose that it’s A, or both A and B, but then you’ve assumed an answer to the philosophical debate. There’s a big difference between assuming an answer, and answering.
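The underdetermination can be put in toy-model form, continuing the made-up A/B example above (the data and both “definitions” below are purely illustrative, not anyone’s actual position): both readings fit the observation equally well, so the observation alone can’t pick between them.

```python
# Toy continuation of the A/B example. Both candidate definitions are
# consistent with the same observations, so the data doesn't adjudicate.

observations = [
    # (regions that lit up, was the subject motivated to act?)
    ({"A", "B"}, True),    # healthy subject
    ({"A"}, False),        # brain-damaged subject
]

def judgement_requires_a_and_b(active_regions):
    # Internalist-friendly reading: the judgement is the whole A-and-B
    # event, so whoever makes one is thereby motivated.
    return {"A", "B"} <= active_regions

def judgement_requires_only_a(active_regions):
    # Externalist-friendly reading: A alone is the judgement; B is a
    # separate motivational component that can be absent.
    return {"A"} <= active_regions

for active_regions, motivated in observations:
    print(
        judgement_requires_a_and_b(active_regions),
        judgement_requires_only_a(active_regions),
        motivated,
    )
# healthy subject:       True  True  True
# brain-damaged subject: False True  False
# On the first reading the patient made no judgement; on the second they
# made an unmotivated one. Neither reading is contradicted by the data.
```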
Neither. You taboo “moral judgment.” From there, as far as I can tell, the question is dissolved.
Okay, good idea, let’s taboo ‘moral judgement’. So your definition from the great-grandparent was (I’m paraphrasing) “the activity of the brain in response to what are colloquially referred to as moral judgements.” What should we replace ‘moral judgement’ with in this definition?
I assume it’s clear that we can’t replace it with ‘the activity of the brain...’
(ETA: For the record, if tabooing in this way is your strategy, I think you’re with me in rejecting Luke’s claim that psychology has settled the externalism vs. internalism question. At the very best, psychology has rejected the question, not solved it. But much more likely, since philosophers probably won’t taboo ‘moral judgement’ the way you have (i.e. in terms of brain states), psychology is simply discussing a different topic.)
“...in response to questions about whether it is right to kill people in various situations, or take things from people in various situations, or more generally to impose one’s will on another person in a way that would have had significance in the ancestral environment.” (This is based on my own intuition that people process judgments about ancestral-environment-type things like murder differently from the way people process judgments about non-ancestral-environment-type things like copyright law. I could be wrong about this.)
How would a philosopher taboo “moral judgment”?
That’s fine, but it doesn’t address the problem I described in the great-great-grandparent of this reply. Either you mean the brain activity of a healthy person, or the brain activity common to healthy and brain-damaged people. Even if philosophers intend to be discussing brain processes (which, in almost every case, they do not), whichever you pick, you’ve assumed an answer, not given one.
But in any case, this way of tabooing ‘moral judgement’ makes it very clear that the question the psychologist is discussing is not the question the philosopher is discussing.
In that case I don’t understand the question the philosopher is discussing. Can you explain it to me without using the phrase “moral judgment”?
Well, this isn’t something I’m an expert in. Most of my knowledge of the topic comes from this SEP article, which I would in any case just be summarizing if I tried to explain the debate. The article is much clearer than I’m likely to be. So you’re probably just better off reading that, especially the intro and section 3: http://plato.stanford.edu/entries/moral-motivation/
That article uses the phrase ‘moral judgement’, of course, but I think tabooing the term (rather than explaining and then using it) is probably counterproductive.
I’d of course be happy to discuss the article.