Moral judgment: externalism or internalism?
[pollid:85]
Externalism: It is possible for a person to sincerely hold a moral belief (or make a moral judgment) without feeling any motivation to adhere to that belief/judgment. The claim is not just that the motivation might be trumped by other motivations, it is that it is possible for there to be no motivation at all.
Internalism: It is impossible to sincerely make a moral judgment without being motivated to act in accordance with it, although it may be the case that the motivation is trumped by other countervailing motivations.
Other: This is a not-very-interesting definitional question as to exactly which kinds of mental states should be counted as “sincerely making a moral judgment”.
General defense of the above type of reply: Voting “Other” on questions that seem to you confused or seem to turn on irrelevant matters of small definitions, rather than making up a definition and running with it, etcetera, is probably a good barometer of LW-vs.-philosophy opinion.
The subject matter of humanity::morality is a mathematical object which Clippy could calculate, if it ever had any reason to do so, which it wouldn’t, but it could, without being at all motivated to do anything about the result. However, if “morality” is being given an agent-relative definition, then no: whatever you’re not motivated to do anything about, even in the slightest, doesn’t seem like it should be called Alejandro::morality.
I doubt it. In my experience, if you allow a “Please specify” answer, philosophers will pick that for practically any distinction.
It is, at any rate, if you have some evidence that philosophers (professional? historical? what?) make up definitions and run with them when they don’t understand a question.
Otherwise, it’s probably a very, very bad barometer.
I agree with this, even though I voted “externalism”.
Don’t psychopaths count as evidence for “externalism”? (They know what they’re doing is wrong but don’t care.)
Not in and of itself.
Suppose a psychopath says “I know killing people is wrong but I don’t care” and kills someone. Wearing an externalist hat, I say “See? They made a moral judgment, but it doesn’t constrain their behavior.” Wearing an internalist hat, I say either “Maybe it did constrain their behavior, just not enough to prevent them from killing someone, because other factors motivated their behavior that aren’t present in non-psychopaths” or “Maybe it wasn’t a sincere moral judgment, they were just echoing what they’ve been told, like a blind person saying stop signs are red.”
That said, I can imagine well-designed experiments that would be evidence for one or the other. E.g., if we identify what parts of the brain are generally engaged during moral judgments, and what parts of the brain are engaged during recitation of memorized facts, and determine that when psychopaths say “killing people is wrong” the latter brain areas are engaged but the former are not, I would consider that evidence that psychopaths don’t in fact make sincere moral judgments when they say that.
We know from the effects of morphine that it is possible to experience “pain” without it “hurting”. (I wonder if any philosopher foresaw that this was even possible?) Given that, it is quite conceivable to me that a psychopath might experience the feeling of “moral wrongness” without the motivation “I shouldn’t do this”. Maybe that isn’t what’s going on, but no process of reasoning about morality can rule it out.
Philosophers, even materialist ones, are apt to think of the mind as being some sort of logical entity, about which they can prove that certain mental behaviours are impossible. But when the brain goes funny, all of those arguments fail. Which implies that when the brain hasn’t gone funny, the arguments still don’t prove anything, because how do we know that even a normal brain doesn’t do other “absurd” things? Only by observing how it actually works, not by any process of pure reason.
Compare “akrasia”. How can you want to do a thing, have the ability, and yet not do it, all the while bemoaning the fact that you aren’t doing it? Obviously absurd, impossible, a contradiction. But the fact that people do go funny in this way is so familiar to everyone that philosophers can’t get far by arguing that it can’t happen.
I certainly agree that we can discover things about the world that make us realize that some phenomenon P that we naively thought was indivisible in fact turns out to have internal structure, such that an event can demonstrate some-but-not-all-of P.
I’m mostly of the opinion that when that sort of thing happens, the best move is to get very clear about what level of abstraction we’re talking about, what concepts apply at that level, and how we refer to those concepts. Sometimes it’s useful to talk about “cells”, but if I’m interested in how mitochondria interact with the endoplasmic reticulum the concept to which “cell” refers isn’t useful to me; I need new concepts (and it’s probably best if I don’t assign them homophonic labels).
Are people on morphine actually experiencing pain? Yes, they are. No, they aren’t. It’s mostly not a helpful question. They are experiencing some properties of pain and not others, and what we’ve discovered is that our normal experience of pain has separable internal structure.
So far, so good. And, sure, when an actor playing Oedipus screams and yells about how painful having his eyes plucked out is, he’s experiencing some of the properties of pain, and it can be interesting to consider which properties those are. But I can also jump up a level and ask whether the actor is really experiencing pain, and on balance my answer is pretty unambiguously “no.” (Though I’m prepared to make exceptions for certain actors and performances, given enough counterevidence.)
Similarly, if I experience akrasia, that tells me that motivation has internal structure, and I can experience some of its components (e.g., the belief that I want to do something) without others (e.g., the actually-doing-it). At a lower level, I can usefully consider the relationship between those components. At a higher level, I can ask whether I’m really motivated to do it. (My usual answer is “no.” I understand that judgments vary on this one.)
So, OK. I agree that moral judgment has internal structure, and it’s possible for people to experience/demonstrate some properties and not others, and in particular that it’s possible (at least in principle) to fail to demonstrate the behavior-changing properties of moral judgment while still demonstrating everything else.
But, just as in the other two cases, the question arises: if I jump up a level of abstraction and ask whether the thing demonstrated in that case is actually a moral judgment or not, what’s my answer?
I’m inclined to say it isn’t, as above.
But I grant that this is ultimately a question about what the label “moral judgment” properly attaches to, and I’m happy to use different language when that makes communication clearer. And if I want to think more carefully about cases where people go funny in this way, the question of whether it’s really a moral judgment stops being a helpful question, because the answer is “yes and no.”
But a blind person can still sincerely say that “stop signs are red”. Their justification for saying so may be different from a sighted person’s, but the statement is still sincere (the blind person really believes it).
Is it part of the internalist claim that it is just impossible to acquire moral knowledge by such third-party means? For instance, simply observing what other people say about “right” and “wrong” and building an inductive concept about what actions the words describe, but without any emotional preference for “right” over “wrong”?
I can’t speak for internalism, but it certainly seems to me that what I’m doing when I say that it’s wrong for me to eat pork by the standards of Judaism is different from what I would be doing, were I an observant Jew, if I said it was wrong for me to eat pork.
But still, how does this break the analogy with “red” for a blind person? For instance, a blind person could sincerely believe all of the following:
“Stop signs are red”
“My evidence for believing that stop signs are red is that sighted people have told me so”
“Red things aren’t red just because people say they are. Rather, they have some inherent property to do with the way light reflects off them which causes people to describe them as red”
“Some religious sects call certain signs red which everyone else calls orange, but these sects are wrong; the signs are in fact orange. Members of the sects have been brought up that way, and are sincere in their beliefs, but they are still wrong”
“If I were a member of such a sect, I’d probably also believe that the orange signs are red”
In short it seems possible to have a complete and sincere set of beliefs in “red” as an objective world property, without ever seeing red yourself or ever having any emotional response to it. I don’t immediately see why the same can’t be true for moral beliefs.
Fair enough.
It’s certainly true that there are two different things being done by psychopaths and nonpsychopaths in the original example, but it might be that both of those things count as genuine moral beliefs, just as the two different things involving a stop sign can count as genuine beliefs about color.
OTOH, it might be that only one of them counts as a genuine moral belief, just as only one of the things involving a stop sign counts as a genuine perception of color.
I accept externalism on the part of some possible sentient creatures, but I think this question with respect to humans is a question of definitions, and even once you settle on a definition, it becomes an empirical question I don’t know the answer to.
Other: I was unable to resolve my thoughts on this at all, and am not sure to what degree hearing more arguments either way would help that.