I haven’t finished reading your meta-ethics sequence, so I apologize in advance if this is something that you’ve already addressed, but just from this exchange, I’m wondering:
Suppose that instead of talking about humans and Babyeaters, we talk about groups of humans with equally strong feelings of morality but opposite ideas about it. Suppose we take one person who feels moral when saving a little girl from being murdered, and another person who feels moral when murdering a little girl as punishment for having been raped. This seems closely analogous to your “Morality is about how to save babies, not eat them, everyone knows that and they happen to be right.” It would sound just as reasonable to say that everybody knows that morality is about saving children rather than murdering them, but sadly, it’s not the case that “everybody knows” this: as you know, there are cultures existing right now where a girl would be put to death by honestly morally-outraged elders for the abominable sin of being raped, horrifying though this fact is.
So let’s take two people (or two larger groups of people, if you prefer) from each of these cultures. We could have them imagine these actions as intensely as possible, and scan their brains for relevant electrical and chemical information, find out what parts of the brain are being used and what kinds of emotions are active. (If a control is needed, we could scan the brain of someone intensely imagining some action everyone would consider irrelevant to morality, such as brushing one’s teeth. I don’t think there are any cultures that deem that evil, are there?) If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word “morality” with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling? Or would you conclude that this is a situation where two people are talking about the same subject matter but have drastically opposing ideas about it?
If the latter is the case, then I do think I get the point of the Babyeater thought experiments: although they appear to us to have some mechanism of making moral judgments (judgments that we find horrible), this mechanism serves different cognitive functions for them than our moral intuition does for us, and it originated in them for different reasons. Therefore, they cannot be reasonably considered to be differently-calibrated versions of the same feature. Is that right?
If the child-rescuer and child-murderer seem to be feeling the same emotions, having the same experience of righteousness, when imagining their opposite acts, would you still conclude that it is a mistranslation/misuse to identify our word “morality” with whatever word the righteous-feeling child-murderer is using for what appears to be the same feeling?
Depends. If the child-murderer knew everything about the true state of affairs and everything about the workings of their own inner mind, would they still disagree with the child-rescuer? If so, then it’s pretty futile to pretend that they’re talking about the same subject matter when they talk about that-which-makes-me-experience-a-feeling-of-being-justified. It would be like if one species of aliens saw green when contemplating real numbers and another species of aliens saw green when contemplating ordinals; attempts to discuss that-which-makes-me-see-green as if it were the same mathematical subject matter are doomed to chaos. By the way, it looks to me like a strong possibility that reasonable methods of extrapolating volitions will give you a spread of extrapolated-child-murderers, some of which are perfectly selfish hedonists, some of which are child-rescuers, and some of which are Babyeaters.
And yes, this was the approximate point of the Babyeater thought experiment.