That’s a confusion. I was explicitly talking of “moral” circuits.
Well, that presupposes that we have some ability to distinguish between moral circuits and other circuits. To do that, you need some criterion for what morality consists in other than evolutionary imperatives, because all brain connections are at least partially caused by evolution. Ask yourself: what decision procedure would I articulate to justify to Eisegetes that the circuits responsible for regulating blinking, for creating feelings of hunger, or for giving rise to sexual desire are, or are not, “moral circuits”?
In other words, you will always face the problem of pointing to a particular brain circuit X, which you call a “moral circuit,” and having someone say, “the behavior that circuit controls/compels/mediates is not something I would describe as moral.” To justify your claim that there are moral circuits, or that specific circuits relate to morality, you need an exogenous conception of what morality is. Otherwise your definition of morality will necessarily encompass a lot of brain circuitry that very few people would call “moral.”
It’s Euthyphro all over again, but with brains.
I could make your brain’s implicit ordering of moral options explicit with a simple algorithm (see the sketch after these steps):
1. Ask for the most moral option.
2. Exclude it from the set of options.
3. While options remain, go back to step 1.
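Here is a minimal sketch of that elicitation procedure in Python. Everything in it is invented for illustration: `most_moral` stands in for whatever judgment your brain actually renders, and the toy judge at the bottom obviously isn’t one.

```python
def elicit_moral_ordering(options, most_moral):
    """Make an implicit moral ordering explicit: repeatedly ask
    which remaining option is most moral, exclude it, and record
    it -- a selection sort driven by the `most_moral` judgment."""
    remaining = list(options)
    ordering = []
    while remaining:                    # step 3: while options remain...
        best = most_moral(remaining)    # step 1: ask for the most moral option
        remaining.remove(best)          # step 2: exclude it from the set
        ordering.append(best)           # ...and go back to step 1
    return ordering


# Hypothetical usage: a stand-in "judge" that just prefers the
# alphabetically smallest label, in place of a human respondent.
print(elicit_moral_ordering(["option C", "option A", "option B"], min))
```

Note that the sketch quietly assumes the judge always returns a single, well-defined answer; if the respondent is genuinely indifferent between some options, the procedure either stalls or spits out an arbitrary order, which is exactly the objection pressed below.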
Well, I was trying to say that I don’t think we have preferences that finely grained. To wit:
Rank the following options in order of moral preference:
1. Kill one Ugandan child, at random.
2. Kill one South African child, at random.
3. Kill one Thai child. You have to torture him horribly for three days before he dies, but his death will make the lives of his siblings better.
4. Kill two Thai children, in order to get money with which to treat your sick spouse.
5. Rape and murder ten children, but also donate $500 million to a charity which fights AIDS in Africa.
6. Rape 500 children.
7. Sexually molest (short of rape) 2,000 children.
8. Rape 2,000 women and men.
9. Rape 4,000 convicted criminals.
10. Execute 40,000 convicted criminals per year in a system with a significant, but unknowable, error rate.
11. Start a war that may, or may not, make many millions of people safer, but will certainly cause at least 100,000 excess deaths.
The problem is that the devil is in the details. As between many of these examples, it would be very hard to determine which is “better” or “worse,” or which is “more moral” or “less moral.” Even strict utilitarians would get into trouble, because of the uncertainty they would face in trying to articulate the consequences of each scenario. Honestly, I think many people, if forced, could put them in some order, but they would view that order as very arbitrary, and not as expressing any “truth” about morality. Pressed, they would be reluctant to defend it.
Hence, I said above that people are probably indifferent between many choices in terms of whether they are “more moral” or “less moral.” They won’t necessarily have a preference ordering over many options, viewing them as equivalently heinous or virtuous. This makes sense if you view “moral circuitry” as made up of graded feelings of shame/disgust/approval/pleasure. Our brain states are quantized and finite, so there are only finitely many “levels” of shame or disgust that I can experience. Thus, necessarily, many states of affairs in the world will trigger those responses to an identical degree. This is the biological basis for ethical equivalence: if two different actions produce the same response from my ethical circuitry, how can I meaningfully say that I view one as more or less “moral” than the other?
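As a toy illustration of that quantization point (the numbers and the scoring function are invented; this is not a model of real neurobiology):

```python
def moral_response(aversion, levels=5):
    """Quantize a continuous aversion signal in [0.0, 1.0) into a
    small, finite number of discrete response levels, as the
    argument above assumes the brain effectively does."""
    return min(int(aversion * levels), levels - 1)

# Two scenarios with distinct underlying "aversion" scores...
scenario_a = 0.71   # purely illustrative numbers
scenario_b = 0.78

# ...trigger the identical discrete response (level 3 of 0-4),
# so the circuitry grounds no preference between them.
assert moral_response(scenario_a) == moral_response(scenario_b)
```

With only a handful of levels, most pairs of scenarios collide; raising `levels` shrinks the ties but, with finitely many levels and indefinitely many scenarios, can never eliminate them.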
To be sure, we can disagree on how many levels of response there are. I would tend to think the number of distinct ethical responses we can have is quite small: we can clearly say that murder is usually worse than rape, for instance, but we have great difficulty saying whether raping a 34-year-old is better or worse than raping a 35-year-old. You might think that enough reflection would produce a stable preference order between those states every time. But if we shrink the difference between their ages to something on the order of a second, I don’t see how you could seriously maintain that you experience a moral preference.