The puzzle this post presents is: why do we have a tendency to accept moral philosophies that do not fit all of our existing values? Why do we find it natural or attractive to simplify our moral intuitions?
Here’s my idea: we have a heuristic that in effect says, if many related beliefs or intuitions all fit a certain pattern or logical structure, but a few don’t, the ones that don’t fit are probably caused by cognitive errors and should be dropped and regenerated from the underlying pattern or structure.
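To make this heuristic concrete, here is a minimal sketch in Python, assuming we model the “beliefs” as data points and the “underlying pattern” as a fitted line; the data, threshold, and function names are purely illustrative, not anything from the original discussion:

```python
# Toy sketch of the pattern-vs-outlier heuristic described above, cast as
# ordinary least-squares line fitting in pure Python.

def fit_line(points):
    """Least-squares fit of y = a*x + b to a list of (x, y) pairs."""
    n = len(points)
    mean_x = sum(x for x, _ in points) / n
    mean_y = sum(y for _, y in points) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in points)
    var = sum((x - mean_x) ** 2 for x, _ in points)
    a = cov / var
    return a, mean_y - a * mean_x

def regenerate_misfits(points, threshold=1.5):
    """Drop points that sit far from the fitted line, refit on the rest,
    and regenerate the dropped points from the underlying pattern."""
    a, b = fit_line(points)
    misfits = [(x, y) for x, y in points if abs(y - (a * x + b)) > threshold]
    kept = [p for p in points if p not in misfits]
    a, b = fit_line(kept)  # refit using only the points that match the pattern
    return kept + [(x, a * x + b) for x, _ in misfits]

# Four "intuitions" fit the line y = x almost exactly; one does not.
beliefs = [(0, 0.0), (1, 1.1), (2, 1.9), (3, 6.0), (4, 4.1)]
print(regenerate_misfits(beliefs))  # (3, 6.0) is replaced by roughly (3, 3.0)
```

The analogous move in ethics treats the misfit intuitions as noise and rederives them from the system.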
I agree with much of your analysis—particularly the analogy to geometry, where, as you point out, the heuristic works. That heuristic is also useful in science, and in other branches of philosophy besides ethics.
Given that the heuristic is so useful in other areas, I have to ask, “How can you be sure it is wrong to use it in ethics?”
Also, I think that the intuitions that don’t fit into the logical structure or system are not usually discarded or replaced. Instead, I think they are often simply reclassified—if they can no longer be justified as intrinsic values, a just-so story is constructed to explain them as instrumental values, or perhaps as evo-psych artifacts of an ancestral environment which no longer applies.
There definitely are some intuitions that are wrong on reflection, like scope insensitivity.
I’m not so sure about that. See Shut Up and Divide?
You still agree that it is wrong for morals not to depend on numbers of people; you just propose a different decision system. Eliezer’s example of willingness to donate to sick children in Israel could be included in some extremely convoluted but consistent decision process, but upon reflection we find that this is not what we think is right.
In fact, I’m not sure, but it does seem suspicious that different people, applying the same heuristic, can reach conclusions as different as utilitarianism and egoism. I guess I have another heuristic which says that in situations like this, I shouldn’t do anything irreversible (which adopting a moral system can be, if it permanently changes your moral intuitions) until I have a better idea of what is going on.
I’m not sure there is all that much difference at the behavioral level between a utilitarian (who sometimes fails to act in accordance with his proclaimed values) and an egoist (who sometimes fails to notice that he could have ‘gotten away with it’).
I think your second heuristic is a good one, though I don’t personally know anyone so closed-minded that they would be unable to undo the adoption of a moral system and the changed intuitions that came with it.
I think the more sophisticated versions of egoism and utilitarianism have a tendency to meet in the middle anyway. Good egoists aren’t supposed to Prudently Predate (because of the effect it has on society—rather utilitarianish, that). Realistic utilitarians needn’t indulge in relentless self-sacrifice.