My guess is that the appropriate way to dissolve the conflict between utilitarian and deontological moral philosophy is to see deontological rules as heuristics. I think we could design an experiment in which utilitarians get emotional and inconsistent while deontologists come off as the sober thinkers, just by making it a situation where adopting a simple, consistent heuristic is superior to attempting to weigh unknown probabilities against unknown harms.
(The example I’ve seen is “I wear the safety belt whenever I drive a car, because unthinkingly wearing a safety belt is even less expensive than deciding each time whether to wear it”.)
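To make that tradeoff concrete, here is a toy expected-cost comparison in Python; every number in it (crash probability, injury cost, per-trip deliberation cost) is made up purely for illustration, not taken from anywhere.

```python
# Toy comparison of two policies for a repeated decision,
# using made-up numbers purely for illustration.
P_CRASH = 1e-5      # assumed probability of a crash on any given trip
INJURY_COST = 1e6   # assumed extra cost (arbitrary units) of an unbelted crash
BELT_COST = 1.0     # assumed nuisance cost of buckling up once
THINK_COST = 5.0    # assumed cost of deliberating about it each trip

# Policy 1: unthinking heuristic -- always wear the belt.
heuristic_cost = BELT_COST

# Policy 2: deliberate each trip, then wear it anyway, since the
# expected crash cost (P_CRASH * INJURY_COST = 10) exceeds BELT_COST.
deliberation_cost = THINK_COST + BELT_COST

print(heuristic_cost, deliberation_cost)  # 1.0 vs 6.0 per trip
# The heuristic wins whenever THINK_COST > 0: deliberation reaches
# the same decision every time, so its cost is pure overhead.
```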
Do you mean consistent in the sense of choosing by a fixed criterion, like “no torturing people”, or in the sense of choosing by a fixed criterion that is not exposed to certain losses, in terms of the agent’s preferences, against an adversary with identical knowledge, like “behavior positively correlated with that of an agent who knows and shares your preferences, is able to conditionalize on evidence, and decides to maximize its updateless expectation”?
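As a loose illustration of the latter sense, here is a toy Newcomb-style simulation in Python. The setup is my own caricature: the “adversary with identical knowledge” is modeled as a perfect predictor, the payoffs are the standard Newcomb ones, and both agents are reduced to fixed rules.

```python
# Toy Newcomb's problem: a predictor with full knowledge of the agent's
# decision rule fills the opaque box before the agent chooses.
def simulate(agent):
    prediction = agent()  # the predictor knows the rule exactly
    opaque = 1_000_000 if prediction == "one-box" else 0
    choice = agent()
    return opaque if choice == "one-box" else opaque + 1_000

# Causal expected-value reasoner: the boxes are already filled, so
# taking both always adds $1,000 -- which is predicted, losing big.
two_boxer = lambda: "two-box"

# Updateless-style agent: adopts the rule whose adoption maximizes
# expected payoff, given that the predictor conditions on that rule.
one_boxer = lambda: "one-box"

print(simulate(two_boxer))  # 1000
print(simulate(one_boxer))  # 1000000
```

The point of the sketch is only that the first kind of consistency fixes a criterion, while the second additionally requires that the criterion not be exploitable by something that can predict it.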
If the latter, as I understood your comment upon first reading, that seems to be contradicted by the claims of Eli’s circular altruism post, though he provides no citations. Also, the post says nothing explicit about whether people who call themselves utilitarians are better in practice at shutting up and multiplying, though I don’t see how lacking verbal beliefs such as “you can’t put a price on life” would make one more likely to act as though human lives are incomparably valuable.