I suppose I’d agree with you that folk ethics aren’t exactly deontological, though I’d have trouble calling them virtue ethics since I don’t understand virtue ethics well enough to draw any predictive power out of it (and I’m not sure it’s supposed to have predictive power in moral dilemmas).
My understanding is that you can look at virtue ethics as consequentialism that incorporates some important insights from game theory and Newcomb-like problems in decision theory (i.e. those where agents have some ability to predict each other's decisions). These concepts aren't incorporated via explicit understanding, which is still far from complete, but by observing people's actual intuitions and behaviors that were shaped by evolutionary processes (both biological and cultural), in which these game- and decision-theoretic issues have played a crucial role.
(Of course, such reduction to consequentialism is an arbitrary convention. You can reduce consequentialism and deontology to each other just by defining the objective function or the deontological rules suitably. I'm framing it that way just because you like consequentialism.)
Do you think this is specific to utilitarianism or more of a general issue with philosophy? David Hume didn’t seriously stock up on candles in case the sun didn’t rise the next morning, Objectivists probably do as many nice things for other people as anyone else, and economists don’t convert en masse even though most don’t have a good argument against stronger forms of Pascal’s Wager.
Of course it’s not specific to utilitarianism. It happens whenever some belief is fashionable and high-status but has seriously costly or inconvenient implications.
I generally agree with the rest of your comment. Ultimately, as long as we're talking about what happens in the real physical world rather than metaphysics, our reasoning is in some reasonable sense consequentialist. (Though I wouldn't go so far as to say "utilitarian," since this gets us into the problem of interpersonal utility comparison.)
I think the essence of our disagreements voiced in previous discussions is that I’m much more pessimistic about our present ability to subject our moral intuitions (as well as the existing social customs, norms, and institutions that follow from them) to general-purpose reasoning. Even many fairly simple problems in game and decision theory are still open, and the issues (most of which are deeply non-obvious) that come into play with human social interactions, let alone large-scale social organization, are hopelessly beyond our current understanding. At the same time, it’s hard to resist the siren call of plausible-sounding rationalizations for ideology and theories that are remote from reality but signal smarts and sophistication.