It seems implausible to me that there is any ethical decision procedure that human beings (rather than idealized perfectly informed and perfectly rational super-beings) could follow that wouldn’t be collectively self-defeating in this sense. Do you (or Parfit) have an example of one that isn’t?
Anyway, I don’t see this as a huge problem. First, I’m pretty sure I’m never going to live in a world (or even a close approximation of one) where everyone adheres to my moral beliefs perfectly. So I don’t see why the state of such a world should be relevant to my moral beliefs. Second, my moral beliefs are ultimately beliefs about which consequences—which states of the world—are best, not beliefs about which actions are best. If there were good evidence that acting in a certain manner (in the aggregate) wasn’t effective at producing morally better states of affairs, then I wouldn’t advocate acting in that manner.
But I am not convinced that following a cosmopolitan decision procedure (or advocating that others follow one) would empirically be an effective means to achieving my decidedly non-cosmopolitan moral ends. Perhaps if everyone in the world mimicked my moral behavior (or did what I told them) it would be, but alas, that is not the case.
Utilitarianism is not collectively self-defeating, but then there would be no room for non-cosmopolitan moral ends.
(rather than idealized perfectly informed and perfectly rational super-beings)
This part shouldn’t make a difference. If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility. A theory that implies it would be best to follow some other theory is termed “indirectly individually self-defeating”. Parfit concludes, and I agree with him here, that this is not a reason to reject U. U doesn’t imply that one ought to actively apply utilitarian reasoning; it only requires that you bring about the best consequences, regardless of how that happens.
If humans are too irrational to directly follow utilitarianism (U), then U implies they should come up with easier/less dangerous rules of thumb that will, on average, produce the most utility.
This is a pretty dubious move. Why think there are easy-to-follow rules that will maximize aggregate utility? And even if such rules exist, how would we go about discovering them, given that the reason we need them in the first place is our inability to fully predict the consequences of our actions and the utilities attached to them?
Do you just mean that we should pick easy-to-follow rules that tend to produce more utility than other sets of easy-to-follow rules (as far as we can figure out), but not necessarily ones that maximize utility relative to all possible patterns of behavior? In that case, I don’t see why your utilitarianism isn’t collectively self-defeating according to the definition you gave. A world in which everyone acts according to such rules will not be as close to the utilitarian paradise as is empirically possible. After all, it seems entirely empirically possible for people to accurately recognize particular situations where actions contrary to the rules would produce higher utility.