One objection to this method of dealing with moral uncertainty comes from this great post on the EA forum. It covers an old paper by Tyler Cowen which argues that once you give any consideration to utilitarianism, you're susceptible to moral dilemmas like the repugnant conclusion, and (here comes the interesting claim) that there's no escape from this, not even by invoking moral uncertainty:
A popular response in the Effective Altruist community to problems that seem to involve something like dogmatism or ‘value dictatorship’—indeed, the response William MacAskill gave when Cowen himself made some of these points in an interview—is to invoke moral uncertainty. If your moral view faces challenges like these, you should downweight your confidence in it; and then, if you place some weight on multiple moral views, you should somehow aggregate their recommendations, to reach an acceptable compromise between ethical outlooks.
Various theories of moral uncertainty exist, outlining how this aggregation works; but none of them actually escape the issue. The theories of moral uncertainty that Effective Altruists rely on are themselves frameworks for commensurating values and systematically ranking options, and (as such) they are also vulnerable to ‘value dictatorship’, where after some point the choices recommended by utilitarianism come to swamp the recommendations of other theories. In the literature, this phenomenon is well-known as ‘fanaticism’.[10]
Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.
So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values.
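To make the swamping worry concrete, here is a toy sketch, with my own made-up numbers rather than anything from Cowen or the post, of one standard aggregation rule for moral uncertainty: maximizing expected choiceworthiness. Even with only 10% credence in total utilitarianism and 90% in some bounded rival theory, the utilitarian term ends up deciding the choice once the stakes get large enough.

```python
# A minimal illustration (toy numbers, not from Cowen or the quoted post) of how
# "maximize expected choiceworthiness" aggregation can be swamped by an unbounded
# theory: 10% credence in total utilitarianism still dominates the verdict once
# the population at stake is large.

def expected_choiceworthiness(credences, scores):
    """Weight each theory's choiceworthiness score by the credence placed in that theory."""
    return sum(credences[theory] * scores[theory] for theory in credences)

credences = {"utilitarianism": 0.1, "bounded_rival": 0.9}

for population in (10, 10_000, 10_000_000):
    # Option A: create `population` lives barely worth living (repugnant-conclusion style).
    # Utilitarian choiceworthiness grows linearly with population; the bounded rival
    # theory assigns it a fixed mild penalty that does not scale.
    option_a = {"utilitarianism": 1.0 * population, "bounded_rival": -50.0}
    # Option B: a modest improvement that both theories endorse equally.
    option_b = {"utilitarianism": 100.0, "bounded_rival": 100.0}

    ec_a = expected_choiceworthiness(credences, option_a)
    ec_b = expected_choiceworthiness(credences, option_b)
    winner = "A" if ec_a > ec_b else "B"
    print(f"population={population:>10,}  EC(A)={ec_a:>12,.1f}  EC(B)={ec_b:.1f}  -> choose {winner}")
```

This is just the quoted dichotomy in miniature: if the utilitarian term can grow without bound, it eventually dictates every verdict regardless of how little credence you give it; and the only way to stop that is to cap it or let its marginal value taper towards zero, which amounts to saying that scale stops mattering after some point.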