There’s a trollish answer to this point (that I somewhat agree with), which is to just say: okay, let’s adopt moral uncertainty over all of the philosophically difficult premises too. Say there’s only a 1% chance that raw intensity of pain is what matters morally, and a 99% chance that one of the alternatives holds: that you need to be self-reflective in certain ways to have qualia and suffer in a way that matters morally, that moral weight scales with cortical neuron count, or that only humans matter.
...and probably the math still works out very unfavorably, because even a 1% credence multiplied by the sheer scale involved leaves an enormous expected total.
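As a rough illustration of that arithmetic, here is a toy back-of-the-envelope sketch in Python. Every number in it (the credences, the per-fish moral weight, the count of fish) is a made-up placeholder rather than a figure from this post or any source; the only point is that a small probability times a very large scale still yields a large expected total.

```python
# Toy expected-value sketch under moral uncertainty.
# All numbers below are illustrative placeholders, NOT real estimates.

# Credences over the philosophically difficult premises:
p_raw_pain_matters = 0.01   # raw intensity of pain is what matters morally
p_alternatives     = 0.99   # self-reflection required / neuron scaling / only humans matter

# Hypothetical scale: order-of-magnitude count of fish affected per year.
n_fish = 1e12  # placeholder, roughly "trillions"

# Hypothetical moral weight of one fish's suffering, conditional on each view.
w_if_raw_pain_matters = 0.05   # some nonzero fraction of a human-equivalent unit
w_if_alternatives     = 0.0    # worst case for fish: they count for nothing

expected_weight_per_fish = (
    p_raw_pain_matters * w_if_raw_pain_matters
    + p_alternatives * w_if_alternatives
)

expected_total = expected_weight_per_fish * n_fish
print(f"Expected human-equivalent suffering units per year: {expected_total:.2e}")
# ~5e8 with these placeholders: even at a 1% credence the expected total is huge,
# which is the sense in which "the math still works out very unfavorably".
```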
I say trollish because a decision procedure like this strikes me as likely to swamp and overwhelm you with way too many different considerations pointing in all sorts of crazy directions, and to be just generally unworkable, so I feel like something has to be going wrong here.
Still, I do feel like the fact that the answer is non-obvious in this way, and does rely on philosophical reflection, means you can’t draw many deep, abiding conclusions about human empathy or the “worthiness” of human civilization (whatever that really means) from how we treat fish.
One objection to this method of dealing with moral uncertainty comes from this great post on the EA forum, which covers an old paper by Tyler Cowen arguing that once you give any weight at all to utilitarianism, you’re susceptible to well-known moral dilemmas like the repugnant conclusion, and (here comes the interesting claim) that there’s no escape from this, not even by invoking moral uncertainty:
A popular response in the Effective Altruist community to problems that seem to involve something like dogmatism or ‘value dictatorship’—indeed, the response William MacAskill gave when Cowen himself made some of these points in an interview—is to invoke moral uncertainty. If your moral view faces challenges like these, you should downweigh your confidence in it; and then, if you place some weight on multiple moral views, you should somehow aggregate their recommendations, to reach an acceptable compromise between ethical outlooks.
Various theories of moral uncertainty exist, outlining how this aggregation works; but none of them actually escape the issue. The theories of moral uncertainty that Effective Altruists rely on are themselves frameworks for commensurating values and systematically ranking options, and (as such) they are also vulnerable to ‘value dictatorship’, where after some point the choices recommended by utilitarianism come to swamp the recommendations of other theories. In the literature, this phenomenon is well-known as ‘fanaticism’.[10]
Once you let utilitarian calculations into your moral theory at all, there is no principled way to prevent them from swallowing everything else. And, in turn, there’s no way to have these calculations swallow everything without them leading to pretty absurd results. While some of you might bite the bullet on the repugnant conclusion or the experience machine, it is very likely that you will eventually find a bullet that you don’t want to bite, and you will want to get off the train to crazy town; but you cannot consistently do this without giving up the idea that scale matters, and that it doesn’t just stop mattering after some point.
So, what other options are there? Well, this is where Cowen’s paper comes in: it turns out, there are none. For any moral theory with universal domain where utility matters at all, either the marginal value of utility diminishes rapidly (asymptotically) towards zero, or considerations of utility come to swamp all other values.
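To make the “swamping” dynamic in the quoted passage concrete, here is a toy sketch of one standard aggregation rule, maximize expected choiceworthiness: weight each theory’s verdict by your credence in it and pick the option with the highest weighted sum. The theories, options, credences, and scores below are all invented for illustration; the point is just that a theory whose scores scale with the size of the stakes ends up dictating the choice even at low credence.

```python
# Toy "maximize expected choiceworthiness" aggregation.
# Theories, credences, and choiceworthiness scores are all made up for illustration.

credences = {
    "utilitarianism": 0.10,   # low credence in the theory with unbounded stakes
    "common_sense":   0.90,   # high credence in a bounded, commonsense morality
}

# Choiceworthiness of two options under each theory.
# Common-sense scores stay in a small bounded range; utilitarian scores scale
# with the (astronomically large) number of beings affected.
choiceworthiness = {
    "utilitarianism": {"intervene": 1e9, "ignore": 0.0},
    "common_sense":   {"intervene": -1.0, "ignore": 1.0},
}

def expected_choiceworthiness(option: str) -> float:
    return sum(credences[t] * choiceworthiness[t][option] for t in credences)

for option in ("intervene", "ignore"):
    print(option, expected_choiceworthiness(option))
# "intervene" wins by roughly 1e8 to 0.9: the utilitarian term swamps the sum
# no matter how the bounded theory votes, which is the "value dictatorship"
# (fanaticism) the quoted passage describes.
```

Nothing about this toy example depends on the particular numbers; as long as one theory’s scores can grow without bound while the others stay bounded, some stake is large enough for it to dominate any fixed nonzero credence.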
I feel like this decision procedure is difficult but necessary, in that I can’t think of any other decision procedure you could follow that won’t cause you to pass up on enormous amounts of utility, violate lots of deontological constraints, or fall short by whatever other standard you decide morality is made of on reflection. Surely, if you actually think some consideration has a 1% chance of being true, you should act on it?