Utility arguments often commit type errors by referring to contextual utility in one part of the argument and some sort of god's-eye, contextless utility in another. Sometimes the 'gotcha' of the problem hinges on exactly this equivocation.
Can you give an example?
Consider the various utilitarian fixes to classic objections to utilitarianism: https://www4.uwsp.edu/philosophy/dwarren/IntroBook/ValueTheory/Consequentialism/ActVsRule/FiveObjections.htm
In each case, the utilitarian wants to fix the issue by redrawing the buckets: what counts as utility, what counts as an action, what counts as a consequence, and the time window on each of them. But this sort of ontological sidestep proves too much. Taken as a general strategy, rather than as an ad hoc patch for each individual conundrum, it becomes clear that the resulting notion of utility doesn't constrain agents' actions at all, as discussed in Hoagy's post: https://www.alignmentforum.org/posts/yGuo5R9fgrrFLYWuv/when-do-utility-functions-constrain-1
Another way to see it is as a kind of motte-and-bailey: domain- or goal-specific utility is the motte, and the god's-eye view is the bailey.
Through this lens it becomes obvious that many population ethics problems, for instance, are just restatements of the sorites paradox or other puzzles about continua. You can also run this the other way and use 'utility' to turn any conflict between mathematical intuitions into a moral puzzle.
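To make the sorites parallel concrete, here is a minimal sketch (my own illustration, not from the post) of the Repugnant Conclusion in population ethics as a sorites-style chain: each step slightly lowers per-capita welfare but grows the population just enough that total utility strictly increases. No single step looks objectionable, yet the chain terminates at an enormous population with lives barely worth living, the same structure as adding one grain of sand at a time.

```python
# Illustrative sketch: the Repugnant Conclusion as a sorites-style chain.
# Names and numbers here are hypothetical, chosen only for the example.

def next_world(population, welfare, welfare_drop=0.1):
    """One 'harmless' step: lower per-capita welfare a bit, then grow
    the population just enough that total utility strictly increases."""
    new_welfare = welfare - welfare_drop
    # Smallest integer population whose total beats the old total.
    new_population = int(population * welfare / new_welfare) + 1
    return new_population, new_welfare

population, welfare = 1_000, 10.0
totals = [population * welfare]
while welfare > 0.2:  # stop before welfare reaches zero
    population, welfare = next_world(population, welfare)
    totals.append(population * welfare)

# Total utility rose at every single step...
assert all(later > earlier for earlier, later in zip(totals, totals[1:]))
# ...yet we end with a far larger population at near-zero welfare.
print(population, round(welfare, 1))
```

Each local comparison is made under one bucketing of "utility" (total, within a fixed step), while the repugnance of the endpoint is judged from the god's-eye view; the paradox trades on sliding between the two, which is the type error described above.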