Hi Kaj, thanks for replying!
This makes sense as a criticism of versions of consequentialism that assume a “cosmic objective utility function”. I prefer the version of consequentialism in which the utility function is a property of your brain (a representation of your preferences). In this version there is no “right morality everyone should follow”, since each person has a slightly different utility function. Moreover, I clearly want other people to maximize my own utility function (so that my utility function gets maximized), but this is the only sense in which that is “right”. Also, in contexts where the difference between our utility functions is negligible (or where we have agreed on some sort of average utility function by bargaining), we do in effect have a single morality that we follow. But there is no “cosmic should” here; we’re just doing the thing that is rational given our preferences.
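(A minimal sketch of what “agreeing on an average utility function by bargaining” could mean formally; the weights and the Nash product below are my illustration, not something the comment commits to. If agents $1, \dots, n$ have utility functions $u_1, \dots, u_n$, one way to model the agreement is that bargaining settles on weights $w_i \ge 0$ with $\sum_i w_i = 1$, after which everyone acts as if maximizing the single aggregate function

$$U(x) = \sum_{i=1}^{n} w_i\, u_i(x).$$

The Nash bargaining solution is one standard way to pick the outcome without choosing weights by hand: given disagreement payoffs $d_i$, it selects the feasible $x$ that maximizes the product $\prod_{i=1}^{n} \big(u_i(x) - d_i\big)$.)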
The “preferences version” of consequentialism is also the one I prefer. I’ve never understood the (unfortunately much more common) “cosmic objective utility function” consequentialism, which, among other things, doesn’t account for nearly enough of the variability in preferences among different types of brains.