I didn’t answer this at first because I had difficulty putting my intuition into words. But here’s a stab at it:
Suppose that at first, people believe that there is a God who has defined some things as sinful and others as non-sinful. And they go about asking questions like “is brushing my teeth sinful or not?”, and this makes sense given their general set of beliefs. A theologian could give a “yes” or “no” answer to that, which could be logically justified if you assumed some specific theology.
Then they learn that there is actually no God, but they still go about asking “is brushing my teeth sinful or not?”. And this no longer makes sense even as a question, because the definition of “sin” came from a specific theology which assumed the existence of God. A claim like “here’s a theory which shows that brushing teeth is always sinful” would then not even be wrong, because it wouldn’t be making claims about any coherent concept.
Now consequentialists might say that “consequentialism is the right morality everyone should follow”, but on this interpretation that wouldn’t be any different from saying that “consequentialism is the right theory about what is sinful or not”.
Hi Kaj, thx for replying!
This makes sense as a criticism of versions of consequentialism which assume a “cosmic objective utility function”. I prefer the version of consequentialism in which the utility function is a property of your brain (a representation of your preferences). In this version there is no “right morality everyone should follow”, since each person has a slightly different utility function. Moreover, I clearly want other people to maximize my own utility function (so that it gets maximized), but this is the only sense in which doing so is “right”. Also, in contexts in which the difference between our utility functions is negligible (or we have agreed, through bargaining, to use an average utility function of some sort), we sort of have a single morality that we follow; but there is no “cosmic should” here, we’re just doing the thing that is rational given our preferences.
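To make the “bargaining” point a bit more concrete, here is a rough sketch of what I have in mind (just an illustration: the agents, outcomes, and equal weights are made up, and a utility function is treated as nothing more than a mapping from outcomes to numbers):

```python
from typing import Dict

Outcome = str
UtilityFunction = Dict[Outcome, float]

# Two hypothetical agents with slightly different preferences over the same outcomes.
alice: UtilityFunction = {"fund_parks": 0.9, "fund_roads": 0.4}
bob: UtilityFunction = {"fund_parks": 0.5, "fund_roads": 0.8}

def best_outcome(u: UtilityFunction) -> Outcome:
    """The outcome a single agent would choose: maximize its own utility function."""
    return max(u, key=u.get)

def bargained_utility(u1: UtilityFunction, u2: UtilityFunction,
                      w1: float = 0.5, w2: float = 0.5) -> UtilityFunction:
    """A weighted average of two utility functions, standing in for the
    'average utility function of some sort' reached by bargaining."""
    return {o: w1 * u1[o] + w2 * u2[o] for o in u1}

# Each agent alone maximizes their own function...
print(best_outcome(alice))  # fund_parks
print(best_outcome(bob))    # fund_roads

# ...but after bargaining they both act on a single shared function.
shared = bargained_utility(alice, bob)
print(best_outcome(shared))  # fund_parks (0.7 vs 0.6 under equal weights)
```

Each agent alone would pick a different outcome, but once they act on the bargained function they behave as if they shared a single morality, even though nothing “cosmic” is involved.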
The “preferences version” of consequentialism is also the one I prefer. I’ve never understood the (unfortunately much more common) “cosmic objective utility function” consequentialism, which, among other things, doesn’t account for nearly enough of the variability in preferences among different types of brains.