I am not sure what you mean by “not even wrong”. My interpretation of consequentialism is just following an algorithm designed to maximize a certain utility function. Of course, you can get different kinds of consequentialism by using different utility functions. In what sense is it “not even wrong”? Or do you have a narrower definition of consequentialism in mind?
“Not Even Wrong”: an idea so incredibly ill-founded that it can’t be tested, because it’s wrong in the presuppositions it necessitates and admits, and wrong in its definitions as well.
“2 + 2 = 3” is Wrong. “The sky is made of music” is Not Even Wrong.
Hi Eli. I understand the meaning of the phrase “not even wrong”; what I don’t understand is its application in this particular context.
Well, I’m obviously not Kaj, but I do think that consequentialism is maximizing a utility function over world-states. You could say that deontology, then, is having a preference ordering or utility function over actions your algorithm outputs, with little or no regard for the world-states those actions make likely. Virtue-ethics, then, could be taken as a preference ordering over kinds of people one can be, choosing actions based on which Kind of Person those actions provide evidence for your being (which basically makes it the Evidential Decision Theory weirdo of the bunch).
One way consequentialism could be Not Even Wrong is if we evaluate utility over world-lines, with the entire causal history and the final world-state both contributing as input variables to the preference function; at that point the line between judging consequences and judging actions blurs, and “only the consequences matter” stops picking out a distinct position.
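If it helps, here is a minimal runnable sketch of that three-way distinction, using a deterministic toy world model. Everything in it (the actions, the numbers, the WORLD_MODEL and EVIDENCE tables) is a hypothetical of mine for illustration, not anything from the comment above:

```python
# A toy sketch (mine, not Eli's) of the three preference structures:
# a consequentialist scores world-states, a deontologist scores actions,
# and a virtue ethicist scores the kinds of person an action evidences.
# All actions, utilities, and probabilities below are invented for illustration.

# Toy world model: each action deterministically leads to one world-state.
WORLD_MODEL = {
    "keep_promise": "friend_trusts_you",
    "break_promise": "friend_hurt_but_you_gain",
}

# Consequentialism: the utility function ranges over world-states.
STATE_UTILITY = {"friend_trusts_you": 1.0, "friend_hurt_but_you_gain": 0.4}

def consequentialist_choice(actions):
    return max(actions, key=lambda a: STATE_UTILITY[WORLD_MODEL[a]])

# Deontology: the preference ordering is over the actions themselves,
# with little or no regard for the world-states they make likely.
ACTION_UTILITY = {"keep_promise": 1.0, "break_promise": 0.0}

def deontologist_choice(actions):
    return max(actions, key=lambda a: ACTION_UTILITY[a])

# Virtue ethics: utility ranges over kinds of person; an action is scored
# by how strongly it evidences being a preferred kind of person.
PERSON_UTILITY = {"honest_person": 1.0, "opportunist": 0.2}
EVIDENCE = {  # P(kind of person | action), invented numbers
    "keep_promise": {"honest_person": 0.9, "opportunist": 0.1},
    "break_promise": {"honest_person": 0.2, "opportunist": 0.8},
}

def virtue_ethicist_choice(actions):
    def evidenced_kind_utility(action):
        return sum(p * PERSON_UTILITY[kind] for kind, p in EVIDENCE[action].items())
    return max(actions, key=evidenced_kind_utility)

# The world-line variant discussed above would instead define utility over
# (entire causal history, world-state) pairs rather than world-states alone.

actions = ["keep_promise", "break_promise"]
print(consequentialist_choice(actions))  # keep_promise
print(deontologist_choice(actions))      # keep_promise
print(virtue_ethicist_choice(actions))   # keep_promise
```

The virtue ethicist’s scoring via P(kind | action) is what makes it the “Evidential Decision Theory weirdo”: it treats the action as evidence about what kind of agent you are, rather than scoring its causal effects on the world.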
Well, I would describe the scenario you suggest as “consequentialism is wrong” rather than “consequentialism is not even wrong”. Moreover, I don’t see what it has to do with whatever the Greeks or Bentham or whoever meant when they wrote something.
Fair enough, then.
I didn’t answer this at first because I had difficulty putting my intuition into words. But here’s a stab at it:
Suppose that at first, people believe that there is a God who has defined some things as sinful and others as non-sinful. And they go about asking questions like “is brushing my teeth sinful or not?”, and this makes sense given their general set of beliefs. And a theologian could give a “yes” or “no” answer to that, which could be logically justified if you assumed some specific theology.
Then they learn that there is actually no God, but they still go about asking “is brushing my teeth sinful or not?”. And this no longer makes sense even as a question, because the definition of “sin” came from a specific theology which assumed the existence of God. And then a claim like “here’s a theory which shows that brushing teeth is always sinful” would not even be wrong, because it wouldn’t be making claims about any coherent concept.
Now consequentialists might say that “consequentialism is the right morality everyone should follow”, but under this interpretation this wouldn’t be any different from saying that “consequentialism is the right theory about what is sinful or not”.
Hi Kaj, thx for replying!
This makes sense as a criticism of versions of consequentialism which assume a “cosmic objective utility function”. I prefer the version of consequentialism in which the utility function is a property of your brain (a representation of your preferences). In this version there is no “right morality everyone should follow”, since each person has a slightly different utility function. Moreover, I clearly want other people to maximize my own utility function (so that my utility function gets maximized), but this is the only sense in which that is “right”. Also, in contexts where the difference between our utility functions is negligible (or where we have agreed by bargaining to use an average utility function of some sort), we sort of have a single morality that we follow. There is no “cosmic should” here, though; we’re just doing the thing that is rational given our preferences.
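For concreteness, the “average utility function of some sort” arrived at by bargaining could be written as a weighted sum of the individual utility functions, with the weights fixed by whatever the bargaining settles on. This gloss is mine, not the commenter’s; a Harsanyi-style aggregation is one standard way to cash it out:

$$U_{\text{shared}}(w) \;=\; \sum_i \lambda_i \, U_i(w), \qquad \lambda_i \ge 0, \quad \sum_i \lambda_i = 1.$$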
The “preferences version” of consequentialism is also what I prefer. I’ve never understood the (unfortunately much more common) “cosmic objective utility function” consequentialism, which, among other things, doesn’t account for nearly enough of the variability in preferences among different types of brains.