Do consequentialists generally hold as axiomatic that there must be a morally preferable choice (or conceivably multiple equally preferable choices) in a given situation? If so, could somebody point me to a deeper discussion of this axiom? (It probably has a name, which I don’t know.)
Not explicitly as an axiom AFAIK, but if you’re valuing states of the world, any choice you make will lead to some state, which means that unless your valuation is circular, the answer is yes.
Basically, as long as your valuation is VNM-rational, definitely yes. Utilitarians are a special case of this, and I think most consequentialists would adhere to that also.
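To make that concrete: here’s a minimal sketch (in Python, with hypothetical states and made-up utility numbers, none of which come from this thread) of why a complete valuation always yields a best option over any finite set of choices. Completeness is what makes the max well-defined; ties just mean multiple equally preferable choices.

```python
# A complete valuation assigns every outcome a real number, so over
# any finite set of options a maximizer always exists (possibly tied).
# The states and utility values below are made up for illustration.

utility = {"X": 3.0, "Y": 2.0, "Z": 1.0}  # hypothetical valuation

def best_options(options):
    """Return all options tied for the highest utility."""
    top = max(utility[o] for o in options)
    return [o for o in options if utility[o] == top]

print(best_options(["X", "Y", "Z"]))  # ['X'] -- a morally preferable choice exists
```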
What happens if my valuation is noncircular but incomplete? What if I only have a partial order over states of the world? Suppose I say “I prefer state X to Z, and don’t express a preference between X and Y, or between Y and Z.” I am not saying that X and Y are equivalent; I am merely refusing to judge.
My impression is that real human preferences routinely look like this; there are lots of cases people refuse to evaluate, or don’t evaluate consistently.
It seems like even with partial preferences, one can be consequentialist—if you don’t have clear preferences between outcomes, you have a choice that isn’t morally relevant. Or is there a self-contradiction lurking?
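For concreteness, here’s a minimal sketch of what such an incomplete-but-noncircular relation might look like, using the hypothetical states X, Y, Z from above (the single X-over-Z judgment is the only assumption):

```python
# An incomplete preference relation: only X > Z is judged; the pairs
# (X, Y) and (Y, Z) are left uncompared -- not declared equal, just
# unjudged. This is a partial order, not a circular one.

strict_prefs = {("X", "Z")}  # the only comparison this agent will make

def prefers(a, b):
    """True if a is strictly preferred to b, False if b is preferred
    to a, and None if the agent declines to judge."""
    if (a, b) in strict_prefs:
        return True
    if (b, a) in strict_prefs:
        return False
    return None  # incomparable: a refusal to judge, not indifference

print(prefers("X", "Z"))  # True
print(prefers("X", "Y"))  # None
```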
Suppose I say “I prefer state X to Z, and don’t express a preference between X and Y, or between Y and Z.” I am not saying that X and Y are equivalent; I am merely refusing to judge.
If the result of that partial preference is that you start with Z and then decline each trade in the sequence Z->Y->X, then you got Dutch booked: you end up staying at Z even though you strictly prefer X to Z.
On the other hand, maybe you want to accept the sequence Z->Y->X if you expect both trades to be offered, but decline each in isolation? But then your decision procedure is dynamically inconsistent: standing at Z and expecting both trade offers, you have to precommit to using a different algorithm to evaluate the Y->X trade than the one you will want to use once you have Y.
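Here’s a minimal sketch of the trap, assuming an agent that accepts a trade only when the offered state is strictly preferred to its current one (the states and the lone X-over-Z judgment are the hypothetical ones from upthread):

```python
# Evaluating each trade in isolation, the agent declines Z->Y
# (incomparable) and therefore never holds Y, the only state from
# which Y->X is on offer. It ends at Z despite strictly preferring X.

strict_prefs = {("X", "Z")}  # the hypothetical partial preference

def accepts_trade(current, offered):
    """Accept only if the offered state is strictly preferred."""
    return (offered, current) in strict_prefs

state = "Z"
for have, get in [("Z", "Y"), ("Y", "X")]:  # each offer requires holding `have`
    if state == have and accepts_trade(state, get):
        state = get

print(state)  # 'Z' -- the strict improvement to X is forgone
```

A far-sighted agent at Z that expects both offers would want to take Z->Y as a stepping stone to X, but this same isolated-evaluation rule tells it to refuse Y->X once it actually holds Y; that mismatch is the dynamic inconsistency.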
I think I see the point about dynamic inconsistency. It might be that “I got to state Y from Z” will alter my decision-making about Y versus X.
I suppose it means that my decision of what to do in state Y no longer depends purely on consequences, but also on history, at which point they revoke my consequentialist party membership.
But why is that so terrible? It’s a little weird, but I’m not sure it’s actually inconsistent or violates any of my moral beliefs. I have all sorts of moral beliefs about ownership and rights that are history-dependent, so it’s not as if history-dependence is some strange new thing.
You could have undefined value, but it’s not particularly intuitive, and I don’t think anyone actually advocates it as a component of a consequentialist theory.
Whether people actually do this in real life is a different story. I mean, it’s quite likely that humans violate the VNM model of rationality, but that could just be because we’re not rational.
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?
And for others, to put my original question another way: before we start comparing utilons or utility functions, insofar as consequentialists begin with moral intuitions and reason their way to the existence of utility, is one of their starting intuitions that all moral questions have correct answers? Or am I just making this up? And has anybody written about this?
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses.
Most people do have this belief. I think it’s a safe one, though. It follows from a substantive belief most people have, which is that agents are only morally responsible for things that are under their control.
In the context of a trolley problem, it’s stipulated that the person is being confronted with a choice—in the context of the problem, they have to choose. And so it would be blaming them for something beyond their control to say “no matter what you do, you are blameworthy.”
One way to fight the hypothetical of the trolley problem is to say “people are rarely confronted with this sort of moral dilemma involuntarily, and it’s evil to put yourself in a position of choosing between evils.” I suppose for consistency, if you say this, you should avoid jury service, voting, or political office.
Thanks! Do consequentialists kind of port the first axiom (completeness) from the VNM utility theorem, changing it from decision theory to meta-ethics?
Not explicitly (except in the case of some utilitarians), but I don’t think many would deny it. The boundaries between meta-ethics and normative ethics are vaguer than you’d think, but consequentialism is already sort of meta-ethical. The VNM theorem isn’t explicitly discussed that often (many ethicists won’t have heard of it), but the axioms are fairly intuitive anyway. However, although I don’t know enough about weird forms of consequentialism to know if anyone’s made a point of denying completeness, I wouldn’t be that surprised if that position exists.
To put that in one popular context: in the Trolley Switch and Fat Man problem, it seems like most people start with the assumption that there exists a right answer (or preferable, or best, whatever your terminology), and that it could never be the case that an agent will do the wrong/immoral/unethical thing no matter what he or she chooses. Am I right that this assumption exists?
Yes, I think it certainly exists. I’m not sure whether it’s universal, but I haven’t read a great deal on the subject yet, so I’m not sure I would know.