I don’t think there’s any one “ideal consequentialist”. Some values may be more important to me than others, and I may want to self-modify to care less about some of those things in order to maximize the others, but no terminal values are themselves better or worse. My utility function is what it is.
The issue is that “freedom” and “challenge” aren’t really outcome preferences so much as action preferences or decision-theory preferences. The consequentialist doesn’t see a difference between me proposing that we trade two of my apples for three of your lemons and a dictator ordering that we make the same trade: the outcome is how many of which fruit each of us ends up with. And if the dictator is better at negotiating and at knowing our preferences than we are, the consequentialist suggests that we use the dictator and get over our preference for freedom (which was useful when dictators were bad but isn’t useful when dictators are good).
You can smuggle other moral systems into consequentialism (by, say, including features of the decision tree as part of the outcome), but it’s far cleaner to just discard consequentialism.
I have different subjective experiences when I am making my own decisions and when I am doing something I was ordered to do, even if it’s the same decision and action both times. This suggests “my having freedom” is a real quality of a state of the world, and that therefore I can have consequentialist preferences about its presence vs. its absence. Anything that can distinguish two states of the world is a valid thing consequentialists can have values about.
In the trolley problem, an agent will have different subjective experiences in the case where they do nothing and in the case where they murder someone. Most consequentialist prescriptions count such preferences as insignificant in light of the other outcomes.
I do think that most consequentialists go further, claiming that only the final world state should matter and not how you got there, but I agree with you that consequentialist tools are powerful enough to adopt systems that are typically seen as competing, like deontology, and the reverse is true as well. Because the tools are that flexible, I find conversations are easier if one adopts strict system definitions. If someone uses expected utility theory to pick an action, but their utility is based on the rules they followed in choosing actions, I don’t see the value in calling that consequentialism.
> The consequentialist doesn’t see a difference between me proposing that we trade two of my apples for three of your lemons and a dictator ordering that we make the same trade: the outcome is how many of which fruit each of us ends up with. And if the dictator is better at negotiating and at knowing our preferences than we are, the consequentialist suggests that we use the dictator and get over our preference for freedom
I think your quandary can be resolved by dividing your example into more than one consequence.
Example 1 has the consequences:
Dictator tells you what to do.
You end up with +3 lemons and −2 apples.
Example 2 has the consequences:
You think hard and make a decision.
You end up with +3 lemons and −2 apples.
I’m making up numbers here, but imagine you assign +10 utility to the consequence “end up with +3 lemons and −2 apples,” +1 utility to the consequence “think hard and make a decision,” and −3 to the consequence “dictator tells me what to do.” Then in Example 1 the two consequences have a combined utility of 7 (10 − 3), whereas in Example 2 they have a combined utility of 11 (10 + 1).
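To make that bookkeeping explicit, here is a minimal sketch in Python of the same accounting; the numbers, and the assumption that utilities simply add across consequences, are taken straight from the made-up example above.

```python
# Hypothetical additive utilities for each consequence, using the made-up numbers above.
utilities = {
    "end up with +3 lemons and -2 apples": 10,
    "think hard and make a decision": 1,
    "dictator tells me what to do": -3,
}

# Each example is a bundle of consequences; its value is the sum of their utilities.
example_1 = ["dictator tells me what to do", "end up with +3 lemons and -2 apples"]
example_2 = ["think hard and make a decision", "end up with +3 lemons and -2 apples"]

print(sum(utilities[c] for c in example_1))  # 7
print(sum(utilities[c] for c in example_2))  # 11
```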
> In the trolley problem, an agent will have different subjective experiences in the case where they do nothing and in the case where they murder someone. Most consequentialist prescriptions count such preferences as insignificant in light of the other outcomes.
I think one reason that subjective experiences don’t matter in the trolley problem is that the stakes are so high. In the trolley problem your desire not to be involved in someone’s death is nothing compared to the desire of six people to not die. If the stakes were much lower, however, your subjective experiences might matter.
For instance, imagine a toned down trolley problem where if you do nothing Alice and Bob will get papercuts on their thumbs, and if you pull a switch Clyde will get a papercut on his thumb. In that case the stakes are low enough that the unpleasant feeling you get from pulling the switch and injuring someone might merit some consideration.
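A minimal sketch of that comparison, with hypothetical numbers of my own (a death at −1000, a papercut at −1, and the agent’s discomfort at −1.5), shows why the discomfort is negligible in the original problem but can tip the balance in the papercut version:

```python
# Hypothetical utilities; the exact numbers are made up purely for illustration.
DEATH = -1000
PAPERCUT = -1
DISCOMFORT = -1.5  # the agent's unpleasant feeling from actively harming someone

# Standard trolley problem (six people at risk in total): five die if you do
# nothing, one dies (plus your discomfort) if you pull the switch.
print(1 * DEATH + DISCOMFORT > 5 * DEATH)        # True: pulling still wins by a huge margin

# Papercut version: Alice and Bob get papercuts if you do nothing, Clyde gets
# one (plus your discomfort) if you pull the switch.
print(1 * PAPERCUT + DISCOMFORT > 2 * PAPERCUT)  # False: the discomfort now tips the balance
```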
This is actually similar to how the preference for freedom is treated in real life. When the stakes are low, freedom is respected more often, even if it sometimes leads to some bad consequences; when the stakes are high (during wars, viral epidemics, etc.), freedom is restricted because other concerns outweigh it. (Of course, it goes without saying that in real life treating freedom like this tends to encourage corruption.)