Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission) is one that will produce a good outcome, or consequence.
I’m with you here.
I’m rejecting the cases where what is ‘good’ or ‘morally right’ is defined as being whatever one prefers.
You’ve removed a set of consequentialist theories—consequentialist theories dependent on preferences fit the definition you give above. So you can’t say that consequentialism implies an inconsistency in the example you gave. You can only say that this restricted subset of consequentialism implies such an inconsistency.
On a side note:
A system which makes literally whatever you want the only moral choice doesn’t provide any benefits over a lack of morality.
This suggests to me that you don’t understand the preference based consequentialist moral theory that is somewhat popular around here. I’m just warning you before you get into what might be fruitless debates.
I’ll bite: what benefit is provided by any moral system that defines ‘morally right’ as ‘that which furthers my goals’ and ‘morally wrong’ as ‘that which opposes my goals’, over the absence of a moral system, in which I describe those actions in terms of personal preference rather than in moral terms?
If you prefer, you can substitute ‘the goals of the actor’ for ‘my goals’, but then you must concede that it is impossible for any actor to want to take an immoral action, only for an actor to be confused about what their goals are or mistaken about what the results of an action will be.
A moral system that is based on preferences is not equivalent to those preferences. Specifically, a moral system is what you need when preferences conflict, either with those of other entities (assuming you want your moral system to be societal) or with each other. From my point of view, a moral system should not change from moment to moment, though preferences may and often do. As an example: the rule “Do not murder” is an attempt either to resolve a conflict between societal preference and individual desire, or to impose more reflective decision-making on the kinds of choices you might otherwise make in the heat of the moment (or both). Assuming my desire to live by a moral code is strong, a code that prohibits murder will stop me from murdering people in a rage, even though my preference at that moment is to do so, because my preference over the long term is not to.
Another purpose of a moral system is to off-load thinking to clear moments. Reflectively and with foresight, you can adopt general moral precepts that lead to better outcomes than you could reach deciding case by case at anything approaching sufficient speed.
It’s late at night and I’m not sure how clear this is.
First of all, if you desire to follow a moral code which prohibits murder more than you desire to murder, then you do not want to murder, any more than you want to buy a candy bar for $1 if you want the $1 more than you want the candy bar.
Now, consider the class of rules that require maximizing a weighted average or sum of everyone’s preferences. Within that class, ‘do not murder’ is a valid rule, considering that people wish both to avoid being murdered and to live in a world which is in general free from murder. ‘Do not seize kidneys’ is marginally valid. The choice ‘I choose not to donate my kidney’ is valid only if one’s own preference is weighted more heavily than the preference of a stranger. The choice ‘I will try to find the person who dropped this, even though I would rather keep it’ is moral only if the preferences of a stranger are weighted equally to or more heavily than one’s own.
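The weighted-sum rule above can be sketched in a few lines of Python. The specific weights and preference numbers here are illustrative assumptions, not anything from the original discussion; the point is only that the same pair of actions flips between ‘moral’ and ‘immoral’ depending on the weights chosen:

```python
# Sketch of the weighted-preference rule: an action is 'moral' (within
# this class of rules) if it maximizes the weighted sum of everyone's
# preferences. Weights and preference values below are made-up examples.

def action_score(preferences, weights):
    """Weighted sum of each person's preference for an action
    (positive = satisfied, negative = frustrated)."""
    return sum(w * p for w, p in zip(weights, preferences))

# Dropped-wallet example: the finder mildly prefers keeping it,
# the stranger strongly prefers getting it back.
# Order of entries: [finder, stranger]
keep_it = [+1, -3]
return_it = [-1, +3]

# Equal weighting: returning scores higher, so it is the 'moral' choice.
equal = [1, 1]
assert action_score(return_it, equal) > action_score(keep_it, equal)

# Weight one's own preference heavily enough and keeping wins instead,
# showing the verdict depends entirely on how the weights are set.
selfish = [4, 1]
assert action_score(keep_it, selfish) > action_score(return_it, selfish)
```

This makes the final point of the comment concrete: the rule itself does no moral work until the weights are fixed, and ‘I may keep what I find’ versus ‘I must return it’ is purely a question of whose preferences count for how much.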