So, you think that the inconvenience of surgery is more significant than the inconvenience of requiring dialysis, because the inconvenience of surgery will be borne by you but the inconvenience of dialysis will be borne by a stranger.
I don’t see anything wrong with that morality, but it isn’t mainstream consequentialism to value oneself that much more highly than others. You would also consider it moral to steal from strangers if there were no chance of getting caught, or to perform any other action where the ratio of benefit to yourself to harm to strangers was at least as favourable as the ratio involved in the kidney calculation, right?
I am fairly confident that you are mistaken about what mainstream consequentialism asserts; see Wikipedia, for instance.
I also think the original downvoting occurred not due to non-consequentialist thinking but due to the probably false claim that death is inevitable.
I think that I have struck precisely at the flaw in mainstream consequentialism that I was aiming at: it is an inconsistent position for somebody in good overall health not to donate a kidney and a lung, but to correct the cashier when they have received too much change.
Has there been a physics breakthrough of which I am unaware? Is there a way to reduce entropy in an isolated system? Because once there isn’t enough delta-T left for any electron to change state, everything even remotely analogous to being alive will have stopped.
I think that I have struck precisely at the flaw in mainstream consequentialism that I was aiming at: it is an inconsistent position for somebody in good overall health not to donate a kidney and a lung, but to correct the cashier when they have received too much change.
This depends on your preferences and, as such, is not generally true of all consequentialist systems.
If you generalize consequentialism to mean ‘whatever supports your preferences’, then you’ve expanded it beyond an ethical system to include most decision-making systems. We’re not discussing consequentialism in the general sense, either.
Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission) is one that will produce a good outcome, or consequence.
I’m rejecting the cases where what is ‘good’ or ‘morally right’ is defined as being whatever one prefers. That form of morality is exactly what would be used by Hostile AI, with a justification similar to “I wish to create as many replicating nanomachines as possible, therefore any action which produces fewer, like failing to consume refined materials such as ‘structural supports’, is immoral.” A system which makes literally whatever you want the only moral choice doesn’t provide any benefits over a lack of morality.
I suppose it is technically possible to believe that donating one out of two functioning kidneys is a worse consequence than living with no functioning kidneys. Of course, since the major component of donating a kidney is the surgery, and a similar surgery is needed to receive a kidney, there is either a substantial weighting towards oneself, or one would not accept a donated kidney if suffering from total renal failure. (Any significant weighting towards oneself makes the act of returning excess change immoral in strict consequentialism, assuming that the benefit to oneself is precisely equal to the loss to a stranger.)
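To make that parenthetical concrete, here is a minimal numeric sketch; the dollar figure, the utility numbers, and the candidate self-weights are all invented for illustration, and the only assumption carried over from the comment above is that the benefit of keeping the change exactly equals the stranger’s loss.

```python
# Minimal sketch (invented numbers): score 'keep the change' vs 'return it'
# under a rule that maximizes a weighted sum of consequences, where one's own
# utility gets weight self_weight and the stranger's gets weight 1.

def weighted_outcome(self_utility, stranger_utility, self_weight):
    return self_weight * self_utility + stranger_utility

for self_weight in (1.0, 1.1, 2.0):
    keep = weighted_outcome(self_utility=5, stranger_utility=-5, self_weight=self_weight)
    return_it = weighted_outcome(self_utility=0, stranger_utility=0, self_weight=self_weight)
    verdict = "keep" if keep > return_it else "return (or indifferent)"
    print(f"self_weight={self_weight}: keep={keep:+.1f}, return={return_it:+.1f} -> {verdict}")

# At self_weight = 1.0 the two actions tie; at any self_weight > 1, keeping the
# change scores strictly higher, so returning it is not the outcome-maximizing act.
```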
Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission) is one that will produce a good outcome, or consequence.
I’m with you here.
I’m rejecting the cases where what is ‘good’ or ‘morally right’ is defined as being whatever one prefers.
You’ve removed a set of consequentialist theories: consequentialist theories dependent on preferences fit the definition you give above. So you can’t say that consequentialism implies an inconsistency in the example you gave; you can only say that this restricted subset of consequentialism implies such an inconsistency.
On a side note:
A system which makes literally whatever you want the only moral choice doesn’t provide any benefits over a lack of morality.
This suggests to me that you don’t understand the preference-based consequentialist moral theory that is somewhat popular around here. I’m just warning you before you get into what might be fruitless debates.
I’ll bite: what benefit is provided by any moral system that defines ‘morally right’ to be ‘that which furthers my goals’, and ‘morally wrong’ to be ‘that which opposes my goals’, over the absence of a moral system, in which instead of describing those actions in moral terms I describe them in terms of personal preference?
If you prefer, you can substitute ‘the goals of the actor’ for ‘my goals’, but then you must concede that it is impossible for any actor to want to take an immoral action, only for an actor to be confused about what their goals are or mistaken about what the results of an action will be.
A moral system that is based on preferences is not equivalent to those preferences. Specifically, a moral system is what you need when preferences conflict, either with those of other entities (assuming you want your moral system to be societal) or with each other. From my point of view, a moral system should not change from moment to moment, though preferences may and often do. As an example: the rule “Do not Murder” is an attempt either to resolve a conflict between societal preferences and individual desires, or to impose more reflective decision-making on the kind of decisions you may make in the heat of the moment (or both). Assuming my desire to live by a moral code is strong, having a code that prohibits murder will stop me from murdering people in a rage, even though my preference at that moment is to do so, because my preference over the long term is not to.
Another purpose of a moral system is to off-load thinking to clear moments: you can reflectively, and with foresight, make general moral precepts that lead to better outcomes than you could manage deciding case by case at anything approaching the necessary speed.
It’s late at night and I’m not sure how clear this is.
First of all, if you desire to follow a moral code which prohibits murder more than you desire to murder, then you do not want to murder, any more than you want to buy a candy bar for $1 when you want the $1 more than you want the candy bar.
Now, consider the class of rules that require maximizing a weighted average or sum of everyone’s preferences. Within that class, ‘do not murder’ is a valid rule, considering that people wish to avoid being murdered and also to live in a world which is in general free from murder. ‘Do not seize kidneys’ is marginally valid. The choice ‘I choose not to donate my kidney’ is valid only if one’s own preference is weighted more highly than the preference of a stranger. The choice ‘I will try to find the person who dropped this, even though I would rather keep it’ is moral only if the preferences of a stranger are weighted equally to, or more heavily than, one’s own.
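As a rough sketch of that class of rules, the toy calculation below assigns invented preference intensities to the kidney case and the lost-property case and varies only the weight placed on one’s own preferences; the numbers and the function name are hypothetical, chosen purely to show how the self-weight decides which choices come out as required.

```python
# Toy model (invented numbers) of 'maximize a weighted sum of everyone's
# preferences': own preference is scaled by self_weight, the stranger's by 1.

def weighted_sum(self_pref, stranger_pref, self_weight):
    return self_weight * self_pref + stranger_pref

cases = {
    # (own preference change, stranger's preference change) for acting vs refraining
    "donate a kidney":      {"act": (-40, 100), "refrain": (0, 0)},
    "return lost property": {"act": (-1, 10),   "refrain": (0, 0)},
}

for self_weight in (1.0, 3.0, 20.0):
    print(f"self_weight = {self_weight}")
    for name, options in cases.items():
        act = weighted_sum(*options["act"], self_weight)
        refrain = weighted_sum(*options["refrain"], self_weight)
        verdict = "required" if act > refrain else "not required"
        print(f"  {name}: act={act:+.0f}, refrain={refrain:+.0f} -> {verdict}")

# With equal weights both acts are required; with a moderate self-weight the kidney
# donation drops out while returning lost property is still required (the combination
# discussed above); with a large enough self-weight neither act is required.
```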