In other news, over 91,000 people have died since midnight EST.
Most everyone dies sooner or later; artificially and knowingly making it sooner is where the ethical and legal issues start.
It’s where the legal issues start, certainly. But I would argue that ethically, what matters is how easily any of those 91,012 lives could have been saved. And many could have been saved very easily with malaria nets.
Donate page for anyone suddenly moved to do so
Just did, thanks for the reminder. Maybe we should put together an LW donations page there to link to, and encourage donation via peer pressure?
I’m pretty sure that it’s a hell of a lot easier to avoid shooting at people in a cinema than to earn enough money for the AMF to save a dozen lives. I do the former all the time—in fact, I’m doing that right now as I’m typing.
Yes, but it’s a lot harder for us as a society to prevent people from committing random acts of violence like that.
It’s much easier to directly save one person from malaria than to save one person from a mad gunman. Not just on the societal level, as RobertLumley stated, but as a simple individual action.
I guess I misunderstood his point, then. I took “ethically, what matters” to mean ‘what matters to the question how bad a guy the gunman was, compared to how bad a guy or gal the typical person is’. There was an action X the gunman could have done such that, if counterfactually that day the gunman had done X instead of what he actually did, at the end of the day there would have been 12 fewer dead people—namely, staying out of the cinema. There was no such obvious action in my case—at least, none which wouldn’t have left me in several thousand dollars of debt.
How do you know it wasn’t 91,011, or 91,013? :-)
I was just adding the 12 to the 91,000 figure I used. It was rhetorical.
Of course! [slaps forehead]
Seems like you have an ax to grind and so are getting completely off-topic. Time for me to disengage.
I’m curious as to why this has been downvoted. Isn’t tapping out generally considered the polite thing to do around here?
Hmm, I’m guessing that it is because your and EY’s stance (if I understand it right) is along the lines of “every life is sacred, every life is great”, a common sentiment on LW; that’s why your comment was upvoted and mine downvoted (probably misunderstood as “he has no valid argument to offer and so disguised this fact by tapping out”). Again, this is only a guess.
I certainly don’t think consciously, or act as if, “every life is sacred, every life is great”.
Nevertheless the people I care about personally, and myself, are still far more likely to die from some disease that is curable but is not eradicated due to lack of funds—including most or all causes of natural death—than due to the actions of madmen, gunmen, evil biology professors, or their tiny intersection.
Which is why when I read news like in this post, I think: “why am I wasting my time thinking about this?”
Hm. Well, FWIW, I don’t think it should have been.
None of them can be saved. Death can only be delayed.
By that logic, the death of the people who were shot by the student was only advanced.
Death coming sooner is, in itself, no more or less a moral issue than a train leaving the station early, before all the ticketed passengers have boarded.
The immoral act was shooting them, not failing to give them mosquito nets.
In case you’re wondering why everyone is downvoting you, it’s because pretty much everyone here disagrees with you. Most LWers are consequentialist. As one result of this, we don’t think there’s much of a difference between killing someone and letting them die. See this fantastic essay on the topic.
(Some of the more pedantic people here will pick me up on some inaccuracies in my previous sentence. Read the link above, and you’ll get a more nuanced view.)
I just read some of your comment history, and it looks like I wrote that a bit below your level. No offense intended. I’ll leave what I wrote above there for reference of people who don’t know.
No problem. You clearly communicated what you intended to, which is never a problem.
From the link, though:

3.3: What do you mean by a desire to avoid guilt?

Suppose an evil king decides to do a twisted moral experiment on you. He tells you to kick a small child really hard, right in the face. If you do, he will end the experiment with no further damage. If you refuse, he will kick the child himself, and then execute that child plus a hundred innocent people.

The best solution is to somehow overthrow the king or escape the experiment. Assuming you can’t, what do you do?
‘Die trying’, is one moral answer. ‘Gain permission from the child’ is another. ‘Perform an immoral act for the greater good’ is a third answer. I choose not to make the claim “In some cases you should non-consensually kick a small child in the face because hurting people is bad.”
‘Die trying’ doesn’t save the 101 people. If anything, I’d think about the TDT-related benefits of having precommitted to not giving in to blackmail, but in this particular example it’s far from clear that the king wouldn’t have offered you the deal in the first place had he been sure you were going to refuse it, though it would be in most similar situations I’m actually likely to face in real life.
Two of the three actions I suggested saved 102 people (assuming that you aren’t one of the 100 innocent people). Two of them are possible in the least convenient universe. Two of them are moral. Those three tradeoffs are the only ones I considered; or do you consider kicking a child in the face to be a moral act?
There is no benefit to committing not to give in to blackmail in this case, except that it might reduce the chances of the scenario happening. One of the advantages of noncompliance is that it reduces the chance of the scenario recurring: can you be blackmailed into kicking any number of children with the same 100 hostages?
I’m aware that my position is unpopular.
What proportion of consequentialist LWers have donated a kidney?
Why do you care? Do you plan to say that whatever fraction it is, it is too small, and this somehow discredits consequentialism itself?
I’m observing that most of them consider themselves immoral hypocrites. Exceptions apply to people medically unsuitable for kidney donation.
The same observation applies to most consequentialists with two typical lungs, and to those who have not agreed to donate their organs postmortem.
From a strict consequentialist viewpoint, it is a moral imperative to have kidneys forcibly harvested (or harvested under threat of imprisonment) from people who are suitable and provided to people without functioning kidneys. (Assuming that the harm of having a kidney forcibly harvested, or harvested under the threat of imprisonment, isn’t on the same scale as the benefit of having a functional kidney.)
The fact that the conclusions which are necessarily drawn from the premises of consequentialism are absurd is what discredits consequentialism as the primary driver of moral decisions. Anyone who considers themselves moral, but has all of their original organs in good working order, agrees that the primary driver of morality is something other than general consequentialism.
I’m observing that most of them consider themselves immoral hypocrites.

No, they don’t.
I like my kidney. I value my kidney more than I value someone else having my kidney unless they are a relative or close friend. I have impolite words to say to anyone who dares suggest that me keeping my kidney is immoral.
You don’t understand consequentialism. The straw man you criticize more closely resembles some form of utilitarianism—which I do reject as abhorrent.
So, you think that the inconvenience of surgery is more significant than the inconvenience of requiring dialysis, because the inconvenience of surgery will be borne by you but the inconvenience of dialysis will be borne by a stranger.
I don’t see anything wrong with that morality, but it isn’t mainstream consequentialism to value oneself that much more highly than others. You would also consider it moral to steal from strangers if there were no chance of getting caught, or to perform any other action where the ratio of benefit (to you) to damage (to strangers) was at least as favorable as the ratio involved in the kidney calculation, right?
I am fairly confident that you are mistaken about what mainstream consequentialism asserts; see Wikipedia, for instance.
I also think the original downvoting occurred not due to non-consequentialist thinking but due to the probably false claim that death is inevitable.
I think that I have struck precisely at the flaw in mainstream consequentialism that I was aiming at: it is an inconsistent position for somebody in good overall health to not donate a kidney and a lung, but to correct the cashier when they have received too much change.
Has there been a physics breakthrough of which I am unaware? Is there a way to reduce entropy in an isolated system? Because once there isn’t enough delta-T left for any electron to change state, everything even remotely analogous to being alive will have stopped.
This depends on your preferences and, as such, is not generally true of all consequentialist systems.
If you generalize consequentialism to mean ‘whatever supports your preferences’, then you’ve expanded it beyond an ethical system to include most decision-making systems. We’re not discussing consequentialism in the general sense, either.
Consequentialism is the class of normative ethical theories holding that the consequences of one’s conduct are the ultimate basis for any judgment about the rightness of that conduct. Thus, from a consequentialist standpoint, a morally right act (or omission) is one that will produce a good outcome, or consequence.

I’m rejecting the cases where what is ‘good’ or ‘morally right’ is defined as being whatever one prefers. That form of morality is exactly what would be used by Hostile AI, with a justification similar to “I wish to create as many replicating nanomachines as possible, therefore any action which produces fewer, like failing to consume refined materials such as ‘structural supports’, is immoral.” A system which makes literally whatever you want the only moral choice doesn’t provide any benefits over a lack of morality.
I suppose it is technically possible to believe that donating one of two functioning kidneys is a worse consequence than living with no functioning kidneys. Of course, since the major component of donating a kidney is the surgery, and a similar surgery is needed to receive a kidney, there is either a substantial weighting towards oneself, or one would not accept a donated kidney if suffering from total renal failure. (Any significant weighting towards oneself makes the act of returning excess change immoral in strict consequentialism, assuming that the benefit to oneself is precisely equal to the loss to a stranger.)
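To spell out the arithmetic of that parenthetical, a minimal sketch: the weight w and value v are illustrative symbols, not anything given in the thread. Suppose the agent weights their own welfare by w > 1 and a stranger’s by 1, and returning excess change worth v costs the agent v while giving the stranger exactly v.

```latex
% w > 1: hypothetical self-weight; v > 0: value of the excess change
\underbrace{-w\,v}_{\text{loss to self}} + \underbrace{v}_{\text{gain to stranger}}
  = (1 - w)\,v < 0 \quad \text{whenever } w > 1 .
```

Under a strictly maximizing weighted rule, any self-weight above 1 therefore makes returning the change a net weighted loss, which is exactly the ‘immoral’ verdict claimed above.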
I’m with you on that definition of consequentialism.
You’ve removed a set of consequentialist theories—consequentialist theories dependent on preferences fit the definition you give above. So you can’t say that consequentialism implies an inconsistency in the example you gave. You can say that this restricted subset of consequentialism implies such an inconsistency.
On a side note:

A system which makes literally whatever you want the only moral choice doesn’t provide any benefits over a lack of morality.

This suggests to me that you don’t understand the preference-based consequentialist moral theory that is somewhat popular around here. I’m just warning you before you get into what might be fruitless debates.
I’ll bite: what benefit is provided by any moral system that defines ‘morally right’ to be ‘that which furthers my goals’, and ‘morally wrong’ to be ‘that which opposes my goals’, over the absence of a moral system, in which instead of describing those actions in moral terms I describe those actions in terms of personal preference?
If you prefer, you can substitute ‘the goals of the actor’ for ‘my goals’, but then you must concede that it is impossible for any actor to want to take an immoral action, only for an actor to be confused about what their goals are or mistaken about what the results of an action will be.
A moral system that is based on preferences is not equivalent to those preferences. Specifically, a moral system is what you need when preferences conflict, either with those of other entities (assuming you want your moral system to be societal) or with each other. From my point of view, a moral system should not change from moment to moment, though preferences may and often do. As an example: the rule “Do not murder” is an attempt to resolve a conflict between societal preferences and individual desires, or to impose more reflective decision-making on the kind of decisions you may make in the heat of the moment (or both). Assuming my desire to live by a moral code is strong, having a code that prohibits murder will stop me from murdering people in a rage, even though my preference at that moment is to do so, because my preference over the long term is not to.
Another purpose of a moral system is to off-load thinking to clear moments. You can reflectively, and with foresight, make general moral precepts that lead to better outcomes than you could reach deciding case by case at anything approaching sufficient speed.
It’s late at night and I’m not sure how clear this is.
First of all, if you desire to follow a moral code which prohibits murder more than you desire to murder, then you do not want to murder, any more than you want to buy a candy bar for $1 when you want the $1 more than you want the candy bar.
Now, consider the class of rules that require maximizing a weighted average or sum of everyone’s preferences. Within that class, ‘do not murder’ is a valid rule, considering that people wish to avoid being murdered and also to live in a world which is in general free from murder. ‘Do not seize kidneys’ is marginally valid. The choice ‘I choose not to donate my kidney’ is valid only if one’s own preference is weighted more highly than the preference of a stranger. The choice ‘I will try to find the person who dropped this, even though I would rather keep it’ is moral only if the preferences of a stranger are weighted equal to or greater than one’s own.
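A toy sketch of that weighted-preference rule, for concreteness. The utility numbers and self-weights are hypothetical illustrations, not figures from the thread, and an act is treated as ‘permitted’ when its weighted value is non-negative, which simplifies the strictly maximizing rule.

```python
# Toy model of "maximize a weighted sum of everyone's preferences".
# All numbers are made up for illustration; only the sign patterns matter.

def weighted_value(own_change, stranger_changes, w_self):
    """Weighted sum: the actor's preference change counts w_self, strangers count 1."""
    return w_self * own_change + sum(stranger_changes)

# (change for the actor, changes for strangers)
acts = {
    "murder someone":       (+1,  [-100]),  # victim's loss dwarfs any gain
    "keep my own kidney":   (+10, [-20]),   # the stranger needs it more
    "return excess change": (-5,  [+5]),    # symmetric transfer of value
}

for w_self in (1.0, 3.0):  # 1.0 = weigh yourself like a stranger; 3.0 = self-favoring
    print(f"self-weight {w_self}:")
    for name, (own, others) in acts.items():
        verdict = "permitted" if weighted_value(own, others, w_self) >= 0 else "wrong"
        print(f"  {name}: {verdict}")
```

With equal weights, keeping the kidney comes out wrong and returning the change comes out permitted; with a self-weight of 3, both verdicts flip, while ‘do not murder’ survives either weighting, matching the tradeoffs described above.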
Personally, I would be suspicious of any ethical system in which perfection was so easy that a nontrivial fraction of adherents were perfect.
Are you suspicious of all ethical systems on general principle, or is it only the ones that can be easily followed that you suspect, or some other possibility?
The easily followed ones.
What makes you think that any system is easily followed in all common circumstances?
What makes you think, then, that any discussion of donation rates or ‘hypocrisy’ is of any interest or relevance?
Because donating a kidney IS fairly easy to do. So easy, in fact, that when I realized that I really, really didn’t want to, I had to come to terms with the fact that I needed to reevaluate either my morality or my character.
We must have different standards of what easy is, if donating a kidney strikes you as an easy way to help people as compared to, say, donating a thousand bucks to GiveWell’s top charity.
Which doesn’t answer the point. If ease and low standards don’t matter to evaluating a theory of ethics, then your questions about kidneys are just irrelevant rhetoric; if ease does matter in deciding whether a theory of ethics is correct, why do you implicitly seem to think that easiness is the default and that high standards (like in utilitarianism) need to be justified?
I’m not measuring a standard of ethics by looking at the people who support it. I’m saying that if the people who claim to support an ethical principle violate it without considering themselves either immoral or hypocrites, then they believe something different from what they think they believe.
And donating to charity until I become a charity case is unreasonable: if donating to charity is a moral obligation, at what point does it stop being a moral obligation?
Is ‘immoral’ the best word to use in this context? If you asked them, ‘do you think you are as moral as possible, or are doing the very most optimal things?’, I suspect most of them would answer ‘no’. Problem solved, apparently, if that was what you really meant all along...
You already explained at what point donating stops. As for ‘unreasonable’, I think that’s more rhetoric on your part, since I’m not sure where exactly in reason we can find the one true ethics which tells us to eat, drink, be merry, and stop donating well before that point. If it’s really unreasonable, I’d also point out, you’re going to be picking fights with an awful lot of religions, which didn’t seem to find such behavior unreasonable on the part of ethical paragons like saints and monks.
“Are you currently violating the moral principles you believe in?” would be the best phrasing.
From one standpoint, it becomes unreasonable when there is something else that I would rather do with that money. Coincidentally, that happens to be exactly the principle I use to decide how much I donate to charity.
There recently was a post on LW (to which I’ll provide a link as soon as I get behind a ‘proper’ computer rather than a smartphone) making the point that the expected number of lives you save is much higher if you donate $400 than if you donate a kidney, so if you’re indifferent between losing $400 and losing a kidney (and given what that post said about the inconvenience of the surgery for kidney explantation, I’d say $400 is even a conservative estimate) you’d better donate the former. (FWIW, I have agreed to donate my organs after death—more precisely, it’s opt-out in my country, but I know how to opt out and haven’t done so.)
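The shape of that comparison, as a sketch. Both effectiveness numbers below are placeholders assumed for illustration, not the figures from the linked post:

```python
# Expected-value comparison between donating $400 and donating a kidney.
# Both effectiveness estimates are hypothetical placeholders, NOT taken
# from the LW post referenced above.
COST_PER_LIFE_USD = 2_000    # assumed cost for a top charity to save one life
donation_usd = 400

ev_money = donation_usd / COST_PER_LIFE_USD  # expected lives saved by the cash
ev_kidney = 0.1                              # assumed expected lives saved per donated kidney

# If you're indifferent between the two personal costs, pick the larger benefit.
better = "donate the $400" if ev_money > ev_kidney else "donate the kidney"
print(f"money: {ev_money:.2f} lives, kidney: {ev_kidney:.2f} lives -> {better}")
```

The argument only needs the ordering, not the exact values: whenever the expected lives saved by the cash exceed those saved by the kidney, and the two personal costs feel interchangeable, the cash donation dominates.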
Oh, so I suppose you have neither $400 nor any redundant organs, then? I was ignoring the hypocrisy of not being impoverished, because not having any significant amount of money has larger long-term effects than not having an extra kidney.
Death coming sooner is, in itself, no more or less a moral issue than a train leaving the station early, before all the ticketed passengers have boarded.

So you wouldn’t mind dying tomorrow rather than in forty years, would you?