Is there a good rebuttal to why we don’t donate 100% of our income to charity? I mean, tribalism and near/far thinking are OK as explanations, but is there a good post-hoc justification?
100%? Well, your future charitable donations will be markedly curtailed after you starve to death.
Some possible arguments against donating everything to charity. Personally, I think it is normal to donate around 1 per cent of one’s income to charity.
Some people can’t survive on less, or have other obligations that look like charity (e.g. child support).
We would have less incentive to earn more.
It would hurt our economy, which is consumer-driven. Someone must buy iPhones.
I do many useful things intended to help other people, but I need pleasures to renew my commitment, so I spend money on myself.
I pay taxes, and that is like charity.
I know best how to spend money on my own needs.
Human psychology is about summing different values in one brain, so I can spend only part of my energy on charity.
If I buy goods, my money goes to working people, so it is like charity for them. If I stop buying goods, they will become jobless and will need charity money to survive. So the more I give to charity, the more people need it.
If you over-donate, you could flip-flop and start to hate the whole thing, especially if you find that your money was not spent effectively.
Donating 100 per cent will make you look crazy in the eyes of some, and their own will to donate will diminish.
If you spend more on yourself, you can ask for a higher salary, and as a result earn more and donate more. Only a homeless, jobless person could donate 100 per cent.
“Don’t wanna”, shading into “Make Me” if they press. Anyone trying to tell you what to do isn’t your Real Dad! (Unless they are, in which case maybe try and figure out what’s going on.)
I mean, Laffer Curve-type reasons if nothing else.
A mother who followed that logic would push her own baby in front of a trolley to save five random strangers. Ask yourself if that is the moral framework you really want to follow.
Hey, look here, you totally should. All that emotional empathy just gets in the way.
Read it already. Let’s be clear: you think the mother should push her baby in front of a trolley to save five random strangers? If so, why? If not, why not? I don’t consider this a loaded question; it falls directly out of the utilitarian calculus and assumed values that lead to “donate 100% to charities.”
[Let’s assume the strangers are also same-age babies, so there are no weasel ways out (“the baby has more life ahead of it”, etc.)]
Devil’s advocate: Humanity is in a Malthusian trap where mothers who prefer their child to five strangers are better able to pass on their genes, so that’s the sort of behavior that ends up universal. That mechanism of course produced all our preferences, but once we stop treating it as sacred, mothers everywhere can at least debate preserving our current preferences versus saving more people, and Policy Debates Should Not Appear One-Sided.
OK, sure. But this does not answer the question of why we should change our morals.
So that our morals become invariant under change of context, in this case, which person’s mother you happen to be.
… and why should that matter at all?
It seems we’ve now reduced to a value that is both abstract and arbitrary.
There are a lot of conflicting aspects to consider here outside of a vacuum. Discounting the unknown unknowns, which could factor heavily here since it’s an emotionally biasing topic: you’ve got the fact that the baby is going to be raised by a presumably attentive mother, as opposed to the five who wound up in that situation once, which suggests at least some increased risk of their falling victim to such a situation again. Then you have the psychological damage to the mother, which is going to be even greater because she had to do the act herself. Then you’ve got the fact that a child raised by a mother who is willing to do it has a greater chance of being raised in such a way as to have a net positive impact on society. Then you have the greater potential for preventing the situation in the future, thanks to the increased visibility of the higher death toll. I’m certain there are more aspects I’m failing to note.
But, if we cut to what I believe is the heart of your point, then yes, she absolutely should. Let’s scale the problem up for a moment. Say instead of 5 it’s 500. Or 5 million. Or the entire rest of humanity aside from the mother and her baby. At what point does sacrificing her child become the right decision? Really, this boils down to the idea of shut up and multiply.
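To make the scaling question concrete, here’s a toy version of the multiplication (a sketch of my own; the weight the mother places on her child is an invented illustrative parameter, since a strict equal-weights utilitarian would set it to 1):

```python
# Toy "shut up and multiply" calculation. If the mother weights her child's
# life w times as heavily as a stranger's, then a weighted utilitarian
# calculus says to sacrifice the child exactly when n_strangers > w.

def should_sacrifice(n_strangers: int, child_weight: float) -> bool:
    """Compare the weighted utility of the two outcomes (one life per head)."""
    utility_saving_strangers = n_strangers * 1.0  # each stranger weighted 1.0
    utility_saving_child = child_weight * 1.0
    return utility_saving_strangers > utility_saving_child

# With an (arbitrary) weight of 1000, the calculus flips somewhere
# between 500 and 5 million strangers:
for n in (5, 500, 5_000_000):
    print(n, should_sacrifice(n, child_weight=1000.0))
# 5 False / 500 False / 5000000 True
```

With equal weights (child_weight=1.0), even the original five strangers tip the scale; the only question is what weight, if any, the calculus should allow.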
Never, in my opinion. Put every other human being on the tracks (excluding other close family members to keep this from being a Sophie’s choice “would you rather...” game). The mother should still act to protect her child. I’m not joking.
You can rationalize this post facto by valuing the kind of society where mothers are ready, and indeed encouraged, to sacrifice their own kids in order to save more lives, versus the world where mothers simply always protect their kids no matter what.
But I don’t think this is necessary; you don’t need to validate it on utilitarian grounds. Rather, it is perfectly okay for one person to value some lives more than others. We shouldn’t want to change this, IMHO. And I think the OP’s question about donating 100% to charity, to their own detriment, is symptomatic of the problems that arise from utilitarian thinking. After all, if the OP were not experiencing an internal conflict between his own morals and supposedly rational utilitarian thinking, he wouldn’t have asked the question...
I think it’s okay for one person to value some lives more than others, but not that much more. (“Okay”—not ideal in theory, maybe a good thing given other facts about reality, I wouldn’t want to tear it down for multiple reasons.)
Btw, you say the mother should protect her child, but it’s okay to value some lives more than others—these seem in conflict. Do you in fact think it’s obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?
We’ve now delved beyond the topic—which is okay, I’m just pointing that out.
I think it’s okay for one person to value some lives more than others, but not that much more.
I’m not quite sure what you mean by that. I’m a duster, not a torturer, which means that there are some actions I just won’t do, no matter how many utilons get multiplied on the other side. I consider it okay for one person to value another to such a degree that they are literally willing to sacrifice every other person to save the one, as in the mother-and-baby trolley scenario. Is that what you mean?
I also think that these scenarios usually devolve into a “would you rather...” game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.
Btw, you say the mother should protect her child, but it’s okay to value some lives more than others—these seem in conflict. Do you in fact think it’s obligatory to value some lives more than others, or do you think the mother is permitted to protect her child, or?
If I can draw a political analogy, which may even be more than an analogy: moral decision-making via utilitarian calculus, with equal weights assumed for every (sentient, human) life, is analogous to the central planning of communism: from each what they can provide, to each what they need. Maximize happiness. With perfectly rational decision-making and everyone sharing common goals, this should work. But of course in reality we end up, at best, with an inefficient distribution of resources due to failures in planning or execution. The pragmatic reality is even worse: people on the whole don’t work altruistically for the betterment of society, and so you end up with nepotistic, kleptocratic regimes that exploit the wealth of the country for the self-serving purposes of those on top.
Recognizing and embracing the fact that people have conflicting moral values (even if restricted only to the weights they place on others’ happiness) is akin to the enlightened self-interest of capitalism. People are given the agency to seek benefits for themselves and those they care about, and societal prosperity follows. Of course, in reality all non-libertarians know that there are a wide variety of market failures, and achieving maximum happiness requires careful crafting of incentive structures. It is quite easy to show, mathematically and historically, that restricting yourself to multi-agent games with Pareto-optimal outcomes (capitalism with good incentives) cuts you off from some otherwise achievable outcomes. Central planning got us to the Moon. Non-profit-maximizing thinking is getting SpaceX to Mars. It’s more profitable to mitigate the symptoms of AIDS with daily antiretroviral drugs than to cure the disease outright. Etc. But nevertheless it is generally capitalist societies that experience the most prosperity, whether measured by quality of life, technological innovation, material wealth, or happiness surveys.
To finally circle back to your question: I’m not saying it is right or wrong that the mother cares for her child to the exclusion of literally everyone else, or even that she SHOULD think this way, although I suspect that is a position I could argue for. What I’m saying is that she should embrace the moral intuitions her genes and environment have impressed upon her, and not try to fight them via System 2 thinking. And if everyone does this, we can still live in a harmonious and generally good society, even though our neighbors don’t exactly share our values (I value my kids, they value theirs).
I’ve previously been exposed to the writings and artwork of peasants who lived through the harshest years of Mao’s Great Leap Forward, and it is remarkable how similar their thoughts, concerns, fears, and introspections can be to those of people who struggle with LW-style “shut up and multiply” utilitarianism. For example, I spoke with someone at a CFAR workshop who has had real psychological issues for a decade over the internal conflict between the selfless “save the world” work he feels he SHOULD be doing, or doing more of, and the basic fulfillment of Maslow’s hierarchy that leaves him feeling guilty and thinking he’s a bad person.
My own opinion and advice? Work your way up Maslow’s hierarchy of needs using just your ethical intuitions as a guide. Once you have the luxury of being at the top of the pyramid, then you can start to worry about self-actualization by working to change the underlying incentives that guide the efforts of our society and create our environmentally driven value functions in the first place.
I think I basically agree with the “embrace existing moral intuitions” bit.
Unpacking my first paragraph in the other post, you might get: I prefer people to have moral intuitions that value their kids equally with others, but if they value their own kids a bit more, that’s not terrible; our values are mostly aligned; I expect optimisation power applied to those values will typically also satisfy my own values. If they value their kids more than literally everyone else, that is terrible; our values diverge too much; I expect optimisation power applied to their values has a good chance of harming my own.
I also think that these scenarios usually devolve into a “would you rather...” game that is not very illuminating of either underlying moral values or the validity of ethical frameworks.
Can you expand on this a bit? (Full disclosure: I’m still relatively new to Less Wrong, and still learning quite a bit that I think most people here have a firm grip on.) I would think these scenarios illuminate a great deal about our underlying moral values, if we assume the answers are honest and that people are actually bound by their morals (or are at least answering as though they are, which I believe is implicit in the question).
For example, I’m also a duster, and that “would you rather” taught me a great deal about my morality. (Although, to be fair, what it taught me is certainly not what was intended: namely, that my moral system is not strictly multiplicative but logarithmic or exponential or some such function, one under which a sufficiently small non-zero harm can’t be made significant simply by applying it to an enormous number of people.)
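To sketch what such a non-multiplicative function could look like (my own toy illustration; the disutility numbers and the saturating form are invented for the example):

```python
import math

# Strictly multiplicative ("shut up and multiply"): total harm grows
# linearly with headcount, so enough dust specks outweigh anything.
def linear_total(per_person_harm: float, n_people: float) -> float:
    return per_person_harm * n_people

# A saturating alternative: the aggregate of a tiny per-person harm
# approaches a ceiling ("cap") instead of growing without bound.
def saturating_total(per_person_harm: float, n_people: float,
                     cap: float = 1.0) -> float:
    return cap * (1.0 - math.exp(-per_person_harm * n_people))

DUST_SPECK = 1e-9  # hypothetical disutility of one dust speck
TORTURE = 1e6      # hypothetical disutility of fifty years of torture

print(linear_total(DUST_SPECK, 1e16) > TORTURE)      # True: specks win out
print(saturating_total(DUST_SPECK, 1e16) > TORTURE)  # False: bounded by cap
```

Under the linear rule a large enough crowd always wins; under the saturating rule no number of specks ever crosses the torture line, which is the duster conclusion.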
This deserves a much longer answer, which I have not had time to write and probably won’t any time soon, I’m sorry to say. But in short summary: human drives and morals are more behaviorist than utilitarian. The utility-function approximation is just that, an approximation.
Imagine you have a shovel, and while digging you hit a large rock and the handle breaks. Was that shovel designed to break, in the sense that its purpose was to break? No, shovels are designed to dig holes. Breakage, for the most part, is just an unintended side effect of the materials used. Now, in some cases things are intended to fail early for safety reasons, e.g. so that the shovel breaks before your bones do. But even then this isn’t some underlying root purpose. The purpose of the shovel is still to dig holes. The breakage is a secondary consideration to prevent undesirable side effects in certain failure modes.
Does learning that the shovel breaks when it exceeds normal digging stresses tell you anything about the purpose / utility function of the shovel? Pedantically, a little, if you accept the breaking point as a designed-in safety consideration. But it doesn’t enlighten us about the hole-digging nature of the shovel at all.
Would you rather put dust in the eyes of 3^^^3 people, or torture one individual to death? Would you rather push one person onto the trolley tracks to save five others? These are failure-mode analyses of edge cases. The real answer is that I’d rather have dust in no one’s eyes, nobody tortured, and nobody hit by trolleys. Making an arbitrary what-if tradeoff between these scenarios doesn’t tell us much about our underlying desires, because there isn’t some consistent mathematical utility function underlying our responses. At best it reveals how we’ve been wired by genetics, upbringing, and present environment to prioritize our behaviorist responses. Which is interesting, to be sure. But not very informative, to be honest.
Ah, as it happens, I have none of those conflicts. I asked because I’m preparing an article on utilitarianism, and I landed on the question I posted as a good proxy for the hard problems in adopting it as a moral theory.
But I can understand that someone who believes this might have a lot of internal struggles.
Full disclosure: I’m a Duster, not a Torturer. But I’m trying to steelman Torture.
Ah, then I look forward to reading your article :)
...and did you read my comments in the thread?
Ah I did (at the time), but forgot it was you that made those comments. So I should direct my question to Jacobian, not you.
In any case, I’m certainly not a “save the world” type of person, and I find myself thoroughly confused by those who profess to be, and who enter into self-destructive behavior as a result.
100% doesn’t work, because then you starve. If I re-formulate your question as “is there any rebuttal to why we don’t donate way more to charity than we currently do”, then the answer depends on your belief system. If you are a utilitarian, the answer is a definitive no: you should spend way more on charity.
Nonsense. I believe my life and the lives of people close to me are more important than someone starving in a place whose name I can’t pronounce. I just don’t assign the same weight to all people. That is perfectly consistent with utilitarianism.
Er… no. Utilitarianism prohibits that exact thing by design. That’s one of its most important aspects.
Read the definition. This is unambiguous.
“Utilitarianism is a theory in normative ethics holding that the best moral action is the one that maximizes utility.” -Wikipedia
The very next sentence starts with “Utility is defined in various ways...” It is entirely possible for there to be utility functions that treat sentient beings differently. John Stuart Mill may have phrased it as “the greatest good for the greatest number”, but the crux is in the word “good”, which is left undefined. This is as opposed to, say, virtue ethics, which doesn’t care per se about the consequences of actions.
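To illustrate the definitional point, here is a minimal sketch (the names and weights are made up for the example):

```python
from typing import Dict

# Classical utilitarianism: everyone's welfare counts equally.
def classical_utility(welfare: Dict[str, float]) -> float:
    return sum(welfare.values())

# A weighted variant: still "maximize utility", but this agent's utility
# function weights some people's welfare more than others'.
def weighted_utility(welfare: Dict[str, float],
                     weights: Dict[str, float]) -> float:
    return sum(weights.get(person, 1.0) * w for person, w in welfare.items())

welfare = {"me": 1.0, "my family": 1.0, "distant stranger": 1.0}
print(classical_utility(welfare))                                 # 3.0
print(weighted_utility(welfare, {"me": 10.0, "my family": 5.0}))  # 16.0
```

Both are consequentialist and both “maximize utility”; they just disagree about what “utility” is, which is exactly the degree of freedom the “defined in various ways” sentence leaves open.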
If I re-formulate your question as “is there any rebuttal to why we don’t donate way more to charity than we currently do”, then the answer depends on your belief system.
(And also on how much money you currently donate to charity.)