I can’t see any flaws in the argument, but the conclusion is far more radical than most of us would be willing to admit.
Am I the sort of person who would value my computer over another human being’s life? I hope not; that makes me sound like the most horrible sort of psychopath—it is basically the morality of Stalin. But at the same time, did I sell my computer to feed kids in Africa? I did not. Nor did any of you, unless you are reading this at a library computer (in which case I’m sure I can find something else you could have given up that would have allowed you to give just a little bit more to some worthy charitable cause).
It gets worse: Is my college education worth the lives of fifty starving children? Because I surely paid more than that. Is this house I’m living in worth eight hundred life-saving mosquito nets? Because that’s how much it cost.
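To make the implicit arithmetic explicit, here is a minimal back-of-the-envelope sketch; every dollar figure in it is an assumption chosen only to reproduce the comment’s round numbers (fifty lives, eight hundred nets), not an actual cost-effectiveness estimate:

```python
# Back-of-the-envelope version of the cost comparisons above.
# All dollar figures are illustrative assumptions, not real estimates.

cost_per_life_saved = 3_000   # assumed cost ($) to save one life, e.g. via malaria prevention
cost_per_bed_net = 10         # assumed cost ($) of one delivered insecticide-treated net

education_cost = 150_000      # assumed total spent on a college education
house_cost = 8_000            # assumed amount spent, chosen to match "800 nets"

lives_forgone = education_cost / cost_per_life_saved
nets_forgone = house_cost / cost_per_bed_net

print(f"College education ≈ {lives_forgone:.0f} lives' worth of donations")   # -> 50
print(f"House             ≈ {nets_forgone:.0f} bed nets' worth of donations") # -> 800
```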
Our entire economic system is based on purchases that would be “unjustified”—even immoral—on the view that every single purchase must be judged by this kind of metric. And if we all stopped making such purchases, our economy would collapse and we would be the ones starving instead.
I think it comes down to this: Consequentialism is a lot harder than it looks. It’s not enough to use the simple heuristic, “Is this purchase worth a child’s life?”; no, you’ve got to trace out the full system of consequences—in principle, propagated through our whole future light cone. (In fact, there’s a very good reason not to ask that question: because of our socialization, we have a taboo in our brains against ever saying that something is worth more than a child—even when it obviously is.) You’ve got to note that once the kid survives malaria, he’ll probably die of something else, like malnutrition, or HIV, or a parasitic infection. You’ve got to note that if people didn’t go to college and become scientific researchers, we wouldn’t even know about HIV or malaria or anything else. You’ve got to keep in mind the whole system of modified capitalism and the social-democratic welfare state that makes your massive wealth possible—and really, I think you should be trying to figure out how to export it to places that don’t have it, not skimming off the income that drives it to save one child’s life at a time.
And if you think, “Ah ha! We’ll just work for the Singularity then!” well, that’s a start—and you should, in fact, devote some of your time, energy, and money to the Singularity—but it’s not a solution by itself. How much time should you spend trying to make yourself happy? How much effort should you devote to your family, your friends? How important is love compared to what you might be doing—and how much will your effectiveness depend on you being loved? We might even ask: Would we even want to make a Singularity if it meant that no one ever fell in love?
This is why I’m not quite a gung-ho consequentialist. Ultimately consequentialism is right, there can be no doubt about that; but in practical terms, I don’t think most people are smart enough for it. (I’m not sure I’m smart enough for it.) It might be better, actually, to make people follow simple rules like “Don’t cheat, don’t lie, don’t kill, don’t steal”; if everyone followed those rules, we’d be doing all right. (Most of the really horrible things in this world are deontic violations, like tyranny and genocide.) At the very least, the standard deontic rules are better heuristics than asking, “Is it worth the life of a child?”
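This is a super-duper nice comment.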
Most of the really horrible things in this world are deontic violations, like tyranny and genocide.
Disagree. Most of the really horrible things in this world are just accidents that not enough people are paying attention to. If animals can suffer, then millions of Holocausts are happening every day. If insects can suffer, then tens of billions are. In any case, humans can certainly suffer, and they’re doing plenty of that by pure accident. Probably less than a twentieth of human suffering is intentionally caused by other humans. (Though I will say that the absolute magnitude of human-intent-caused human suffering is unbelievably huge.)
Upvoted.
I really like this comment because it shows some of my own concerns about consequentialism. For example, I have decided that in most cases the deontic answers fit the consequentialist ones so well that we should start out following them, and only dive into consequentialist reasoning if they appear unsatisfactory. This does lead to some peace of mind, but it is obviously the easy answer, not the correct one…
Is there a post on LessWrong about deontology as a subset of consequentialism? (According to Wikipedia, there seem to be some philosophers who state a similar opinion.)
The utilitarian philosopher R. M. Hare proposed a solution along the lines you suggest; it’s called two-level utilitarianism. From Wikipedia:
As a descriptive model of the two levels, Hare posited two extreme cases of people, one of whom would only use critical moral thinking and the other of whom would only use intuitive moral thinking. The former he called the ‘archangel’ and the latter the ‘prole’.
I think the concept has merit, but if you’re smart and willing enough to do it, you’d have to act according to the “critical level” (conventional consequentialism) anyway.
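Your actual values are the ones that determine “what appears satisfactory”.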
Of course, that’s why I would call myself a consequentialist even though I mainly argue using deontic principles. I wasn’t talking about theory (or foundations), but about the practical use of deontic reasoning versus consequentialism.
I have yet to familiarize myself with effective altruism enough to know the details of its metrics, but it seems like relying on ‘number of lives saved per unit of money’ doesn’t necessarily align with the goal of helping people, which I think this post demonstrates well. And then there’s the arguably relevant issue of overpopulation: if everyone contributed some of their education funding to saving lives, wouldn’t the Earth become overpopulated before sufficient technological progress was made to, e.g., inhabit another planet?