Haiti today is a situation that makes my moral intuition throw error codes. Population density is three times that of Cuba. Should we be sending aid? It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise. My rival moral intuition is that culling humans is always wrong.
Trying to stay concrete and present, should I restrict my charitable giving to helping countries make the demographic transition? Within a fixed aid budget one can choose package A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others)
or package B = (save four children; that’s it, money all used up; thirty years later there are 16 children needing saving and it’s not going to happen). Concrete choice of A over B: ignore Haiti and send money to the Karuna Trust to fund education for untouchables in India, preferring to raise a few children out of poverty by letting other children die.
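For concreteness, here is a minimal sketch of the arithmetic behind package B; the 30-year generation length and the fourfold growth are the numbers stated above, while the function name and default values are purely illustrative:

```python
def children_needing_aid(generations, initial=4, factor=4):
    """Package B as stated: 4 children saved now, 16 needing saving a generation later."""
    return initial * factor ** generations

for g in range(4):
    print(f"after {g * 30} years: {children_needing_aid(g)} children need saving")
# after 0 years: 4; after 30 years: 16; after 60 years: 64; after 90 years: 256
```

Package A, on the stated assumptions, instead yields one self-supporting adult per budget, so its cost does not recur.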
It’s also about half that of Taiwan, significantly less than South Korea or the Netherlands, and just above Belgium, Israel, and Japan—as well as very nearly on par with India, the country you’re using as an alternative! I suspect your source may have overweighted population density as a factor in poor social outcomes.
I don’t see how these two frameworks are appealing to different terminal values—they seem to be arguments about which policies maximize consequential lives-saved over time, or maximize QALYs (Quality-Adjusted Life Years) over time. This seems like a surprisingly neat and lovely illustration of “disagreeing moral axioms” that turn out to be about instrumental policies without much in the way of differing terminal values, hence a dispute of fact with a true-or-false answer under a correspondence theory of truth for physical-universe hypotheses.
I think that is it: I’m trying to do utilitarianism. I’ve got some notion q of quality and quantity of life. It varies through time. How do I assess a long-term policy, with short-term sacrifices for better outcomes in the long run? I integrate over time with a suitable weighting such as
e^(-t/τ) dt
What is the significance of the time constant tau? I see it as mainly a humility factor, because I cannot actually see into the future and know how things will turn out in the long run. Accordingly I give reduced weight to the future, much beyond tau, for better or worse, because I do not trust my assessment of either.
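A minimal sketch, assuming q is available as a function of time, of how such an exponentially discounted assessment could be computed numerically; the example packages q_a and q_b, their growth and decay numbers, and the integration horizon are all invented for illustration:

```python
import numpy as np

def discounted_value(q, tau, horizon=200.0, dt=0.1):
    """Approximate the integral of q(t) * exp(-t / tau) over [0, horizon]."""
    t = np.arange(0.0, horizon, dt)          # time in years
    return float(np.sum(q(t) * np.exp(-t / tau)) * dt)

# Hypothetical shapes: package A builds slowly, package B front-loads its benefit.
q_a = lambda t: 0.5 + 0.05 * t               # compounding long-run gains
q_b = lambda t: 2.0 * np.exp(-t / 15.0)      # large immediate gain that fades

for tau in (10.0, 30.0, 100.0):
    print(tau, discounted_value(q_a, tau), discounted_value(q_b, tau))
```

With a short tau the front-loaded package wins; with a long tau the slowly compounding one does, which is exactly the work the time constant is doing in the paragraph above.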
But is that an adequate response to human fallibility? My intuition is that one has to back it up with an extra rule: if my moral calculations suggest culling humans, it’s time to give up, go back to painting kitsch watercolours and leave politics to the sane. That’s my interpretation of dspeyer’s phrase “my moral intuition is throwing error codes.” Now I have two rules, so Sod’s Law tells me that some day they are going to conflict.
Eliezer’s post made an ontological claim, that a universe with only two kinds of things, physics and logic, has room for morality. It strikes me that I’ve made no dent in that claim. All I’ve managed to argue is that it all adds up to normality: we cannot see the future, so we do not know what to do for the best. Panic and tragic blunders ensue, as usual.
I interpreted Eliezer’s questions as a response to the evocative phrase “my moral intuition is throwing error codes.” What does it actually mean? Can it be grounded in an actual situation?
Grounding it in an actual situation introduces complications. Given a real-life moral dilemma it is always a good idea to look for a third option. But exploring those additional options doesn’t help us understand the computer-programming metaphor of moral intuitions throwing error codes.
My original draft contained a long ramble about permanent Malthusian immiseration. History is a bit of a race. Can society progress fast enough to reach the demographic transition? Or does population growth redistribute all the gains in GDP so that individuals get poorer, life gets harder, the demographic transition doesn’t happen,… If I were totally evil and wanted to fuck over as many people as I could, as hard as I could, my strategy for maximum holocaust would be as follows:
Establish free mother-and-baby clinics
Provide free food for the under fives
Leverage the positive reputation from the first two to promote religions that oppose contraception
Leverage religious faith to get contraception legally prohibited
If I can get population growth to outrun technological gains in productivity I can engineer a Limits to Growth-style crash. That will be vastly worse than any wickedness that I could work by directly harming people.
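A minimal sketch of the race being described, assuming simple exponential growth for both productivity and population; the growth rates are invented round numbers, not estimates for any real country:

```python
def per_capita_output(years, productivity_growth, population_growth, initial=1.0):
    """Per-capita output when total output and population both grow exponentially."""
    return initial * ((1.0 + productivity_growth) / (1.0 + population_growth)) ** years

print(per_capita_output(50, 0.02, 0.01))  # productivity ahead: rises to about 1.63
print(per_capita_output(50, 0.02, 0.03))  # population ahead: falls to about 0.61
```

Whether a Limits to Growth-style crash follows is a further claim; the sketch only shows the direction of the per-capita trend once population growth outruns productivity.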
Unfortunately, I had been reading various articles discussing the 40th anniversary of the publication of the Limits to Growth book. So I deleted the set-up for the moral dilemma from my comment, thinking that my readers would be over-familiar with concerns about permanent Malthusian immiseration and would pick up immediately on “aid as sabotage” and the creation of permanent traps.
My original comment was a disaster, but since I’m pig-headed I’m going to have another go at saying what it might mean for one’s moral intuitions to throw error codes:
Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...
Really? That’s your plan for “maximum holocaust”? You’ll do more good than harm in the short run, and if you run out of capital (not hard with such a wastefully expensive plan) then you’ll do nothing but good.
This sounds to me like a political applause light, especially
Leverage the positive reputation from the first two to promote religions that oppose contraception
Leverage religious faith to get contraception legally prohibited
In essence, your statement boils down to “if I wanted to do the most possible harm, I would do what the Enemy are doing!” which is clearly a mindkilling political appeal.
(For reference, here’s my plan for maximum holocaust: select the worst things going on in the world today. Multiply their evil by their likelihoods of success. Found a terrorist group attacking the winners. Be careful to kill lots of civilians without actually stopping your target.)
Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...
I’m afraid Franken Fran beat you to this story a while ago.
Hopefully this comment was intended as a non-obvious form of satire; otherwise it’s completely nonsensical.
You’re—Mr. AlanCrowe, that is—mixing up aid that prevents temporary suffering with the lack of proper long-term solutions. As the saying goes:
“Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.”
You’re forgetting the “teach a man to fish” part entirely. Which should be enough—given the context—to explain what’s wrong with your reasoning. I could go on explaining further, but I don’t want to talk about such heinous acts, the ones you mentioned, unnecessarily.
EDIT:
Alright, sorry, I slightly misjudged the type of your mistake because I had an answer ready and recognized a pattern; your mistake wasn’t quite that skin-deep.
In any case, I think it’s extremely insensitive and rash to excuse yourself so poorly of atrocities like these:
It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise.
In any case, you have set up a false dichotomy between different attempts at optimizing charity here:
A = (save one child, provide education, provide entry into global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) package B = (save four children; that’s it, money all used up, thirty years later there are 16 children needing saving and it’s not going to happen).
And then by means of trickery you transformed it into “being unsympathetic now” + “sympathetic later” > “sympathetic now” + “more to be sympathetic about later”.
However in the really real world each unnecessary death prevented counts, each starving child counts, at least in my book. If someone suffers right now in exchange for someone else not suffering later—nothing is gained.
Which to me looks like you’re just eager to throw sympathy out the window in hopes of looking very rational in contrast. And with this false trickery you’ve made it look like these suffering people deserve what they get and there’s nothing you can do about it. You could also accompany options A and B with option C: “Save as many children as possible and fight harder to raise money for schools and infrastructure as well”; not to mention that you can give food to the people who are building those schools, and it’s not a zero-sum game.
Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...
I would be very happy that Dr. Evil appears to be maximally incompetent.
Seriously, why are you basing your analysis on a 40-year-old book whose predictions have failed to come true?
My actual situations are too complicated and I don’t feel comfortable discussing them on the internet. So here’s a fictional situation with real dilemmas.
Suppose I have a friend who is using drugs to self-destructive levels. This friend is no longer able to keep a job, and I’ve been giving him couch-space. With high probability, if I were to apply pressure, I could decrease his drug use. One axiomatization says I should consider how happy he will be with an outcome, and I believe he’ll be happier once he’s sober and capable of taking care of himself. Another axiomatization says I should consider how much he wants a course of action, and I believe he’ll be angry at my trying to run his life.
As a further twist, he consistently says different things depending on which drugs he’s on. One axiomatization defines a person such that each drug-cocktail-personality is a separate person whose desires have moral weight. Another axiomatization defines a person such that my friend is one person, but the drugs are making it difficult for him to express his desires—the desires with moral weight are the ones he would have if he were sober (and it’s up to me to deduce them from the evidence available).
My response to this situation depends on how he’s getting money for drugs given that he no longer has a job and also on how much of a hassle it is for you to give him couch-space. If you don’t have the right to run his life, he doesn’t have the right to interfere in yours (by taking up your couch, asking you for drug money, etc.).
I am deeply uncomfortable with the drug-cocktail-personalities-as-separate-people approach; it seems too easily hackable to be a good foundation for a moral theory. It’s susceptible to a variant of the utility monster, namely a person who takes a huge variety of drug cocktails and consequently has a huge collection of separate people in his head. A potentially more realistic variant of this strategy might be to start a cult and to claim moral weight for your cult’s preferences once it grows large enough…
(Not that I have any particular cult in mind while saying this. Hail Xenu.)
Edit: I suppose your actual question is how the content of this post is relevant to answering such questions. I don’t think it is, directly. Based on the subsequent post about nonstandard models of Peano arithmetic, I think Eliezer is suggesting an analogy between the question of what is true about the natural numbers and the question of what is moral. To address either question one first has to logically pinpoint “the natural numbers” and “morality” respectively, and this post is about doing the latter. Then one has to prove statements about the things that have been logically pointed to, which is a difficult and separate question, but at least an unambiguously meaningful one once the logical pinpointing has taken place.
The two contrasts you’ve set up (happiness vs. desire-satisfaction, and temporal-person-slices vs. unique-rationalized-person-idealization) aren’t completely independent. For instance, if you accept weighting all the temporal slices of the person equally, then you can weight all their desires or happinesses against each other; whereas if you take the ‘idealized rational transformation of my friend’ route, you can disregard essentially all of his empirical desires and pleasures, depending on just how you go about the idealization process. There are three criteria to keep in mind here:
Does your ethical system attend to how reality actually breaks down? Can we find a relatively natural and well-defined notion of ‘personal identity over time’ that solves this problem? If not, then that obviously strengthens the case for treating the fundamental locus of moral concern as a person-relativized-to-a-time, rather than as a person-extended-over-a-lifetime.
Does your ethical system admit of a satisfying reflective equilibrium? Do your values end up in tension with themselves, or underdetermining what the right choice is? If so, you may have taken a wrong turn.
Are these your core axiomatizations, or are they just heuristics for approximating the right utility-maximizing rule? If the latter, then the right question isn’t Which Is The One True Heuristic, but rather which heuristics have the most severe and frequent biases. For instance, the idealized-self approach has some advantages (e.g., it lets us disregard the preferences of brainwashed people in favor of their unbrainwashed selves), but it also has huge risks by virtue of its less empirical character. See Berlin’s discussion of the rational self.
Another axiomatization defines a person such that my friend is one person, but the drugs are making it difficult for him to express his desires
I think that is simply factually wrong, meaning it’s a false statement about your friend’s brain.
One axiomatization says I should consider how happy he will be with an outcome, and I believe he’ll be happier once he’s sober and capable of taking care of himself. Another axiomatization says I should consider how much he wants a course of action, and I believe he’ll be angry at my trying to run his life.
I think it comes down to this: you want your friend sober and happy, but your friend’s preferences and actions work against those values. The question is what kind of influence on him is allowed.
Can you be more concrete? Some past or present actual situation?
I don’t see how these two frameworks are appealing to different terminal values—they seem to be arguments about which policies maximize consequential lives-saved over time, or maximize QALYs (Quality-Adjusted Life Years) over time. This seems like a surprisingly neat and lovely illustration of “disagreeing moral axioms” that turn out to be about instrumental policies without much in the way of differing terminal values, hence a dispute of fact with a true-or-false answer under a correspondence theory of truth for physical-universe hypotheses.
ISTM he’s not quite sure whether one QALY thirty years from now should be worth as much as one QALY now.
Is permitting or perhaps even helping Haitians to emigrate to other countries anywhere in the moral calculus?
So you’re facing a moral dilemma between giving to charity and murdering nine million people? I think I know what the problem might be.
(Are you sure you want this posted under what appears to be a real name?)
Don’t be absurd. How could advocating population control via shotgun harm one’s reputation?
When should I seek the protection of anonymity? Where do I draw the line? On which side do pro-bestiality comments fall?