Haiti today is a situation that makes my moral intuition throw error codes. Population density is three times that of Cuba. Should we be sending aid? It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise. My rival moral intuition is that culling humans is always wrong.
Trying to stay concrete and present, should I restrict my charitable giving to helping countries make the demographic transition? Within a fixed aid budget one can choose package A = (save one child, provide education, provide entry into the global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) or package B = (save four children; that’s it, money all used up; thirty years later there are 16 children needing saving and it’s not going to happen). Concrete choice of A over B: ignore Haiti and send money to the Karuna Trust to fund education for untouchables in India, preferring to raise a few children out of poverty by letting other children die.
It’s also about half that of Taiwan, significantly less than South Korea or the Netherlands, and just above Belgium, Israel, and Japan—as well as very nearly on par with India, the country you’re using as an alternative! I suspect your source may have overweighted population density as a factor in poor social outcomes.
I don’t see how these two frameworks are appealing to different terminal values; they seem to be arguments about which policies maximize consequential lives-saved over time, or maximize QALYs (Quality-Adjusted Life Years) over time. This seems like a surprisingly neat and lovely illustration of “disagreeing moral axioms” that turn out to be about instrumental policies without much in the way of differing terminal values, hence a dispute of fact with a true-or-false answer under a correspondence theory of truth for physical-universe hypotheses.
I think that is it: I’m trying to do utilitarianism. I’ve got some notion q of quality and quantity of life. It varies through time. How do I assess a long-term policy, with short-term sacrifices for better output in the long run? I integrate over time with a suitable weighting such as
$\int q(t)\, e^{-t/\tau}\, dt$
What is the significance of the time constant tau? I see it as mainly a humility factor, because I cannot actually see into the future and know how things will turn out in the long run. Accordingly I give reduced weight to the future, much beyond tau, for better or worse, because I do not trust my assessment of either.
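To make the weighting concrete, here is a minimal sketch in Python of how the choice of the time constant tau can flip the decision between packages A and B. The function names, payoff streams, horizon, and numbers below are illustrative assumptions, not figures from the comment.

```python
import math

# Toy comparison of aid package A and package B under the exponential
# weighting e^(-t/tau). Payoff streams are invented for illustration.

def discounted_value(q, tau, horizon=100):
    """Approximate the integral of q(t) * exp(-t / tau) dt over the horizon."""
    return sum(q(t) * math.exp(-t / tau) for t in range(horizon))

def q_package_a(t):
    # Nothing for 30 years while one child is educated, then a steady
    # payoff as the adult supports a family and helps others.
    return 0.0 if t < 30 else 2.0

def q_package_b(t):
    # Four children helped immediately; the money is then used up.
    return 4.0 if t < 1 else 0.0

for tau in (5, 15, 30, 60):
    a = discounted_value(q_package_a, tau)
    b = discounted_value(q_package_b, tau)
    print(f"tau={tau:>2}: A={a:6.2f}  B={b:6.2f}  -> prefer {'A' if a > b else 'B'}")
```

With these made-up numbers, a short tau (little trust in long-range forecasts) favours B, while a longer tau favours A, which is exactly the role the comment assigns to tau as a humility factor.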
But is that an adequate response to human fallibility? My intuition is that one has to back it up with an extra rule: if my moral calculations suggest culling humans, it’s time to give up, go back to painting kitsch watercolours, and leave politics to the sane. That’s my interpretation of dspeyer’s phrase “my moral intuition is throwing error codes.” Now I have two rules, so Sod’s Law tells me that some day they are going to conflict.
Eliezer’s post made an ontological claim: that a universe with only two kinds of things, physics and logic, has room for morality. It strikes me that I’ve made no dent in that claim. All I’ve managed to argue is that it all adds up to normality: we cannot see the future, so we do not know what to do for the best. Panic and tragic blunders ensue, as usual.
I interpreted Eliezer’s questions as a response to the evocative phrase “my moral intuition is throwing error codes.” What does it actually mean? Can it be grounded in an actual situation?
Grounding it in an actual situation introduces complications. Given a real-life moral dilemma it is always a good idea to look for a third option. But exploring those additional options doesn’t help us understand the computer programming metaphor of moral intuitions throwing error codes.
My original draft contained a long ramble about permanent Malthusian immiseration. History is a bit of a race. Can society progress fast enough to reach the demographic transition? Or does population growth redistribute all the gains in GDP so that individuals get poorer, life gets harder, the demographic transition doesn’t happen,… If I were totally evil and wanted to fuck over as many people as I could, as hard as I could, my strategy for maximum holocaust would be as follows.
Establish free mother-and-baby clinics
Provide free food for the under fives
Leverage the positive reputation from the first two to promote religions that oppose contraception
Leverage religious faith to get contraception legally prohibited
If I can get population growth to outrun technological gains in productivity I can engineer a Limits to Growth-style crash. That will be vastly worse than any wickedness that I could work by directly harming people.
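To see the race described here in numbers, a toy sketch in Python follows; the growth rates and starting values are made up purely for illustration and come neither from the comment nor from Limits to Growth itself.

```python
# Toy race between productivity growth and population growth. The growth
# rates and starting values are invented for illustration only.
def per_capita_output(pop_growth, prod_growth, years=100,
                      population=1.0, output=1.0):
    """Per-capita output over time for constant exponential growth rates."""
    series = []
    for _ in range(years):
        series.append(output / population)
        population *= 1 + pop_growth
        output *= 1 + prod_growth
    return series

slow = per_capita_output(pop_growth=0.01, prod_growth=0.02)  # population slower
fast = per_capita_output(pop_growth=0.03, prod_growth=0.02)  # population faster

print(f"after {len(slow)} years: {slow[-1]:.2f}x per-capita output "
      f"when population grows slower than productivity, "
      f"{fast[-1]:.2f}x when it grows faster")
```

With these assumed rates, per-capita output roughly doubles when productivity wins the race and falls well below its starting level when population wins, which is the immiseration dynamic the comment is pointing at.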
Unfortunately, I had been reading various articles discussing the 40th anniversary of the publication of the Limits to Growth book. So I deleted the set-up for the moral dilemma from my comment, thinking that my readers would be over-familiar with concerns about permanent Malthusian immiseration and would pick up immediately on “aid as sabotage” and the creation of permanent traps.
My original comment was a disaster, but since I’m pig-headed I’m going to have another go at saying what it might mean for one’s moral intuitions to throw error codes:
Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...
Really? That’s your plan for “maximum holocaust”? You’ll do more good than harm in the short run, and if you run out of capital (not hard with such a wastefully expensive plan) then you’ll do nothing but good.
This sounds to me like a political applause light, especially
Leverage the positive reputation from the first two to promote religions that oppose contraception
Leverage religious faith to get contraception legally prohibited
In essence, your statement boils down to “if I wanted to do the most possible harm, I would do what the Enemy are doing!” which is clearly a mindkilling political appeal.
(For reference, here’s my plan for maximum holocaust: select the worst things going on in the world today. Multiply their evil by their likelihoods of success. Found a terrorist group attacking the winners. Be careful to kill lots of civilians without actually stopping your target.)
Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...
I’m afraid Franken Fran beat you to this story a while ago.
Hopefully this comment was intended as a non-obvious form of satire; otherwise it’s completely nonsensical.
You’re mixing up, Mr. AlanCrowe, aid that prevents temporary suffering with the lack of proper long-term solutions. As the saying goes:
“Give a man a fish and you feed him for a day. Teach a man to fish and you feed him for a lifetime.”
You’re forgetting the “teach a man to fish” part entirely, which should be enough, given the context, to explain what’s wrong with your reasoning. I could go on explaining further, but I don’t want to talk about such heinous acts as the ones you mentioned unnecessarily.
EDIT:
Alright, sorry: I slightly misjudged the type of your mistake, because I had an answer ready and recognized a pattern, so your mistake wasn’t quite that skin-deep.
In any case, I think it’s extremely insensitive and rash to excuse yourself so poorly for atrocities like these:
It would be kinder to send helicopter gunships and carry out a cull. Cut the population back to one tenth of its current level, then build paradise.
In any case, you falsely created a polarity between different attempts at optimizing charity here:
A = (save one child, provide education, provide entry into the global economy; 30 years later the child, now an adult, feeds his own family and has some money left over to help others) package B = (save four children; that’s it, money all used up; thirty years later there are 16 children needing saving and it’s not going to happen).
And then, by means of trickery, you transformed it into “being unsympathetic now” + “sympathetic later” > “sympathetic now” > “more to be sympathetic about later”.
However in the really real world each unnecessary death prevented counts, each starving child counts, at least in my book. If someone suffers right now in exchange for someone else not suffering later—nothing is gained.
Which to me looks like you’re just eager to throw sympathy out the window in hopes of looking very rational in contrast. And with this false trickery you’ve made it look like these suffering people deserve what they get and there’s nothing you can do about it. You could also accompany options A and B with option C: “Save as many children as possible and fight harder to raise money for schools and infrastructure as well.” Not to mention that you can give food to the people who are building those schools; it’s not a zero-sum game.
Imagine that you (a good person) have volunteered to help out in sub-Saharan Africa, distributing free food to the under fives :-) One day you find out who is paying for the food. Dr Evil is paying; it is part of his plan for maximum holocaust...
I would be very happy that Dr. Evil appears to be maximally incompetent.
Seriously, why are you basing your analysis on a 40 year old book whose predictions have failed to come true?
ISTM he’s not quite sure whether one QALY thirty years from now should be worth as much as one QALY now.
Is permitting or perhaps even helping Haitians to emigrate to other countries anywhere in the moral calculus?
So you’re facing a moral dilemma between giving to charity and murdering nine million people? I think I know what the problem might be.
(Are you sure you want this posted under what appears to be a real name?)
Don’t be absurd. How could advocating population control via shotgun harm one’s reputation?
When should I seek the protection of anonymity? Where do I draw the line? On which side do pro-bestiality comments fall?