When taking personal precautionary measures, worrying about such catastrophes is generally silly, especially given the risks we all take on a regular basis—risks that, in most cases, are much easier to avoid than nuclear wars. Societal disasters are generally extremely expensive for the individual to protect against, and carry a large amount of disutility even if protections succeed.
To make matters worse, if there’s a nuclear war tomorrow and your house is hit directly, you’ll be just as dead as if you fall off your bike and break your neck. Dying in a more dramatic fashion does not, generally speaking, produce more disutility than dying in a mundane fashion does. In other words, when optimizing for personal safety, focus on accidents, not nuclear wars; buy a bike helmet, not a fallout shelter.
You’re confusing two messages. One is that building a fallout shelter is not a good way to optimize personal safety. The other is that optimizing societal safety is, for some unspecified reason, more virtuous than optimizing personal safety.
The first point is historically wrong. At the time when people in the US built fallout shelters, most of those who built them thought it was more likely than not that there would be a nuclear war soon. They made the correct calculation given this assumption.
The second point is simply a referral back to a set of presumptions about ethics (selfishness is bad) that should themselves be argued over, rather than the examples here.
The argument that you shouldn’t build a fallout shelter because the life you’d live after civilization was destroyed wouldn’t really be worth living is contrary to what we know about happiness. It is a highly suspect argument for other reasons as well.
I consider both those arguments relevant to this post. What I’m saying is that building fallout shelters is unlikely to be optimal for personal safety because there is generally much lower-hanging fruit. Further, in the event that building fallout shelters is optimal for personal safety, your efforts would likely be better spent elsewhere, because pursuing personal-level solutions for society-level hazards is highly inefficient.
I omitted the obvious third argument against fallout shelters (that they increase the odds of nuclear war, albeit only slightly) because I evaluated it as likely to make people think that this post was actually about fallout shelters.
I’m not sure that that’s reasonable to say. As I pointed out, personal safety is personal, and thus your decision to build a fallout shelter is subject to a wide range of confounding factors. I believe it is likely that most people who built fallout shelters could have purchased expected years of survival more cheaply, even on a personal level. Typically, a fallout shelter seems extremely unlikely to actually be the lowest-hanging fruit in someone’s life.
I assumed, perhaps wrongly, that that was a given on this site, given previous discussions here. There’s probably an argument to be made that all such actions are merely purchasing fuzzies and that protecting yourself is purchasing utilons, but I’d like to think that we’re better than that.
I’m aware of the studies and arguments used to claim that happiness will reset regardless of what happens to you, but I think that a full-scale nuclear war falls outside the outside view’s domain.
Can I get an explanation for the downvotes here?
I wasn’t one of the downvoters, but I’ll hazard a guess.
pursuing personal-level solutions for society-level hazards is highly inefficient.
Viscerally for me, this immediately flags as not being right. I might not understand what you mean by that statement, though. It’s very difficult to make an impact on the probability of society-level hazards occurring, one way or the other, so if you think there’s a non-trivial chance of one of them occurring, a personal-level solution seems like the obvious choice.
I assumed, perhaps wrongly, that that was a given on this site, given previous discussions here. There’s probably an argument to be made that all such actions are merely purchasing fuzzies and that protecting yourself is purchasing utilons, but I’d like to think that we’re better than that.
I think you’re significantly overestimating the uniformity of LW readers. The high-impact posters seem to have similar ethical views, but I imagine most of the readers arrive here through an interest in transhumanism. On the scale from pathological philanthropists to being indifferent to the whole world burning if it doesn’t include you subjectively experiencing it, I bet the average reader is a lot closer to the latter than you would like. I certainly am. I care on an abstract, intellectual level, but it’s very, very difficult for me to be emotionally impacted by possible futures that don’t include me. I think a lot of people downvote when you make assumptions about them (that turn out to be incorrect).
That being said, I don’t have a problem with anything you wrote.
Thanks for the reply!
What I am trying to say is that preparing personal defenses for society-level issues buys very little expected lifespan per dollar relative to preparing personal defenses for personal-level issues. Further, it is possible to actually remove the harm from many personal-level issues completely through personal precautions, while the same is not really likely for societal-level issues.
If you learn a better way of running and don’t injure your knees, the knee injuries never happen. If you build a bomb shelter and are in your shelter when the nuclear war happens and the shelter holds up and you have sufficient supplies to wait out the radiation, society is still essentially destroyed, you just happened to live through it. Most, if not all, of the overall harm has not been mitigated.
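To make “buys very little expected lifespan per dollar” concrete, here is a rough back-of-envelope sketch. Every number in it (remaining lifespan, hazard probabilities, mitigation effectiveness, costs) is an assumption picked purely for illustration, not actuarial data:

```python
# Toy comparison: expected days of life bought per dollar by two precautions.
# Every number below is an assumption chosen only to illustrate the shape of
# the argument; substitute your own estimates.

YEARS_REMAINING = 50                # assumed remaining lifespan if nothing goes wrong
DAYS_REMAINING = YEARS_REMAINING * 365

def expected_days_per_dollar(p_hazard, p_mitigation_works, cost):
    """Expected days of life gained per dollar spent on a precaution."""
    expected_days_gained = p_hazard * p_mitigation_works * DAYS_REMAINING
    return expected_days_gained / cost

# Personal-level hazard: a serious cycling accident, mitigated by a helmet.
helmet = expected_days_per_dollar(
    p_hazard=0.01,             # assumed lifetime chance of a life-threatening crash
    p_mitigation_works=0.3,    # assumed chance the helmet turns fatal into survivable
    cost=50.0,
)

# Society-level hazard: nuclear war, mitigated by a private fallout shelter.
shelter = expected_days_per_dollar(
    p_hazard=0.05,             # assumed lifetime chance of nuclear war
    p_mitigation_works=0.1,    # assumed chance you're home, the shelter holds, supplies last
    cost=30000.0,
)

print(f"helmet:  {helmet:.3f} expected days per dollar")
print(f"shelter: {shelter:.5f} expected days per dollar")
print(f"ratio:   {helmet / shelter:.0f}x in favour of the helmet")
```

Under these made-up numbers the helmet buys a couple of orders of magnitude more expected lifespan per dollar than the shelter; the point is only that the comparison is worth running explicitly, not that these particular figures are right.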
I also think the difficulty of making an impact on the probability of society-level hazards occurring is overestimated by most, but that’s a separate issue.
I hope that you are wrong here, but it seems quite plausible that you are right.
Really? Hmm. That seems like a problem we should be fixing.
You might want to work harder on distinguishing between what is moral and what is best for the individual’s happiness.
EDIT: Actually, you did so perfectly well. PhilGoetz appears to be arguing against helping other people, without providing any arguments for this position. Strange.
Downvoted for the second point. If you think that “screw humanity, I want to live!” is moral, then I would love to see you defend that claim.
EDIT: The first point (that people who built fallout shelters made the correct calculation, given their expectation of imminent nuclear war) is also left undefended.
Why would anyone need to defend moral claims, and to whom?
Unless he is a psychopath, PG almost certainly attaches utility to other people not dying horribly. The same is true of most (all?) LW members.
If he is, in fact, a psychopath, then what is “selfishly moral” for him is irrelevant to what most LessWrongers are trying to maximize. If he wishes to claim that it is not, then I would like to see some damn evidence.
I understood “screw humanity, I want to live” not to mean “no preference regarding others ‘dying horribly’”, but to mean “preferences regarding humanity in general outweighed by preference for one’s own survival”.
I, for one, would choose the survival of my family unit over that of arbitrarily many other humans, no matter the details of their demise.
Does that make me a psychopath in your eyes?
Wait a minute, I know that example …
It’s you, isn’t it! From that argument about parents and children! Are you going to bring this up every time I talk about morality?
Psychopaths, obviously, don’t care about their family unit on an emotional level, so no. It does, however, make you hopelessly biased in my eyes. You already know this.
However, I’m not sure I believe you.
Let’s say your family lives on a spaceship, Lost In Space style. You encounter Galactus, the world-eating space monster, and discover to your horror that he’s on a direct course for Earth! However, your ship is powerful enough to ram him, destroying you both. Would you choose to abandon Earth—which is, of course, filled with children, lovers, people—and fly off into the night? Or would you tell the children to close their eyes, hug your [insert correct gendered spouse here], grit your teeth, and …
Hold that thought.
I would like to see you write a top-level post defending that position. If you believe that most of LW is irrational on this topic—saving the world—then it seems that you should be fixing that. If, on the other hand, you believe that I am unusually irrational in this regard, you will doubtless get lots of tasty karma for your trouble.
Full disclosure: I intend to post on this topic myself.
First off, regarding your hypothetical, it would be no contest. Replacing Earth with a box of ice cream would have about the same decision time. You could frame it as a more active choice, akin to the trolley problem—have me convince Galactus to change course towards Earth—I wouldn’t mind.
Now where you go wrong is assuming that somehow implies that I do not value the lives of my fellow human beings, or of mankind overall. On the contrary, I am quite concerned about x-risk, and I would be too if there were no family to be affected. It is just not the priority goal.
Suppose you had a choice between the life of a non-human primate xor that of a human. Just because you (hypothetically) quickly decide to save the human does not mean you do not value the non-human primate, or that, without another high-priority preference involved, you would not invest a lot into saving that animal. Do you see the point?
If you believe that most of LW is irrational on this topic—saving the world—then it seems that you should be fixing that.
No, why would you think that? I do share that value, and I obviously would as a derived value even if I only cared about my family (repercussions). But even without such considerations, I’d care about it. I’d just accept no tradeoff whatsoever compromising between other humans and “my” humans. What’s to defend about that? As I wrote elsewhere, it’s not a “bias” in any sense of the word as it’s used around here.
Lastly, again with that curious choice of calling preferences “irrational”. As if we could just argue towards which preferences to rationally choose. Which ice cream flavor we really should enjoy the most.
I just want to ask… Are these really your preferences? You’d commit genocide to save your family? That seems atrociously evil. How do you morally justify that to yourself? (Not a rhetorical question, I’d like to know the answer.)
...
Really? You wouldn’t trade the lives of your family for the lives of billions? I have trouble getting my mind around the implications of something that, that, that… I don’t have a word for it.
How do you morally justify that to yourself? (Not a rhetorical question, I’d like to know the answer.)
What do you expect such an answer to look like?
Put a different way: how would you respond to the equivalent question? (“Do you really have the opposite preference? You’d kill your family to avoid genocide? That seems atrociously evil. How do you morally justify that to yourself?”)
My preferences are more like yours than Kawoomba’s here, but I am not sure the kind of moral justification you’re asking for is anything other than a rhetorical way of claiming the primacy of our values over theirs.
No… Although I did see it could be read that way, so I added the disclaimer. I do admit that the disclaimer does not add much as there was no cost to me to write it. I’m sorry if I sounded that way.
I will attempt to show my thought process on this as best I can. An answer like this is what my question was trying to get at. Yes, I understand that drawing the line is fuzzy, but it can be good to get a somewhat deeper look.
Think of the people of the world. Think of all the things people go around doing in day to day life. The families, the enjoyment people get. I am sure that this is something you value. Of course, you might have a higher weighting of the moral value of this for certain groups rather than others, like perhaps your family. But to have a weighting that much higher on your family members would have certain implications. If you had a weighting high enough to make you commit genocide rather than have your family die, that weighting must be very high, more than a billion to one. (Of course this depends on the size of your family. If you consider half the planet your family, we are discussing something else entirely.)
Let’s repeat that for emphasis: a 1,000,000,000:1 ratio. What does that actually mean? It means that rather than accept a minor inconvenience to a family member, you would prefer something a billion times worse happening to a non-family member. To use an often-used example, you would rather have a stranger tortured for years than have a dust speck get in your family member’s eye. This is something very much at odds with the normal human perception of morality. That is, while it may be self-consistent, it absolutely contradicts what we normally consider morality. This is a strong indicator (though not definite, of course) that something fishy is going on with that argument.
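As a minimal sketch of that arithmetic (the family size and the assumption that disutilities simply add and scale linearly are illustrative assumptions, and the linearity is exactly what gets disputed further down):

```python
# Sketch of the implied exchange rate. Assumes a family of about 7 weighed
# against roughly 7 billion strangers, and that disutilities add and scale
# linearly (the linearity assumption is itself contested below).

FAMILY_SIZE = 7
WORLD_POPULATION = 7_000_000_000

# To prefer genocide over the family's death, each family member must be
# weighted at least this many times more heavily than a stranger:
implied_weight = WORLD_POPULATION // FAMILY_SIZE
print(f"implied per-person weight: {implied_weight:,} : 1")

# Under that weighting, any harm to a stranger less than a billion times the
# disutility of a dust speck is preferred to the speck hitting a family member.
dust_speck_disutility = 1                      # arbitrary unit
stranger_harm_still_preferred = implied_weight * dust_speck_disutility
print(f"harms to a stranger of up to {stranger_harm_still_preferred:,} "
      f"speck-units are preferred to one family dust speck")
```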
(There are some more points to be said, but this post is long enough already. For example, why do I assume that you can scale things this way? In other words, why is scope insensitivity bad? If you want to talk about that more, I will, but that is not the point of my comment.)
So basically, what I was asking might be better written this way: given the vastly different moral point of view you get from such a system of ethics, how do you justify it? That is to say, you do need to be able to come up with some other factor explaining how your system fits in with our moral intuitions, and I genuinely cannot think of such an explanation.
For five years of torture versus a single dust speck, I’d estimate the torture as 34 trillion times worse, assuming a perception takes about 100 msec and a human can register 20 logarithmic degrees of discomfort.
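One hedged reconstruction of how a figure of that size can arise; the assumption that a dust speck sits about ten e-fold steps below torture on that 20-step logarithmic scale is a guess made to complete the sketch, not necessarily the calculation originally intended:

```python
import math

# Hedged reconstruction of the "34 trillion" figure. The 100 ms moment length
# and the 20-step logarithmic discomfort scale come from the comment above;
# the assumption that a dust speck sits about 10 e-fold steps below torture
# on that scale is a guess made to complete the sketch.

SECONDS_PER_YEAR = 365.25 * 24 * 3600
moment_length = 0.1                                  # seconds per perceived moment
moments = 5 * SECONDS_PER_YEAR / moment_length       # ~1.58e9 moments in five years

steps_apart = 10                                     # assumed speck-to-torture gap
intensity_ratio = math.exp(steps_apart)              # ~2.2e4 per-moment intensity ratio

total = moments * intensity_ratio
print(f"{moments:.2e} moments x {intensity_ratio:.2e} intensity ratio = {total:.2e}")
# ~3.5e13, i.e. the same order of magnitude as the quoted 34 trillion
```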
Thank you for FINALLY calculating that number. It’s very likely off by a few orders of magnitude due to the 20-logarithmic-degrees part (our hearing ranges more widely than this, I think) but at least you tried to bloody calculate it.
Here is a relevant paper which lets one estimate the number of bits sufficient to encode pain, by dividing the top firing rate by the baseline firing rate variability of a nociceptor and taking base 2 logarithm (the paper does not do it, but the data is there). My quick guess is that it’s at most a few bits (4 to 6), not 20, which is much less sensitive than hearing.
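A toy version of that estimate, with placeholder firing rates rather than the paper’s actual data:

```python
import math

# The estimation method described above, with placeholder numbers rather than
# the paper's data: distinguishable pain levels ~ top firing rate divided by
# baseline firing-rate variability, and bits = log2 of that.

top_firing_rate_hz = 100.0        # assumed peak nociceptor firing rate
baseline_variability_hz = 3.0     # assumed baseline firing-rate noise

levels = top_firing_rate_hz / baseline_variability_hz
bits = math.log2(levels)
print(f"~{levels:.0f} distinguishable levels, i.e. about {bits:.1f} bits")
# With numbers in this ballpark the estimate lands in the quoted 4-6 bit range.
```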
I didn’t suggest 20 bits; I suggested 20 distinguishable degrees of discomfort. Medical diagnosis sometimes uses ten (or is it six?), which I thought was wrong at the low end — a dust speck is much less discomfort than anyone goes to the doctor for. 4 to 6 bits could encode 16 to 64 degrees of discomfort. I did presume that discomfort is logarithmic (since other senses are), and I conflated pain with irritation, which are not really subjectively the same.
I suppose humans have more than one nociceptor each? ;-)
If your point is that perceived pain is aggregated, you are right, of course. The above analysis is misguided, one should really look at the brain structures that make us perceive torture pain as a long-lasting unpleasant experience. A quick search suggests that the region of the brain primarily responsible for the unpleasantness of pain (as opposed to its perception) is the nociceptive area (area 24) of the Anterior cingulate cortex. I could not find, however, a reasonable way to calculate the dynamic range of the pain affect beyond the usual 10-level scale self-assessment.
It’s not obvious that disutility would scale linearly with amount of torture; would you be indifferent between a 100% chance of getting a dust speck in your eye and a 1 in 34 trillion chance of being tortured for five years?
(My intuition probably doesn’t work right with such small numbers, so I don’t know myself.)
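For what it’s worth, here is the indifference claim spelled out as a minimal sketch, assuming disutility is linear in probability (the very assumption the question is probing) and reusing the 34-trillion ratio from above:

```python
from fractions import Fraction

# The indifference point spelled out, assuming disutility is linear in
# probability (the assumption the question is probing) and reusing the
# 34-trillion torture-to-speck ratio from the comment above.

RATIO = 34 * 10**12                            # assumed torture : dust-speck disutility ratio

speck_lottery = Fraction(1) * 1                # certain dust speck, 1 speck-unit of disutility
torture_lottery = Fraction(1, RATIO) * RATIO   # 1-in-34-trillion chance of torture, in speck-units

print(speck_lottery, torture_lottery, speck_lottery == torture_lottery)
# 1 1 True -- the two lotteries are equal only under the linear-scaling assumption
```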
Thanks for pointing that out. The comment that you linked to seems a valuable post in the discussion of torture versus dust specks. I just used torture versus dust specks in my comment for familiarity value. To consider the question more formally, of course, you need to find two things, one trivial and one major, whose ratio of badness is exactly one to a billion. The exact details do not really matter to my point, but you are right that the example I gave is not technically accurate.
If I’ve followed your thought process correctly, you justify your moral intuitions because they are shared by most other humans, and since Kawoomba’s intuitions aren’t so popular, they require some other justification.
Yes?
Fair enough; that answers my question. Thanks.
For my own part, I think that’s not much of a justification, but then I don’t think that justifying moral intuitions is a particularly valuable exercise. They are what they are. If my moral intuitions are shared by a more powerful and influential group than yours, then our society will reflect my moral intuitions and not yours. For me to then demand that you explain how your moral intuitions “fit in” with mine makes about as much sense as demanding that a Swahili speaker explain how their grammatical intuitions “fit in” with mine.
Indeed. You summarized my point far more effectively than I did. Thank you. I was a bit unclear about what I was saying. You are right that it is not much of a justification, but that is basically the only type of moral justification possible. And I do take your point that trying to give moral justifications is not a very productive task.
Doesn’t follow; you don’t need to grade linearly, i.e., you can consider avoiding corporeal or mental damage / anguish above a certain threshold exponentially more important than avoiding dust specks.
Think of an AI taking care of a nuclear power plant, consider it has a priority system: “Core temperature critical? If yes, always prioritize this. Else: Remote control cleaner bots to clean the facility. Else: (...)” Or a process throwing an exception which gets priority-handled.
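A minimal sketch of that kind of threshold-based / lexicographic priority scheme, including the power-plant analogy; the structure and numbers are hypothetical illustration, not anyone’s actual decision procedure:

```python
# A sketch of a threshold-based / lexicographic priority scheme. Both the
# power-plant controller and the preference key are hypothetical illustrations.

def plant_controller(core_temp_critical: bool, floor_dirty: bool) -> str:
    """Handle the highest-priority condition first; lower tiers never trade
    off against it, no matter how large they get."""
    if core_temp_critical:       # tier 0: always wins
        return "handle core temperature"
    if floor_dirty:              # tier 1: only considered when tier 0 is clear
        return "dispatch cleaner bots"
    return "idle"

def preference_key(family_harm: float, strangers_harmed: int):
    """Lexicographic preference: first minimize whether the family is harmed
    at all, and only then minimize harm to strangers. No number of strangers
    can outweigh the first component."""
    FAMILY_HARM_THRESHOLD = 0.0  # assumed: any harm to family trips the first tier
    return (family_harm > FAMILY_HARM_THRESHOLD, strangers_harmed)

print(plant_controller(core_temp_critical=True, floor_dirty=True))
# -> "handle core temperature"

options = [(1.0, 0), (0.0, 7_000_000_000)]       # (harm to family, strangers harmed)
print(min(options, key=lambda option: preference_key(*option)))
# -> (0.0, 7000000000): sparing the family wins regardless of the second number.
```

This is stricter than “exponentially more important”, but it reads like the limiting case that the “no tradeoff whatsoever” phrasing above describes.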
Yes, really. I’m surprised about your surprise.
But even without such considerations, I’d care about it. I’d just accept no tradeoff whatsoever compromising between other humans and “my” humans.
Oog. See, this is why I’m so terrified of the prospect of becoming a parent (and the idea that other humans can easily become parents). I don’t know if I can trust anybody with the power to instill this kind of loyalty in anybody else.
I know what you mean, it was on my mind when writing this comment.
More to the point, parents as paperclippers.
How about this? I realize a lot of the points stretch credulity, but I think you should be able to imagine the situation.
Your family member requires a kidney transplant or they will die in 6 months. With the transplant, you can expect they will live an average of 10 additional days. Normal channels of obtaining one have completely failed. By some happenstance, you know of a 25-year-old pretty average-seeming woman who is a signed-up donor (you are not personally acquainted with her), and happen to know that if she dies, your family member will receive the transplant. Do you kill her and make it look like an accident in order to get the transplant, given that you know you would definitely for sure get away with it?
Only 10 additional days? I’m sorry, but the expected gain in quality of life is far too low to justify the investment. Undertaking a kidney transplant (of any kind) will result in a great deal of pain for my loved one, and the time spent in preparations, surgery, and recovery would consume most of the 10 additional days gained by the surgery. To say nothing of the monetary expenses and moral problems that would result from committing murder.
In such a scenario, I would be much better off investing my resources into making my loved one’s remaining days pleasant, making their death as painless as possible, and perhaps investing in cryonics so that they may be revived at a later date.
A great deal of this decision is inspired by reading the Wall Street Journal article “Why Doctors Die Differently”, which states that the majority of healthcare professionals seem to prefer dying peacefully at home rather than undergoing risky life-extending treatments.
While I doubt a family member dying at home from a kidney disease would count as ‘peaceful’ in most definitions of the word, undergoing invasive surgery in an attempt to gain a few extra days simply isn’t worth it from a quality of life standpoint.
I take your point that you could argue that the ten days would produce disutility, or at least very little utility; however, the point is to answer the question in the least convenient possible world—where the ten days actually are about as good as regular days. If you’re having trouble imagining that, make it twenty or thirty days, or whatever you think would be equivalent to ten regular days.
To say nothing of the monetary expenses and moral problems that would result from committing murder.
Well, the whole point is that the revealed preferences from Kawoomba’s post above should easily overrule such considerations, and therefore checking whether they do or not should clarify whether he’s acting under extreme scope insensitivity or some other confounding factor.
Ah, my mistake.