No… Although I did see it could be read that way, which is why I added the disclaimer. I do admit that the disclaimer does not add much, since it cost me nothing to write. I’m sorry if I came across that way.
(“Do you really have the opposite preference? You’d kill your family to avoid genocide? That seems atrociously evil. How do you morally justify that to yourself?”)
I will attempt to show my thought process on this as best I can. An answer like this is what my question was trying to elicit. Yes, I understand that drawing the line is fuzzy, but it can be good to take a somewhat deeper look.
Think of the people of the world. Think of all the things people go around doing in day-to-day life: the families, the enjoyment people get. I am sure this is something you value. Of course, you might give the moral value of this a higher weighting for certain groups than for others, like perhaps your family. But a weighting that much higher on your family members would have certain implications. If your weighting were high enough to make you commit genocide rather than have your family die, it would have to be very high: more than a billion to one; the rough arithmetic is sketched at the end of this comment. (Of course this depends on the size of your family. If you consider half the planet your family, we are discussing something else entirely.)
Let’s repeat that for emphasis: a 1,000,000,000:1 ratio. What does that actually mean? It means that rather than accept a minor inconvenience to a family member, you would prefer something a billion times worse happening to a non-family member. To use an often-used example, you would rather have a stranger tortured for years than have a dust speck get in your family member’s eye. This is very much at odds with the normal human perception of morality. That is, while it may be self-consistent, it flatly contradicts what we normally consider morality. This is a strong indicator (though not a definite one, of course) that something fishy is going on with that argument.
(There are some more points to be said, but this post is long enough already. For example, why do I assume that you can scale things this way? In other words, why is scope insensitivity bad? If you want to talk about that more, I will, but that is not the point of my comment.)
So basically, what I was asking might be better written this way: given the vastly different moral point of view you get from such a system of ethics, how do you justify it? That is to say, you need to be able to come up with some other factor explaining how your system fits in with our moral intuitions, and I genuinely cannot think of such an explanation.
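Here is a minimal sketch of the arithmetic behind that “more than a billion to one” figure, with illustrative numbers I am assuming (a family of five, and a genocide on the order of the rest of humanity); the point only needs the ratio to come out enormous:

```python
# Rough arithmetic behind the "more than a billion to one" weighting.
# Both numbers below are assumptions for illustration, not claims about anyone's family.
family_size = 5                      # assumed
genocide_victims = 7_000_000_000     # assumed: roughly everyone outside the family

# Preferring the genocide to the death of your family implies that each family member
# is weighted more heavily than (victims / family members) strangers:
implied_weight_ratio = genocide_victims / family_size
print(f"implied weighting of a family member vs. a stranger: more than {implied_weight_ratio:,.0f} to 1")
```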
It means that rather than accept a minor inconvenience to a family member, you would prefer something a billion times worse happening to a non-family member. To use an often-used example, you would rather have a stranger tortured for years than have a dust speck get in your family member’s eye.
For five years of torture, I’d estimate that as 34 trillion times worse, assuming a perception takes about 100 msec and a human can register 20 logarithmic degrees of discomfort.
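For what it’s worth, here is one way to reproduce a figure of that order. The 100 ms perception time is taken from the comment above, but the constant factor per degree is my own guess; the comment does not say what base its logarithmic scale uses:

```python
# Back-of-envelope check of the "34 trillion times worse" estimate.
# Assumed: a perception lasts 100 ms (so 10 per second), and a dust speck is a
# single perception at the lowest of the 20 logarithmic degrees of discomfort.
seconds_of_torture = 5 * 365.25 * 24 * 3600      # five years
torture_moments = seconds_of_torture * 10        # ~1.6e9 hundred-millisecond perceptions

claimed_ratio = 34e12                            # "34 trillion times worse"
per_moment_ratio = claimed_ratio / torture_moments
print(f"{torture_moments:.2e} moments; each must be ~{per_moment_ratio:,.0f}x a dust speck")

# If the 20 degrees span dust speck to worst torture with a constant factor per degree,
# that factor works out to roughly 1.65 (my reconstruction, not the original commenter's):
factor_per_degree = per_moment_ratio ** (1 / 20)
print(f"implied factor per degree: {factor_per_degree:.2f}")
```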
Thank you for FINALLY calculating that number. It’s very likely off by a few orders of magnitude due to the 20-logarithmic-degrees part (our hearing ranges more widely than this, I think) but at least you tried to bloody calculate it.
Here is a relevant paper which lets one estimate the number of bits sufficient to encode pain, by dividing the top firing rate by the baseline firing-rate variability of a nociceptor and taking the base-2 logarithm (the paper does not do it, but the data is there). My quick guess is that it’s at most a few bits (4 to 6), not 20, which is much less sensitive than hearing.
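The estimate the paper makes possible would look roughly like this; the firing-rate figures below are placeholders of my own, not values taken from the paper:

```python
import math

# Bits a single nociceptor could use to encode pain intensity:
# distinguishable levels ~= (top firing rate) / (baseline firing-rate variability),
# and bits = log2(levels). The two rates below are illustrative assumptions.
top_firing_rate_hz = 30.0          # assumed peak firing rate
baseline_variability_hz = 1.0      # assumed baseline variability

levels = top_firing_rate_hz / baseline_variability_hz
bits = math.log2(levels)
print(f"~{levels:.0f} distinguishable levels, i.e. ~{bits:.1f} bits")   # ~4.9 bits with these numbers
```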
I didn’t suggest 20 bits; I suggested 20 distinguishable degrees of discomfort. Medical diagnosis sometimes uses ten (or is it six?), which I thought was wrong at the low end: a dust speck is much less discomfort than anyone goes to the doctor for. 4 to 6 bits could encode 16 to 64 degrees of discomfort. I did presume that discomfort is logarithmic (since other senses are), and I conflated pain with irritation, which are not really subjectively the same.
I suppose humans have more than one nociceptor each? ;-)
If your point is that perceived pain is aggregated, you are right, of course. The above analysis is misguided; one should really look at the brain structures that make us perceive torture pain as a long-lasting unpleasant experience. A quick search suggests that the region of the brain primarily responsible for the unpleasantness of pain (as opposed to its perception) is the nociceptive area (area 24) of the anterior cingulate cortex. I could not find, however, a reasonable way to estimate the dynamic range of the pain affect beyond the usual 10-level self-assessment scale.
It’s not obvious that disutility would scale linearly with amount of torture; would you be indifferent between a 100% chance of getting a dust speck in your eye and a 1 in 34 trillion chance of being tortured for five years?
(My intuition probably doesn’t work right with such small numbers, so I don’t know myself.)
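A quick expected-disutility check of that comparison, taking the 34-trillion figure above at face value and assuming, for the sake of argument, that disutility is linear:

```python
# A certain dust speck vs. a 1-in-34-trillion chance of five years of torture,
# under the (questionable) assumption that disutility scales linearly with the harm.
dust_speck_disutility = 1.0
torture_disutility = 34e12            # the "34 trillion times worse" figure above
p_torture = 1 / 34e12

expected_torture_disutility = p_torture * torture_disutility
print(expected_torture_disutility)    # ~1.0: equal expected disutility, so a linear
                                      # scaler should be indifferent between the two
```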
Thanks for pointing that out. The comment you linked to seems like a valuable post in the discussion of torture versus dust specks. I just used torture versus dust specks in my comment for its familiarity value. To consider the question more formally, of course, you would need to find two things, one trivial and one major, whose ratio of badness is exactly one to a billion. The exact details do not really matter to my point, but you are right that the example I gave is not technically accurate.
If I’ve followed your thought process correctly, you justify your moral intuitions because they are shared by most other humans, and since Kawoomba’s intuitions aren’t so popular, they require some other justification.
Yes?
Fair enough; that answers my question. Thanks.
For my own part, I think that’s not much of a justification, but then I don’t think that justifying moral intuitions is a particularly valuable exercise. They are what they are. If my moral intuitions are shared by a more powerful and influential group than yours, then our society will reflect my moral intuitions and not yours. For me to then demand that you explain how your moral intuitions “fit in” with mine makes about as much sense as demanding that a Swahili speaker explain how their grammatical intuitions “fit in” with mine.
Indeed. You summarized my point far more effectively than I did. Thank you; I was a bit unclear about what I was saying. You are right about it not being much of a justification, but that is basically the only type of moral justification possible. Still, I take your point that trying to give moral justifications is not a very productive task.
Doesn’t follow; you don’t need to grade linearly. That is, you can consider avoiding corporeal or mental damage / anguish above a certain threshold exponentially more important than avoiding dust specks.
Think of an AI taking care of a nuclear power plant, and consider that it has a priority system: “Core temperature critical? If yes, always prioritize this. Else: remote-control cleaner bots to clean the facility. Else: (...)” Or think of a process throwing an exception that gets priority-handled.
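A minimal sketch of that kind of priority structure, with names and thresholds invented purely for illustration:

```python
# Toy version of the priority system described above: a harm above the critical
# threshold always dominates; lower-priority chores are only considered afterwards.
# Every name, threshold, and task here is made up for illustration.

CRITICAL_CORE_TEMP = 1000.0   # assumed threshold, arbitrary units

def choose_action(core_temp: float, facility_dirty: bool) -> str:
    if core_temp >= CRITICAL_CORE_TEMP:
        return "cool the core"          # always wins, however dirty the facility is
    if facility_dirty:
        return "dispatch cleaner bots"  # only reached once the critical check passes
    return "idle"

# No accumulation of low-priority concerns ever outweighs the critical one:
print(choose_action(core_temp=1200.0, facility_dirty=True))   # -> cool the core
print(choose_action(core_temp=300.0, facility_dirty=True))    # -> dispatch cleaner bots
```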