I think most people give way too small a multiplier to the weight of animal suffering. A non-human animal may not be able to suffer in all the same ways that a human can, but it is still sufficiently conscious such that its experiences in a factory farm are probably comparable to what a human’s experiences would be in the same situation.
What should be objective grounds for such a multiplier?
Not all suffering is valued equally. Excluding self-suffering (which is subjectively very different) from the discussion, I would value the suffering of my child as more important than the suffering of your child. And vice versa.
So, for any valuation that would make sense to me (so that I would actually use that method to make decisions), there should be some difference between multipliers for various beings—if the average Homo sapiens were assigned a coefficient of 1, then some people (like your close relatives or friends) would be >1, and some would be <1. Animals (to me) would clearly be <1, as illustrated by a simple dilemma—if I had to choose to kill a cow to save a random man, or to kill a random man to save a cow, I’d favor the man in all cases without much hesitation.
So an important question is, what would be a reasonable basis to quantitatively compare a human life versus (as an example) cow lives—one-to-ten? one-to-a-thousand? one-to-all-the-cows-in-the-world? Frankly, I’ve got no idea. I’ve given it some thought, but I can’t imagine a way to get to an order-of-magnitude estimate that would feel reasonable to me.
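To make the coefficient idea above concrete, here is a toy sketch in Python; every number in it is a made-up placeholder, and the open question is precisely what the animal coefficient should be:

```python
# Toy sketch of per-being coefficients entering a decision.
# Every number here is a made-up placeholder, not a claim about correct values.

coefficients = {
    "my_child": 100.0,    # >1: close relatives and friends weigh more to me
    "random_human": 1.0,  # baseline: the average Homo sapiens
    "cow": 0.01,          # <1: but should it be 0.1? 0.001? That is the question.
}

def weighted_suffering(being: str, raw_suffering: float) -> float:
    """Raw suffering scaled by how much I weight that particular being."""
    return coefficients[being] * raw_suffering

# The cow-vs-random-man dilemma: equal raw harm, unequal weights.
print(weighted_suffering("random_human", 1.0))  # 1.0
print(weighted_suffering("cow", 1.0))           # 0.01 -> save the man
```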
I wouldn’t try to estimate the value of a particular species’ suffering by intuition. Intuition is, in a lot of situations, a pretty bad moral compass. Instead, I would start from the simple assumption that if two beings suffer equally, their suffering is equally significant. I don’t know how to back up this claim other than this: if two beings experience some unpleasant feeling in exactly the same way, it is unfair to say that one of their experiences carries more moral weight than the other.
Then all we have to do is determine how much different beings suffer. We can’t know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain. Hence, the physical pain that a chicken feels is roughly comparable to the pain that a human feels. It should be possible to use neuroscience to provide a more precise comparison, but I don’t know enough about that to say more.
Top animal-welfare charities such as The Humane League probably prevent about 100 days of suffering per dollar. The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms.
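As a rough sketch of how that figure interacts with a species multiplier (the human-charity figure and the severity ratio below are placeholders invented purely for the arithmetic):

```python
# Back-of-the-envelope comparison under a species multiplier.
# animal_days_per_dollar is the ~100 days/dollar figure cited above; the
# human-charity figure and the severity ratio are invented placeholders.

animal_days_per_dollar = 100.0  # days of animal suffering prevented per dollar
human_days_per_dollar = 1.0     # hypothetical human-focused charity
severity_ratio = 10.0           # a factory-farm day assumed ~10x worse per day

def weighted_benefit(days: float, species_weight: float, severity: float = 1.0) -> float:
    """Suffering-days prevented, scaled by species weight and per-day severity."""
    return days * species_weight * severity

# Even after discounting animal experience to 5% of a human's, these
# particular numbers still favor the animal charity:
print(weighted_benefit(animal_days_per_dollar, 0.05, severity_ratio))  # 50.0
print(weighted_benefit(human_days_per_dollar, 1.0))                    # 1.0
```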
As a side note, you mentioned comparing the value of a cow versus a human. I don’t think this is a very useful comparison to make. A better comparison is the suffering of a cow versus a human. A life’s value depends on how much happiness and suffering it contains.
A life’s value depends on how much happiness and suffering it contains.
I personally treat lives as valuable in and of themselves. It’s why I don’t kill sad people, I try to make them happier.
The suffering that animals experience in factory farms is probably far worse (by an order of magnitude or more) than the suffering of any group of humans that is targeted by a charity. If you doubt this claim, watch some footage of what goes on in factory farms.
Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.
EDIT:
Then all we have to do is determine how much different beings suffer. We can’t know this for certain until we solve the hard problem of consciousness, but we can make some reasonable assumptions. A lot of people assume that a chicken feels less physical pain than a human because it is stupider. But neurologically speaking, there does not appear to be any reason why intelligence would enhance the capacity to feel pain.
Do you also support tiling the universe with orgasmium? Genuinely curious.
I personally treat lives as valuable in and of themselves.
Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?
It’s why I don’t kill sad people, I try to make them happier.
Killing people makes them have 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit because killing someone has a lot of side effects.) Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.
Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.
I don’t care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that’s where the evidence seems to point. What most people think about it does not come into the equation.
Do you also support tiling the universe with orgasmium?
Probably. I’m reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it’s a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world’s best thinkers to decide if it’s worth doing. But right now I’m inclined to say that it is.
[1] Here I’m talking about animals like pigs and chickens, not animals like sea sponges.
I personally treat lives as valuable in and of themselves.
Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?
I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.
It’s why I don’t kill sad people, I try to make them happier.
Killing people makes them have 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit because killing someone has a lot of side effects.) Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.
Oh, yes. Nevertheless, even if it would increase net happiness, I don’t kill people. Not for the sake of happiness alone and all that.
Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.
I don’t care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that’s where the evidence seems to point. What most people think about it does not come into the equation.
The same way, sure. But introspection suggests I don’t value it as much depending on how conscious they are (probably the same as intelligence.)
Do you also support tiling the universe with orgasmium?
Probably. I’m reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it’s a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world’s best thinkers to decide if it’s worth doing. But right now I’m inclined to say that it is.
I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.
I was asking questions to try to better understand where you’re coming from. Do you mean the questions were confusing?
Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?
But introspection suggests I don’t value it as much depending on how conscious they are (probably the same as intelligence.)
Why not? Do you have a good reason, or are you just going off of intuition?
Have you read “Not for the Sake of Happiness (Alone)”?
Yes, I’ve read it. I’m not entirely convinced that all values reduce to happiness, but I’ve never seen any value that can’t be reduced to happiness. That’s one of the areas in ethics where I’m the most uncertain. In practice, it doesn’t come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.
I’m inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I’m fairly uncertain about this and I don’t have much evidence.
Yes, I’ve read it. I’m not entirely convinced that all values reduce to happiness, but I’ve never seen any value that can’t be reduced to happiness. That’s one of the areas in ethics where I’m the most uncertain. In practice, it doesn’t come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.
You can probably think of a happiness-based justification for any value someone throws at you. But that’s probably only because you’re coming from the privileged position of being a human who already knows those values are good, and hence wants to find a reason happiness justifies them. I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.
It’s difficult for me to say because this sort of introspection is difficult, but I believe that I generally reject values when I find that they don’t promote happiness.
You can probably think of a happiness-based justification for any value someone throws at you.
But some justifications are legitimate and some are rationalizations. With the examples of discovery and creativity, I think it’s obvious that they increase happiness by a lot. It’s not like I came up with some ad hoc justification for why they maybe provide a little bit of happiness. It’s like discovery is responsible for almost all of the increases in quality of life that have taken place over the past several thousand years.
I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.
I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don’t.
With the examples of discovery and creativity, I think it’s obvious that they increase happiness by a lot.
The point is that’s not sufficient. Like saying “all good is complexity, because for example a mother’s love for her child is really complex”. Yes, it’s complex compared to some boring things like carving identical chair legs out of wood over and over for eternity, but compared to, say, tiling the universe with the digits of Chaitin’s omega or something, it’s nothing. And tiling the universe with Chaitin’s omega would be a very boring and stupid thing to do.
You need to show that the value in question is the best way of generating happiness. Not just that it results in more than the status quo. It has to generate more happiness than, say, putting everyone on heroin forever. Because otherwise someone who really cared about happiness would just do that.
I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don’t.
And the other point is that values aren’t supposed to do a job. They’re meant to describe what job you would like done! If you care about something that doesn’t increase happiness, then self-modifying to lose that so as to make more happiness would be a mistake.
You need to show that the value in question is the best way of generating happiness.
You’re absolutely correct. Discovery may not always be the best way of generating happiness; and if it’s not, you should do something else.
And the other point is that values aren’t supposed to do a job.
Not all values are terminal values. Some people value coffee because it wakes them up; they don’t value coffee in itself. If they discover that coffee in fact doesn’t wake them up, they should stop valuing coffee.
With the examples of discovery and creativity, I think it’s obvious that they increase happiness by a lot.
The point is that’s not sufficient.
What is sufficient is demonstrating that if discovery does not promote happiness then it is not valuable. As I explained in my sorting sand example, discovery that does not in any way promote happiness is not worthwhile.
I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.
I was asking questions to try to better understand where you’re coming from. Do you mean the questions were confusing?
No, I mean I am unsure as to what my CEV would answer.
Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?
Because I’ll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.
But introspection suggests I don’t value it as much depending on how conscious they are (probably the same as intelligence.)
Why not? Do you have a good reason, or are you just going off of intuition?
… both?
Have you read “Not for the Sake of Happiness (Alone)”?
Yes, I’ve read it. I’m not entirely convinced that all values reduce to happiness, but I’ve never seen any value that can’t be reduced to happiness. That’s one of the areas in ethics where I’m the most uncertain. In practice, it doesn’t come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.
Fair enough. Unfortunately, the area of ethics where I’m the most uncertain is weighting creatures with different intelligence levels.
Things like discovery and creativity seem like good examples of preferences that don’t reduce to happiness IIRC, although it’s been a while since I thought everything reduced to happiness so I don’t recall very well.
I’m inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I’m fairly uncertain about this and I don’t have much evidence.
Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?
Because I’ll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.
But why is intelligence important? I don’t see its connection to morality. I know it’s commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.
If intelligence is morally significant, then it’s not really that bad to torture a mentally handicapped person.
I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.
… both?
So then what is your good reason that’s not directly based on intuition?
Thing like discovery and creativity seem like good examples of preferences that don’t reduce to happiness IIRC, although it’s been a while since I thought everything reduced to happiness so I don’t recall very well.
Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn’t make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.
Not sure what this means.
You mentioned CEV in your previous comment, so I assume you’re familiar with it. I mean that I think if you took people’s coherent extrapolated volitions, they would exclusively value happiness.
I’ll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.
But why is intelligence important? I don’t see its connection to morality. I know it’s commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.
Well, why is pain important? I suspect empathy is mixed up here somewhere, but honestly, it doesn’t feel like it reduces—bugs just are worth less. Besides, where do you draw the line if you lack a sliding scale—I assume you don’t care about rocks, or sponges, or germs.
If intelligence is morally significant, then it’s not really that bad to torture a mentally handicapped person.
Well … not as bad as torturing, say, Bob, the Entirely Average Person, no. But it’s risky to distinguish between humans like this because it lets in all sorts of nasty biases, so I try not to except in exceptional cases.
I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.
I know you do. Of course, unless they’re really handicapped, most animals are still much lower; and, of course, there’s the worry that the intelligence is there and they just can’t express it in everyday life (idiot savants and so on).
So then what is your good reason that’s not directly based on intuition?
Well, it’s morality, it does ultimately come down to intuition no matter what. I can come up with all sorts of reasons, but remember that they aren’t my true rejection—my true rejection is the mental image of killing a man to save some cockroaches.
Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn’t make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.
And yet, a world without them sounds bleak and lacking in utility.
You mentioned CEV in your previous comment, so I assume you’re familiar with it. I mean that I think if you took people’s coherent extrapolated volitions, they would exclusively value happiness
Oh, right.
Ah … not sure what I can say to convince you if NFTSOH(A) didn’t.
It’s really abstract and difficult to explain, so I probably won’t do a very good job. Peter Singer explains it pretty well in “All Animals Are Equal.” Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.
Besides, where do you draw the line if you lack a sliding scale—I assume you don’t care about rocks, or sponges, or germs.
There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.
And yet, a world without [discovery] sounds bleak and lacking in utility.
Well yeah. That’s because discovery tends to increase happiness. But if it didn’t, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn’t benefit anyone, so what’s the point? Discovery is only worthwhile if it increases happiness in some way.
I’m not saying that it’s impossible to come up with an example of something that’s not reducible to happiness, but I don’t think discovery is such a thing.
[1] Unless it is capable of greater suffering, but that’s not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.
There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.
This sounds like a bad rule and could potentially create a sensitivity arms race. Assuming that people who practice Stoic or Buddhist techniques are successful in diminishing their capacity to suffer, does that mean they are worth less morally than before they started? This would be counter-intuitive, to say the least.
Assuming that people who practice Stoic or Buddhist techniques are successful in diminishing their capacity to suffer, does that mean they are worth less morally than before they started?
It means that inducing some typically-harmful action on a Stoic is less harmful than inducing it on a normal person. For example, suppose you have a Stoic who no longer feels negative reactions to insults. If you insult her, she doesn’t mind at all. It would be morally better to insult this person than to insult a typical person.
Let me put it this way: all suffering of equal degree is equally important, and the importance of suffering is proportional to its degree.
A lot of conclusions follow from this principle, including:
animal suffering is important;
if you have to do something to one of two beings and it will cause greater suffering to being A, then, all else being equal, you should do it to being B.
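In toy form, the second conclusion is just a comparison of degree-weighted suffering. A minimal sketch, assuming nothing beyond the principle above and using purely illustrative numbers:

```python
# Purely illustrative: importance of suffering is proportional to its degree.

def choose_target(suffering_if_done_to_a: float, suffering_if_done_to_b: float) -> str:
    """All else being equal, do the thing to whichever being it harms less."""
    return "B" if suffering_if_done_to_a > suffering_if_done_to_b else "A"

print(choose_target(suffering_if_done_to_a=7.0, suffering_if_done_to_b=3.0))  # "B"
```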
It’s really abstract and difficult to explain, so I probably won’t do a very good job. Peter Singer explains it pretty well in “All Animals Are Equal.” Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.
No, my point was that your valuing pain is itself a moral intuition. Picture a pebblesorter explaining that this pile is correct, while your pile is, obviously, incorrect.
There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.
So, say, an emotionless AI? A human with damaged pain receptors? An alien with entirely different neurochemistry analogs?
Well yeah. That’s because discovery tends to increase happiness. But if it didn’t, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn’t benefit anyone, so what’s the point? Discovery is only worthwhile if it increases happiness in some way.
No. I’m saying that I value exploration/discovery/whatever even when it serves no purpose, ultimately. Joe may be exploring a randomly-generated landscape, but it’s better than sitting in a whitewashed room wireheading nonetheless.
[1] Unless it is capable of greater suffering, but that’s not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.
I’ve avoided using the word “suffering” or its synonyms in this comment, except in one instance where I believe it is appropriate.
No, my point was that your valuing pain is itself a moral intuition.
Yes, it’s an intuition. I can’t prove that suffering is important.
So, say, an emotionless AI?
If the AI does not consciously prefer any state to any other state, then it has no moral worth.
A human with damaged pain receptors?
Such a human could still experience emotions, so ey would still have moral worth.
An alien with entirely different neurochemistry analogs?
Difficult to say. If it can experience states about which it has an interest in promoting or avoiding, then it has moral worth.
No. I’m saying that I value exploration/discovery/whatever even when it serves no purpose, ultimately. Joe may be exploring a randomly-generated landscape, but it’s better than sitting in a whitewashed room wireheading nonetheless.
Okay. I don’t really get why, but I can’t dispute that you hold that value. This is why preference utilitarianism can be nice.
You were defining pain/suffering/whatever as generic disutility? That’s much more reasonable.
… so, is a hive of bees one mind, or many, or sort of both at once? Does evolution get a vote, here? If you aren’t discounting optimizers that lack consciousness you’re gonna get some damn strange results with this.
Deep Blue cannot experience disutility (i.e. negative states). Deep Blue can have a utility function for evaluating the state of the chess board, but that’s not the same as consciously experiencing positive or negative utility.
Okay, I see what you mean by “experience”… but that makes “A non-conscious being cannot experience disutility” a tautology, so following it with “therefore” and a non-tautological claim raises all kind of warning lights in my brain.
Unless you can taboo “conscious” in such a way that that made sense, I’m gonna substitute “intelligent” for “conscious” there (which is clearly what I meant, in context.)
The point with bees is that, as a “hive mind”, they act as an optimizer without any individual intention.
I’m gonna substitute “intelligent” for “conscious” there
I don’t see that you can substitute “intelligent” for “conscious”. Perhaps they are correlated, but they’re certainly not the same. I’m definitely more intelligent than my dog, but am I more conscious? Probably not. My dog seems to experience the world just as vividly as I do. (Knowing this for certain requires solving the hard problem of consciousness, but that’s where the evidence seems to point.)
(which is clearly what I meant, in context.)
It’s clear to you because you wrote it, but it wasn’t clear to me.
Well yes, that’s the illusion of transparency for you. I assure you, I was using conscious as a synonym for intelligent. Were you interpreting it as “able to experience qualia”? Because that is both a tad tautological and noticeably different from the argument I’ve been making here.
Whatever. We’re getting off topic.
If you value an optimizer’s goals regardless of intelligence—whether valuing a bug’s desires as much as a human’s, a hive mind’s goals less than its individual members’, or an evolution’s goals anywhere—you get results that do not appear to correlate with anything you could call human morality. If I have misinterpreted your beliefs, I would like to know how. If I have interpreted them correctly, I would like to see how you reconcile this with saving orphans by tipping over the ant farm.
If ants experience qualia at all, which is highly uncertain, they probably don’t experience them to the same extent that humans do. Therefore, their desires are not as important. On the issue of the moral relevance of insects, the general consensus among utilitarians seems to be that we have no idea how vividly insects can experience the world, if at all, so we are in no position to rate their moral worth; and we should invest more into research on insect qualia.
I think it’s pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human. (I won’t get into the question of killing because it’s massively more complicated.)
I would like to see how you reconcile this with saving orphans by tipping over the ant farm.
I cannot say whether this is right or wrong because we don’t know enough about ant qualia, but I would guess that a single human’s experience is worth the experience of at least hundreds of ants, possibly a lot more.
you get results that do not appear to correlate with anything you could call human morality.
Like what, besides the orphans-ants thing? I don’t know if you’ve misinterpreted my beliefs unless I have a better idea of what you think I believe. That said, I do believe that a lot of “human morality” is horrendously incorrect.
I think it’s pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human.
This isn’t obvious to me. And it is especially not obvious given that dogs are a species where one of the primary selection effects has been human sympathy.
You make a good point about human sympathy. Still, if you look at biological and neurological evidence, it appears that dogs are built in pretty much the same ways we are. They have the same senses—in fact, their senses are stronger in some cases. They have the same evolutionary reasons to react to pain. The parts of their brains responsible for pain look the same as ours. The biggest difference is probably that our cerebral cortexes are far more developed than theirs, but that part of the brain isn’t especially important in responding to physical pain. Other forms of pain, yes; and I would agree that humans can feel some negative states more strongly than dogs can. But it doesn’t look like physical pain is one of those states.
If ants experience qualia at all, which is highly uncertain, they probably don’t experience them to the same extent that humans do. Therefore, their desires are not as important.
GOSH REALLY.
I think it’s pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human. (I won’t get into the question of killing because it’s massively more complicated.)
Once again, you fail to provide the slightest justification for valuing dogs as much as humans; if this was “obvious” we wouldn’t be arguing, would we? Dogs are intelligent enough to be worth a non-negligible amount, but if we value all pain equally you should feel the same way about, say, mice, or … ants.
I would like to see how you reconcile this with saving orphans by tipping over the ant farm.
I cannot say whether this is right or wrong because we don’t know enough about ant qualia, but I would guess that a single human’s experience is worth the experience of at least hundreds of ants, possibly a lot more.
Like what, besides the orphans-ants thing? I don’t know if you’ve misinterpreted my beliefs unless I have a better idea of what you think I believe. That said, I do believe that a lot of “human morality” is horrendously incorrect.
How, exactly, can human morality be “incorrect”? What are you comparing it to?
if we value all pain equally you should feel the same way about, say, mice, or … ants.
Not if mice or ants don’t feel as much pain as humans do. Equal pain is equally valuable, no matter the species. But unequal pain is not equally valuable.
Huh? You value individual bees, yet not ants?
I worded my comment poorly. I didn’t mean to imply that bees are necessarily conscious. I’ve edited my comment to reflect this.
How, exactly, can human morality be “incorrect”? What are you comparing it to?
Well I’d have to get into metaethics to answer this, which I’m not very good at. I don’t think such a conversation would be fruitful.
GOSH REALLY.
Yes, really. You seemed to think that I believe ants were worth as much as humans, so I explained why I don’t believe that.
Firstly, I thought you said we were discussing disutility, not pain?
Secondly, could we taboo consciousness? It seems to mean all things to all people in discussions like this.
Thirdly, you claimed human morality was incorrect; I was under the impression that we were analyzing human morality. If you are working to a different standard than humanity’s (which I doubt) then perhaps a change in terminology is in order? If you are, in fact, a human, and as such the “morality” under discussion here is that of humans, then your statement makes no sense.
Assuming the second possibility, you’re right; there is no need to get into metaethics as long as we focus on actual (human) ethics.
Not if mice or ants don’t feel as much pain as humans do. Equal pain is equally valuable, no matter the species. But unequal pain is not equally valuable.
What conceivable test would verify if one organism feels more pain than another organism?
Good question. I don’t know of any such test, although I’m reluctant to say that it doesn’t exist. That’s why it’s important to do research in this area.
Some kind of brain scans? Probably not very useful on insects, etc, but would probably work for, say, chickens vs. chimpanzees.
Okay, say you had some kind of nociceptor analysis machine (or, for that matter, whatever you think “pain” will eventually reduce to). Would it count the number of discrete nociceptors or would it measure nociceptor mass? What if we encountered extra-terrestrial life that didn’t have any (of whatever it is that we have reduced “pain” to)? Would they then count for nothing in your moral calculus?
To me, this whole thing feels like we are trying to multiply apples by oranges and divide by zebras. Also, it seems problematic from an institutional design perspective, due to poor incentive structure. It would reward those persons that self-modify towards being more utility-monster-like on the margin.
Well, there’s neurologically sophisticated Earthly life with neural organization very different from mammals’, come to that.
I’m not neurologist enough to give an informed account of how an octopus’s brain differs from a rhesus monkey’s, but I’m almost sure its version of nociception would look quite different. Though they’ve got an opioid receptor system, so maybe this is more basal than I thought.
I remember reading that crustaceans don’t have the part of the brain that processes pain. I don’t feel bad about throwing live crabs into boiling water.
If true, that is interesting. On the other hand, whether or not something feels pain seems like a much easier problem to solve than how much pain something feels relative to something else.
No, I’m not arguing that this is a bias to overcome—if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.
I’m arguing that this is a strong counterexample to the assumption that all entities may be treated as equals in calculating “value of entity_X’s suffering to me”. They are clearly not equal, they differ by order(s) of magnitude.
“general value of entity_X’s suffering” is a different, not identical measurement—but when making my decisions (such as the original discussion on what charities would be the most rational [for me] to support) I don’t want to use the general values, but the values as they apply to me.
Regarding “if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater”—I was under the impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so, and is it generally considered sociopathic/evil?
Consider: if you attach higher utility to your child’s life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.
if you attach higher utility to your child’s life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.
Not true as a general statement, not if you’re maximizing your expected utility gain.
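A quick worked example with invented numbers shows why the general claim fails; the expected values, not the raw utilities, decide the choice:

```python
# Invented numbers: attaching higher utility to my own child does not by
# itself settle the choice once the survival chances differ enough.

u_my_child, p_my_child = 10.0, 0.30        # I value my child more...
u_other_child, p_other_child = 6.0, 0.90   # ...but the other child is far likelier to survive

ev_save_mine = p_my_child * u_my_child          # 3.0
ev_save_other = p_other_child * u_other_child   # 5.4

print("save mine" if ev_save_mine > ev_save_other else "save other")  # save other
```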
Also, “if”? One often attaches utility based on … attachment. Do you think there’s more than, say, 0.01 parents per 100 that would not value their own child over some other child? Are most all parents “evil” in that regard?
In the same way that I’m “biased” towards yogurt-flavored ice-cream. You can call any preference you have a “bias”, but since we’re here mostly dealing with cognitive biases (a different beast altogether), such an overloading of a preference-expression with a negatively connotated failure-mode should really be avoided.
What’s your basis for objecting against utility functions that are “biased” (you introduced the term “evil”) in the sense of favoring your own children over random other children?
No, I’m claiming that parents don’t actually have a special case in their utility function, they’re just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.
It seems like a possibility, but I don’t think it’s possible to clearly know that it’s the case, and so it’s an error to “claim” that it’s the case (“claiming” sounds like an assertion of high degree of certainty). (You do say that it’s a “reasonable hypothesis”, but then what do you mean by “claiming”?)
Clear preferences that are not part of their utility function? And which supposedly are evil, or “biased”, with the negative connotations of “bias” included?
What about valuing specific friends, is that also not part of the utility function, or does that just apply to parents and their kids?
Are you serious that valuing your own kids over other kids is a bias to be overcome, and not typically a part of the parents’ utility function?
Sorry about the incredulity, but that’s the strangest apparently honestly held opinion I’ve read on LW in a long time. I’m probably misunderstanding your position somehow.
Even if you’re restricting your assertion to special cases, let’s go with that.
Why should I overcome my “bias” and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?
What makes that an “evil” bias, as opposed to an ubiquitous aspect of most parents’ utility functions?
Why should I overcome my “bias” and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it’s just a “shut up and multiply” kind of thing...
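Spelled out with placeholder numbers (and assuming, as above, that the other child's parents get the same X from their child being saved), the multiplication looks roughly like this:

```python
# "Shut up and multiply" under the symmetric-X assumption above.
# All numbers are placeholders for illustration.

X = 10.0                  # utility to a parent of their own child being saved
p_save_my_child = 0.5     # my child's survival chance if I act to save them
p_save_other_child = 0.9  # the other child's survival chance if I act instead to save them

# Summing over everyone affected (me vs. the other child's parents):
total_if_i_save_mine = p_save_my_child * X       # 5.0
total_if_i_save_other = p_save_other_child * X   # 9.0

print("save mine" if total_if_i_save_mine > total_if_i_save_other else "save other")  # save other
```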
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility
This assumption is excluded by Kawoomba’s “but which I do not care about as much”, so isn’t directly relevant at this point (unless you are making a distinction between “caring” and “utility”, which should be more explicit).
I guess I’m just not sure why Kawoomba’s own utility gets special treatment over the other child’s parents’ utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.
I guess I’m just not sure why Kawoomba’s own utility gets special treatment over the other child’s parents’ utility function.
Taboo “utility function”, and “Kawoomba cares about Kawoomba’s utility function” would resolve into the tautologous “Kawoomba is motivated by whatever it is that motivates Kawoomba”. The subtler problem is that it’s not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn’t (including those made by Kawoomba) may be unfounded. To the extent “utility function” refers to idealized extrapolated volition, rather than present desires, people won’t already have good understanding of even their own “utility function”.
The subtler problem is that it’s not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn’t (including those made by Kawoomba) may be unfounded.
There is no idealized extrapolated volition that is based on my current volition that would prefer someone else’s child over one of my own (CEV_me, not CEV_mankind). There are certainly inconsistencies in my non-idealized utility function, but that does not mean that every statement I make about my own utility function must be suspect, merely that such suspect/contradictory statements exist.
If you prefer vanilla over strawberry ice cream, there may be cases where that preference does not transfer to your extrapolated volition due to some other contradictory preferences. However, for comparisons with a significant delta involved, the initial result that determines your decision should be preserved. (It may however be different when extrapolating to a CEV for all humankind.)
Also, you used my name with a frequency of 7⁄84 in your last comment <3.
that does not mean that every statement I make about my own utility function must be suspect
In general, unless something is well-understood, there is good reason to suspect an error. Human values is not something that’s understood particularly well.
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility
If you’ve found a way to aggregate utility across persons, I’d like to hear it.
Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor’s child, that is reflected in her utility function. What other standard are you trying to invoke?
What reason do you have for aiming to satisfy your own utility function?
Um, it’s my utility function, that which I aim to maximize and that which already incorporates my e.g. altruistic desires. Postulating “other preferences” that can overrule my utility function would be a contradiction in terms.
The other two questions were more aimed at MugaSofer, who was the one differentiating between preference as a “bias” and as part of your utility function, and who introduced the whole “evil” thing.
The nearest I can come to making sense of your claim is that it’s some sort of imaginary Prisoner’s Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.
However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.
I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children’s lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it’s still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you’re arguably not optimizing for humans anymore.
My assertion is that all humans share utility—which is the standard assumption in ethics, and seems obviously true—and that parents are biased towards their children (for simple evopsych reasons), leading them to choose their child when, objectively, their own ethics dictates they choose the other. The example given was that of a triage situation; you can only choose one, and need to decide who has the greater chance of survival.
Your moral philosophy in so far as it affects your actions is by definition already part of your utility function.
It makes no sense to say “my utility function dictates I want to do X, but because my own ethics says otherwise, I should do otherwise”, it’s a contradictio in terminis.
We should be very careful with ethical assumptions that seem “obviously true”. Especially when they are not (true as in “common”, it wouldn’t make sense otherwise) - parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are “wrong”. Your proposed “obviously true” ethical assumption is also based on “evopsych”. You’re trying to elevate an extreme altruist approach above others and calling it obviously true. For you, maybe, for the vast majority of e.g. parents? Not so much.
There is no epistemological truth in terminal values.
parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are “wrong”.
No.
Humans regularly act against their own ethics, whether due to misinformation or bias, akrasia, or cached thoughts about morality.
… are you seriously suggesting that, say, racists, are right about what they want? How then do they change when confronted with evidence that other races are, well, people? Perhaps I have misunderstood your point.
It seems obviously true that the moralities people implement are often internally inconsistent. It also seems obviously true that people can talk about imperatives they feel derive from one horn or the other of an inconsistent moral system, without either lying or being wrong as such.
The inconsistency might resolve itself with new information, but it’s going to inform any statements we make about the moral system it exists in until that information arrives.
I would advise you to read “cached thoughts” and then answer my question:
… are you seriously suggesting that, say, racists, are right about what they want? How then do they change when confronted with evidence that other races are, well, people?
… are you seriously suggesting that, say, racists, are right about what they want?
I am saying that the statement “a racist wants that which he/she wants” is tautologically true. There is no objective “right” or “wrong” when comparing utility functions, there is just “this utility function values X and Y, this other utility function values X and Z, they are compatible in respect to X, they are incompatible in respect to Y”.
Certainly what we value changes all the time. But that’s just change, it’s not becoming “less wrong” or “wronger”. Instead, it may be “more (/less) compatible with commonly shared elements of western utility functions” (which still fluctuate across time and culture, and species).
Except that humans share a utility function, which doesn’t change. You can persuade someone that murder is good, but you do it by persuading them that it leads to outcomes they already considered “good” and they were mistaken about the downsides of, well, killing people. Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because of some essential property of these actions; morality is in our minds, but we can still meaningfully say “this is wrong” just as we can say “this is a chair” or “there are five apples”. Eliezer’s latest sequence touches on this kind of meaningfulness. Other standard stuff worth reading in this context is “The Psychological Unity of Humankind” and “Coherent Extrapolated Volition”; and, well, the Metaethics Sequence.
Except that humans share a utility function, which doesn’t change.
Humans trivially don’t share a utility function, since they have differing preferences over world-states. I’m even pretty sure that individual people don’t have anything that we could call a reliable utility function, since we don’t have the cognitive juice to evaluate world-states in their totality and even tractable subsets of the world end up getting evaluated differently based on all sorts of random crap including, but not limited to, presentation order and how recently you’ve eaten.
CEV attempts to resolve people’s conflicting preferences by doing away with several human cognitive limitations, requiring reflective consistency, and applying resolution steps based on projected social interactions (at least, that’s how I’m reading “grew up farther together”), but these requirements (especially the latter) are underspecified in its present form. Even if they weren’t, CEV in its present form does not, nor does it try to, demonstrate that the entirety of the human moral landscape in fact coheres.
Humans trivially don’t share a utility function, since they have differing preferences over world-states.
Humans trivially do share a utility function, since they change their beliefs consistently in response to argument. Of course, as with all other knowledge, self-knowledge and moral reasoning are hampered by biases, cached thoughts, and simple stupidity.
CEV, and for that matter The Psychological Unity of Humankind, are relevant without being themselves arguments. Have you, in fact, read the metaethics sequence? I ask for information as to how best to proceed.
Humans trivially do share a utility function, since they change their beliefs consistently in response to argument.
...no offense, but I don’t think that word means what you think it means.
Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does more or less reduces to the question of whether CEV is complete, which as I’ve said is currently unknown.) Even if true, this doesn’t imply a shared utility function within any useful domain.
Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states, a utility function is one that maps those states to their preference ordering for a given agent, and in between those states and our hypothetical intrinsic associations there’s layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I’d be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we’d probably find large chunks that looked quite similar.
Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does is a question that more or less reduces to the question of whether CEV is complete, which as I’ve said is currently unknown.) If true, this does not demonstrate a shared utility function within some domain. Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states, and between those states and our hypothetical intrinsic associations there’s layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I’d be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we’d probably find large chunks that looked quite similar.
...huh?
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath, and that this is the standard we work by to declare something “good” or “bad”. We aren’t perfect at it, but we can make a reasonable attempt. Just like, say, mathematics, or predicting the movement of planets.
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath, and that this is the standard we work by to declare something “good” or “bad”.
Now we’re getting somewhere.
First, that’s not a utility function; see the edited version of my last comment. We have a tendency around here to use “utility function” as if it describes fundamental moral impulses, but I’d imagine that’s because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.
That being said, I’m afraid the idea that there’s some uniform set of impulses on which all existing moralities are fundamentally based is more an article of faith than a statement of fact given the present state of knowledge. There’s clearly enough unity there for some moral concepts to (e.g.) be describable in language, but that’s a relatively weak criterion. Pathology gives the idea of strong consistency a lot of trouble, but even if you ignore that there’s simply not enough evidence to declare that it’s consistent enough to define as a single function covering all normal people; just off the top of my head, for example, it could easily be that parts of it sum as a polynomial, or something similar, for which the coefficients vary somewhat between people or populations.
First, that’s not a utility function; see the edited version of my last comment. We have a tendency around here to use “utility function” as if it describes fundamental moral impulses, but I’d imagine that’s because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.
Fair enough. What term would you prefer? I’ll use “morality” for now.
Pathology gives the idea a lot of trouble, but even if you ignore that there’s simply not enough evidence to declare that it’s consistent enough to define as a single function describing the foundational moral sentiments of all normal people.
Quite the opposite, we can see that our morality exists unchanged regardless of beliefs by the fact that there are people who actually do have different moralities. As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong.) If they genuinely didn’t value the pain of animals, say, this would fail. No amount of argument will persuade Clippy that killing people is wrong.
As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong.) If they genuinely didn’t value the pain of animals, say, this would fail.
You wouldn’t happen to have non-anecdotal evidence that this is actually the case, would you?
What, like a study of people showed images of slaughterhouses or something? Nope. To be honest, that’s kind of a terrible example. Racists work much better.
I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment. I doubt that the relative weightings of these traits are likely to remain identical between individuals, but if you factor that out I think we have a human commonality that I could get behind.
I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment.
I note that humans can empathise with pains they do not themselves feel.
I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
Well, yeah. It’s not the greatest example, I suppose. How about racism? That’s usually my go-to for this sort of thing. I kill Jews because Jews are parasites that undermine civilization; you kill Nazis because they murder innocent people.
Another situation that has some parallels and may be relevant to the discussion.
Helping starving kids is Good—that’s well understood.
However, my upbringing and current gut feeling say that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. That is, the goodness of resource redistribution should depend on resource scarcity, and hurting your in-group is forbidden even with good intentions.
It may be caused by the fact that I’m partially brought up by people that actually experienced starvation and have had their relatives starve to death (WW2 aftermath and all that), but I’d guess that their opinion is more fact-based than mine and that they definitely had put more thought into it than I have, so until/if I analyze it more, I probably should accept that prior.
That is so—though it depends on the actual chances; “much higher chance of survival” is different than “higher chance of survival”.
But my point is that:
a) I might [currently thinking] rationally desire that all of my in-group would adopt such a belief mode—I would have higher chances of survival if those close to me prefer me to a random stranger. And “belief-sets that we want our neighbors to have” are correlated with what we define as “good”.
b) As far as I understand, homo sapiens do generally actually have such an attitude—evolutionary psychology research and actual observations when mothers/caretakers have had to choose kids in fires/etc.
c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved would be twins of a third party), if one was entrusted to me or I had casually accepted to watch him, I’d be strongly compelled to save that one first, even if the chances of survival would (to an extent) suggest otherwise. And for my own kids, naturally, I have a duty to take care of them unlike 99.999% other kids—even if I wouldn’t love them, I’d still have that duty.
My point is that duty, while worth encouraging throughout society, is screened off by most utilitarian calculations; as such it is a bias if, rationally, the other choice is superior.
Most people would argue that animals are less capable of experiencing suffering and thus the same amount of pain is worth less in an animal than a human.
EDIT: Do you also support tiling the universe with orgasmium? Genuinely curious.
Why? What sort of life has value? Does the life of a bacterium have inherent value? How about a chicken? Does a life have finite inherent value? How do you compare the inherent value of different lives?
Killing people makes them have 0 happiness (in practice, it actually reduces the total happiness in the world by quite a bit because killing someone has a lot of side effects.) Making people happy gives them positive happiness. Positive happiness is better than 0 happiness.
I don’t care what most people think. The majority is wrong about a lot of things. I believe that non-human animals [1] experience pain in roughly the same way that humans do because that’s where the evidence seems to point. What most people think about it does not come into the equation.
Probably. I’m reluctant to make a change of that magnitude without considering it really, really carefully, no matter how sure I may be right now that it’s a good thing. If I found myself with the capacity to do this, I would probably recruit an army of the world’s best thinkers to decide if it’s worth doing. But right now I’m inclined to say that it is.
[1] Here I’m talking about animals like pigs and chickens, not animals like sea sponges.
I must admit I am a tad confused here, but intelligence or whatever seems a good rule of thumb.
Oh, yes. Nevertheless, even if it would increase net happiness, I don’t kill people. Not for the sake of happiness alone and all that.
The same way, sure. But introspection suggests I don’t value it as much depending on how conscious they are (probably the same as intelligence.)
Have you read “Not for the Sake of Happiness (Alone)”? Human values are complicated.
I was asking questions to try to better understand where you’re coming from. Do you mean the questions were confusing?
Are you saying that moral worth is directly proportional to intelligence? If so, why do you think this is true?
Why not? Do you have a good reason, or are you just going off of intuition?
Yes, I’ve read it. I’m not entirely convinced that all values reduce to happiness, but I’ve never seen any value that can’t be reduced to happiness. That’s one of the areas in ethics where I’m the most uncertain. In practice, it doesn’t come up much because in almost every situation, happiness and preference satisfaction amount to the same thing.
I’m inclined to believe that not all preferences reduce to happiness, but all CEV preferences do reduce to happiness. As I said before, I’m fairly uncertain about this and I don’t have much evidence.
You can probably think of a happiness-based justification for any value someone throws at you. But that’s probably only because you’re coming from the privileged position of being a human who already knows those values are good, and hence wants to find a reason happiness justifies them. I suspect an AI designed only to maximise happiness would probably find a different way that would produce more happiness while disregarding almost all values we think we have.
It’s difficult for me to say because this sort of introspection is difficult, but I believe that I generally reject values when I find that they don’t promote happiness.
But some justifications are legitimate and some are rationalizations. With the examples of discovery and creativity, I think it’s obvious that they increase happiness by a lot. It’s not like I came up with some ad hoc justification for why they maybe provide a little bit of happiness. It’s like discovery is responsible for almost all of the increases in quality of life that have taken place over the past several thousand years.
I think a lot of our values do a very good job of increasing happiness, and I welcome an AI that can point out which values don’t.
The point is that’s not sufficient. Like saying “all good is complexity, because for example a mother’s love for her child is really complex”. Yes, it’s complex compared to some boring things like carving identical chair legs out of wood over and over for eternity, but compared to, say, tiling the universe with the digits of Chaitin’s omega or something, it’s nothing. And tiling the universe with Chaitin’s omega would be a very boring and stupid thing to do.
You need to show that the value in question is the best way of generating happiness. Not just that it results in more than the status quo. It has to generate more happiness than, say, putting everyone on heroin forever. Because otherwise someone who really cared about happiness would just do that.
And the other point is that values aren’t supposed to do a job. They’re meant to describe what job you would like done! If you care about something that doesn’t increase happiness, then self-modifying to lose that so as to make more happiness would be a mistake.
You’re absolutely correct. Discovery may not always be the best way of generating happiness; and if it’s not, you should do something else.
Not all values are terminal values. Some people value coffee because it wakes them up; they don’t value coffee in itself. If they discover that coffee in fact doesn’t wake them up, they should stop valuing coffee.
What is sufficient is demonstrating that if discovery does not promote happiness then it is not valuable. As I explained in my sorting sand example, discovery that does not in any way promote happiness is not worthwhile.
Well, orgasmium, for a start.
No, I mean I am unsure as to what my CEV would answer.
Because I’ll kill a bug to save a chicken, a chicken to save a cat, a cat to save an ape, and an ape to save a human. The part of me responsible for morality clearly has some sort of criteria for moral worth that seems roughly equivalent to intelligence.
… both?
Fair enough. Unfortunately, the area of ethics where I’m the most uncertain is weighting creatures with different intelligence levels.
Things like discovery and creativity seem like good examples of preferences that don’t reduce to happiness IIRC, although it’s been a while since I thought everything reduced to happiness so I don’t recall very well.
Not sure what this means.
But why is intelligence important? I don’t see its connection to morality. I know it’s commonly believed that intelligence is morally relevant, and my best guess as to why is that it conveniently places humans at the top and thus justifies mistreating non-human animals.
If intelligence is morally significant, then it’s not really that bad to torture a mentally handicapped person.
I believe this is false: a mentally handicapped person suffers physical pain to the same extent that I do, so his suffering is just as morally significant. The same reasoning applies to many species of non-human animal. What matters is not intelligence but the capacity to experience happiness and suffering.
So then what is your good reason that’s not directly based on intuition?
Discovery leads to the invention of new things. In general, new things lead to increased happiness. It also leads to a better understanding of the universe, which allows us to better increase happiness. If the process of discovery brought no pleasure in itself and also didn’t make it easier for us to increase happiness, I think it would be useless. The same reasoning applies to creativity.
You mentioned CEV in your previous comment, so I assume you’re familiar with it. I mean that I think if you took people’s coherent extrapolated volitions, they would exclusively value happiness.
Well, why is pain important? I suspect empathy is mixed up here somewhere, but honestly, it doesn’t feel like it reduces—bugs just are worth less. Besides, where do you draw the line if you lack a sliding scale—I assume you don’t care about rocks, or sponges, or germs.
Well … not as bad as torturing, say, Bob, the Entirely Average Person, no. But it’s risky to distinguish between humans like this because it lets in all sorts of nasty biases, so I try not to except in exceptional cases.
I know you do. Of course, unless they’re really handicapped, most animals are still much lower; and, of course, there’s the worry that the intelligence is there and they just can’t express it in everyday life (idiot savants and so on.)
Well, it’s morality, it does ultimately come down to intuition no matter what. I can come up with all sorts of reasons, but remember that they aren’t my true rejection—my true rejection is the mental image of killing a man to save some cockroaches.
And yet, a world without them sounds bleak and lacking in utility.
Oh, right.
Ah … not sure what I can say to convince you if NFTSOH(A) didn’t.
It’s really abstract and difficult to explain, so I probably won’t do a very good job. Peter Singer explains it pretty well in “All Animals Are Equal.” Basically, we should give equal consideration to the interests of all beings. Any being capable of suffering has an interest in avoiding suffering. A more intelligent being does not have a greater interest in avoiding suffering [1]; hence, intelligence is not morally relevant.
There is a sliding scale. More capacity to feel happiness and suffering = more moral worth. Rocks, sponges, and germs have no capacity to feel happiness and suffering.
Well yeah. That’s because discovery tends to increase happiness. But if it didn’t, it would be pointless. For example, suppose you are tasked with sifting through a pile of sand to find which grain is the whitest. When you finish, you will have discovered something new. But the process is really boring and it doesn’t benefit anyone, so what’s the point? Discovery is only worthwhile if it increases happiness in some way.
I’m not saying that it’s impossible to come up with an example of something that’s not reducible to happiness, but I don’t think discovery is such a thing.
[1] Unless it is capable of greater suffering, but that’s not a trait inherent to intelligence. I think it may be true in some respects that more intelligent beings are capable of greater suffering; but what matters is the capacity to suffer, not the intelligence itself.
This sounds like a bad rule and could potentially create a sensitivity arms race. Assuming that people that practice Stoic or Buddhist techniques are successful in diminishing their capacity to suffer, does that mean they are worth less morally than before they started? This would be counter-intuitive, to say the least.
It means that inducing some typically-harmful action on a Stoic is less harmful than inducing it on a normal person. For example, suppose you have a Stoic who no longer feels negative reactions to insults. If you insult her, she doesn’t mind at all. It would be morally better to insult this person than to insult a typical person.
Let me put it this way: all suffering of equal degree is equally important, and the importance of suffering is proportional to its degree.
A lot of conclusions follow from this principle, including:
animal suffering is important;
if you have to do something to one of two beings and it will cause greater suffering to being A, then, all else being equal, you should do it to being B.
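A minimal sketch of what I mean, with entirely made-up numbers: treat the importance of an episode of suffering as a function of its degree (and duration) only, with the sufferer’s species recorded but deliberately ignored.

```python
from dataclasses import dataclass

@dataclass
class Episode:
    species: str          # recorded, but plays no role in the weighting
    degree: float         # assumed intensity of suffering on some common scale
    duration_days: float

def moral_importance(e: Episode) -> float:
    # Importance is proportional to degree (and, here, to duration);
    # nothing else about the being enters the calculation.
    return e.degree * e.duration_days

hen = Episode("chicken", degree=6.0, duration_days=40)
person = Episode("human", degree=6.0, duration_days=40)

# Equal suffering, equal importance, regardless of species.
assert moral_importance(hen) == moral_importance(person)
```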
No, my point was that your valuing pain is itself a moral intuition. Picture a pebblesorter explaining that this pile is correct, while your pile is, obviously, incorrect.
So, say, an emotionless AI? A human with damaged pain receptors? An alien with entirely different neurochemistry analogs?
No. I’m saying that I value exploration/discovery/whatever even when it serves no purpose, ultimately. Joe may be exploring a randomly-generated landscape, but it’s better than sitting in a whitewashed room wireheading nonetheless.
Can you taboo “suffering” for me?
I’ve avoided using the word “suffering” or its synonyms in this comment, except in one instance where I believe it is appropriate.
Yes, it’s an intuition. I can’t prove that suffering is important.
If the AI does not consciously prefer any state to any other state, then it has no moral worth.
Such a human could still experience emotions, so ey would still have moral worth.
Difficult to say. If it can experience states about which it has an interest in promoting or avoiding, then it has moral worth.
Okay. I don’t really get why, but I can’t dispute that you hold that value. This is why preference utilitarianism can be nice.
… oh.
You were defining pain/suffering/whatever as generic disutility? That’s much more reasonable.
… so, is a hive of bees one mind or many, or sort of both at once? Does evolution get a vote, here? If you aren’t discounting optimizers that lack consciousness you’re gonna get some damn strange results with this.
Many. The unit of moral significance is the conscious mind. A group of bees is not conscious; individual bees are conscious.
(Edit: It’s possible that bees are not conscious. What I meant was that if bees are conscious then they are conscious as individuals, not as a group.)
A non-conscious being cannot experience disutility, therefore it has no moral relevance.
Er… Deep Blue?
Deep Blue cannot experience disutility (i.e. negative states). Deep Blue can have a utility function to determine the state of the chess board, but that’s not the same as consciously experiencing positive or negative utility.
Okay, I see what you mean by “experience”… but that makes “A non-conscious being cannot experience disutility” a tautology, so following it with “therefore” and a non-tautological claim raises all kind of warning lights in my brain.
Unless you can taboo “conscious” in such a way that that made sense, I’m gonna substitute “intelligent” for “conscious” there (which is clearly what I meant, in context.)
The point with bees is that, as a “hive mind”, they act as an optimizer without any individual intention.
I don’t see that you can substitute “intelligent” for “conscious”. Perhaps they are correlated, but they’re certainly not the same. I’m definitely more intelligent than my dog, but am I more conscious? Probably not. My dog seems to experience the world just as vividly as I do. (Knowing this for certain requires solving the hard problem of consciousness, but that’s where the evidence seems to point.)
It’s clear to you because you wrote it, but it wasn’t clear to me.
Well yes, that’s the illusion of transparency for you. I assure you, I was using conscious as a synonym for intelligent. Were you interpreting it as “able to experience qualia”? Because that is both a tad tautological and noticeably different from the argument I’ve been making here.
Whatever. We’re getting offtopic.
If you value optimizers’ goals regardless of intelligence—whether valuing a bug’s desires as much as a human’s, a hivemind’s goals less than its individual members’, or an evolution’s goals anywhere—you get results that do not appear to correlate with anything you could call human morality. If I have misinterpreted your beliefs, I would like to know how. If I have interpreted them correctly, I would like to see how you reconcile this with saving orphans by tipping over the ant farm.
If ants experience qualia at all, which is highly uncertain, they probably don’t experience them to the same extent that humans do. Therefore, their desires are not as important. On the issue of the moral relevance of insects, the general consensus among utilitarians seems to be that we have no idea how vividly insects can experience the world, if at all, so we are in no position to rate their moral worth; and we should invest more into research on insect qualia.
I think it’s pretty obvious that (e.g.) dogs experience the world about as vividly as humans do, so all else being equal, kicking a dog is about as bad as kicking a human. (I won’t get into the question of killing because it’s massively more complicated.)
I cannot say whether this is right or wrong because we don’t know enough about ant qualia, but I would guess that a single human’s experience is worth the experience of at least hundreds of ants, possibly a lot more.
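To make that guess concrete, here is a rough expected-value sketch with numbers I am pulling out of the air, not defending: multiply the probability that ants have qualia at all by how intense those qualia would be relative to a human’s, and an exchange rate in the hundreds-to-thousands range falls out.

```python
# All numbers below are invented for illustration only.
p_ant_has_qualia = 0.1       # assumed probability that ants experience anything at all
relative_intensity = 0.01    # assumed intensity of ant experience relative to a human's

expected_weight_per_ant = p_ant_has_qualia * relative_intensity
ants_per_human = 1.0 / expected_weight_per_ant

print(ants_per_human)  # 1000.0 -- on these made-up numbers, about a thousand ants per human
```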
Like what, besides the orphans-ants thing? I don’t know if you’ve misinterpreted my beliefs unless I have a better idea of what you think I believe. That said, I do believe that a lot of “human morality” is horrendously incorrect.
This isn’t obvious to me. And it is especially not obvious given that dogs are a species where one of the primary selection effects has been human sympathy.
You make a good point about human sympathy. Still, if you look at biological and neurological evidence, it appears that dogs are built in pretty much the same ways we are. They have the same senses—in fact, their senses are stronger in some cases. They have the same evolutionary reasons to react to pain. The parts of their brains responsible for pain look the same as ours. The biggest difference is probably that we have cerebral cortexes and they don’t, but that part of the brain isn’t especially important in responding to physical pain. Other forms of pain, yes; and I would agree that humans can feel some negative states more strongly than dogs can. But it doesn’t look like physical pain is one of those states.
GOSH REALLY.
Once again, you fail to provide the slightest justification for valuing dogs as much as humans; if this was “obvious” we wouldn’t be arguing, would we? Dogs are intelligent enough to be worth a non-negligible amount, but if we value all pain equally you should feel the same way about, say, mice, or … ants.
Huh? You value individual bees, yet not ants?
How, exactly, can human morality be “incorrect”? What are you comparing it to?
See my reply here.
Not if mice or ants don’t feel as much pain as humans do. Equal pain is equally valuable, no matter the species. But unequal pain is not equally valuable.
I worded my comment poorly. I didn’t mean to imply that bees are necessarily conscious. I’ve edited my comment to reflect this.
Well I’d have to get into metaethics to answer this, which I’m not very good at. I don’t think such a conversation would be fruitful.
Yes, really. You seemed to think that I believe ants were worth as much as humans, so I explained why I don’t believe that.
Firstly, I thought you said we were discussing disutility, not pain?
Secondly, could we taboo consciousness? It seems to mean all things to all people in discussions like this.
Thirdly, you claimed human morality was incorrect; I was under the impression that we were analyzing human morality. If you are working to a different standard to humanity (which I doubt) then perhaps a change in terminology is in order? If you are, in fact, a human, and as such the “morality” under discussion here is that of humans, then your statement makes no sense.
Assuming the second possibility, you’re right; there is no need to get into metaethics as long as we focus on actual (human) ethics.
What conceivable test would verify if one organism feels more pain than another organism?
Good question. I don’t know of any such test, although I’m reluctant to say that it doesn’t exist. That’s why it’s important to do research in this area.
Some kind of brain scans? Probably not very useful on insects, etc, but would probably work for, say, chickens vs. chimpanzees.
Okay, say you had some kind of nociceptor analysis machine (or, for that matter, whatever you think “pain” will eventually reduce to). Would it count the number of discrete nociceptors or would it measure nociceptor mass? What if we encountered extra-terrestrial life that didn’t have any (of whatever it is that we have reduced “pain” to)? Would they then count for nothing in your moral calculus?
To me, this whole thing feels like we are trying to multiply apples by oranges and divide by zebras. Also, it seems problematic from an institutional design perspective, due to poor incentive structure. It would reward those persons that self-modify towards being more utility-monster-like on the margin.
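A toy illustration of the measurement problem (all figures invented): two equally plausible-sounding proxies for “capacity for pain” can rank the same pair of organisms in opposite orders, so the choice of proxy, not the data, ends up deciding the answer.

```python
organisms = {
    # name: (hypothetical nociceptor count, hypothetical nociceptor mass in mg)
    "chicken":    (3_000_000, 40.0),
    "chimpanzee": (2_000_000, 90.0),
}

most_pain_by_count = max(organisms, key=lambda name: organisms[name][0])
most_pain_by_mass = max(organisms, key=lambda name: organisms[name][1])

print(most_pain_by_count)  # chicken    -- "feels more" if we count receptors
print(most_pain_by_mass)   # chimpanzee -- "feels more" if we weigh them
```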
Well, there’s neurologically sophisticated Earthly life with neural organization very different from mammals’, come to that.
I’m not neurologist enough to give an informed account of how an octopus’s brain differs from a rhesus monkey’s, but I’m almost sure its version of nociception would look quite different. Though they’ve got an opioid receptor system, so maybe this is more basal than I thought.
I remember reading that crustaceans don’t have the part of the brain that processes pain. I don’t feel bad about throwing live crabs into boiling water.
Really? I remember reading the opposite. Many times. If you’re regularly boiling them alive, have you considered researching this?
I’m not regularly boiling them alive, but I researched it a little anyway. Here’s a study often used to show that crustaceans DO feel pain: http://forms.mbl.edu/research/services/iacuc/pdf/pain_hermit_crabs.pdf
If true, that is interesting. On the other hand, whether or not something feels pain seems like a much easier problem to solve than how much pain something feels relative to something else.
To be clear, you are arguing that this is a bias to be overcome, yes?
Scope insensitivity?
No, I’m not arguing that this is a bias to overcome—if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater.
I’m arguing that this is a strong counterexample to the assumption that all entities may be treated as equals in calculating “value of entity_X’s suffering to me”. They are clearly not equal, they differ by order(s) of magnitude.
“general value of entity_X’s suffering” is a different, not identical measurement—but when making my decisions (such as the original discussion on what charities would be the most rational [for me] to support) I don’t want to use the general values, but the values as they apply to me.
… oh.
That seems … kind of evil, to be honest.
OK, then I feel confused.
Regarding “if I have to choose whether to save my child or your child, the unbiased rational choice is to save my child, as the utility (to me) of this action is far greater”—I was under the impression that this would be a common trait shared by [nearly] all homo sapiens. Is it not so, and is it generally considered sociopathic/evil?
Consider: if you attach higher utility to your child’s life than mine, then even if my child has a higher chance of survival you will choose your child and leave mine to die.
Not true as a general statement, not if you’re maximizing your expected utility gain.
Also, “if”? One often attaches utility based on … attachment. Do you think there’s more than, say, 0.01 parents per 100 that would not value their own child over some other child? Are most all parents “evil” in that regard?
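A minimal sketch of that expected-utility point, with weights and probabilities invented purely for illustration: a parent who weights their own child’s life much more heavily can still be maximizing expected utility by choosing them despite a lower rescue probability, but a sufficiently lopsided probability gap flips the answer.

```python
def expected_utility(p_rescue: float, value_of_life: float) -> float:
    # Expected utility of an attempted rescue: chance of success times how much
    # the rescuer values that particular life (both numbers are assumptions).
    return p_rescue * value_of_life

own_child = expected_utility(p_rescue=0.6, value_of_life=10.0)    # own child valued ~10x
other_child = expected_utility(p_rescue=0.9, value_of_life=1.0)

print(own_child > other_child)                                    # True: the weighting dominates here
print(expected_utility(0.05, 10.0) > expected_utility(0.9, 1.0))  # False: a big enough gap flips it
```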
I believe the technical term is “biased”.
In the same way that I’m “biased” towards yogurt-flavored ice-cream. You can call any preference you have a “bias”, but since we’re here mostly dealing with cognitive biases (a different beast altogether), such an overloading of a preference-expression with a negatively connotated failure-mode should really be avoided.
What’s your basis for objecting against utility functions that are “biased” (you introduced the term “evil”) in the sense of favoring your own children over random other children?
No, I’m claiming that parents don’t actually have a special case in their utility function, they’re just biased towards their kids. Since parents are known to be biased toward their kids generally, and human morality is generally consistent between individuals, this seems a reasonable hypothesis.
It seems like a possibility, but I don’t think it’s possible to clearly know that it’s the case, and so it’s an error to “claim” that it’s the case (“claiming” sounds like an assertion of high degree of certainty). (You do say that it’s a “reasonable hypothesis”, but then what do you mean by “claiming”?)
Up until this point, I had never seen any evidence to the contrary. I’m still kinda puzzled at the amount of disagreement I’m getting …
Clear preferences that are not part of their utility function? And which supposedly are evil, or “biased”, with the negative connotations of “bias” included?
What about valuing specific friends, is that also not part of the utility function, or does that just apply to parents and their kids?
Are you serious that valuing your own kids over other kids is a bias to be overcome, and not typically a part of the parents’ utility function?
Sorry about the incredulity, but that’s the strangest apparently honestly held opinion I’ve read on LW in a long time. I’m probably misunderstanding your position somehow.
In a triage situation? Yes.
Even if you’re restricting your assertion to special cases, let’s go with that.
Why should I overcome my “bias” and not save my own child, just because there is some other child with a better chance of being saved, but which I do not care about as much?
What makes that an “evil” bias, as opposed to a ubiquitous aspect of most parents’ utility functions?
Assuming that saving my child would give me X utility and saving the other child would give his parents X utility, it’s just a “shut up and multiply” kind of thing...
This assumption is excluded by Kawoomba’s “but which I do not care about as much”, so isn’t directly relevant at this point (unless you are making a distinction between “caring” and “utility”, which should be more explicit).
I guess I’m just not sure why Kawoomba’s own utility gets special treatment over the other child’s parents’ utility function. Then again, your reply and my own sentence just now have me slightly confused, so I may need to think on this a bit more.
Taboo “utility function”, and “Kawoomba cares about Kawoomba’s utility function” would resolve into the tautologous “Kawoomba is motivated by whatever it is that motivates Kawoomba”. The subtler problem is that it’s not a given that Kawoomba knows what motivates Kawoomba, so claims with certainty about what that is or isn’t (including those made by Kawoomba) may be unfounded. To the extent “utility function” refers to idealized extrapolated volition, rather than present desires, people won’t already have good understanding of even their own “utility function”.
There is no idealized extrapolated volition that is based on my current volition that would prefer someone else’s child over one of my own (CEV_me, not CEV_mankind). There are certainly inconsistencies in my non-idealized utility function, but that does not mean that every statement I make about my own utility function must be suspect, merely that such suspect/contradictory statements exist.
If you prefer vanilla over strawberry ice cream, there may be cases where that preference does not transfer to your extrapolated volition due to some other contradictory preferences. However, for comparisons with a significant delta involved, the initial result that determines your decision should be preserved. (It may however be different when extrapolating to a CEV for all humankind.)
Also, you used my name with a frequency of 7⁄84 in your last comment <3.
In general, unless something is well-understood, there is good reason to suspect an error. Human values is not something that’s understood particularly well.
If you value e.g. your family extremely higher than a grain of salt, would you say that there is any chance of that not being reflected in your CEV?
Any “CEV” that doesn’t conserve e.g. that particular relationship would be misnamed.
If you’ve found a way to aggregate utility across persons, I’d like to hear it.
Normally, we talk about trying to satisfy a particular utility function. If the parent values her child more than the neighbor’s child, that is reflected in her utility function. What other standard are you trying to invoke?
Ah, this clears up things a bit for me, thank you.
Why would I need to aim to satisfy overall utility including others, as opposed to just that of my own family?
Is any such preference that chooses my own utility over that of others a bias, and not part of my utility function?
Is it an evil bias if I buy myself some tech toys as opposed to donating that amount to my preferred charity?
What reason do you have for aiming to satisfy you own utility function, or that of your family’s?
I’m afraid this is a little too much lingo for me. Sorry.
You’d have to taboo “evil” before I can answer this question.
Um, it’s my utility function, that which I aim to maximize and that which already incorporates my e.g. altruistic desires. Postulating “other preferences” that can overrule my utility function would be a contradiction in terms.
The other two questions were more aimed at MugaSofer, who was the one differentiating between preference as a “bias” and as part of your utility function, and who introduced the whole “evil” thing.
The nearest I can come to making sense of your claim is that it’s some sort of imaginary Prisoner’s Dilemma: you can cooperate by saving a random child instead of your own, and in symmetric cases other parents can cooperate by saving your child instead of theirs.
However, even if you are into counterfactual bargaining, I am pretty sure almost no other parent would cooperate here, which makes defecting a no-brainer.
I suppose to be fair I should imagine a world in which every parent is brainwashed into valuing other children’s lives as much as their own (I am pretty sure it would take brainwashing). In this case (assuming you escaped the brainwashing so it’s still a legitimate decision) saving the other child might be the right thing to do. At that point, though, you’re arguably not optimizing for humans anymore.
My assertion is that all humans share utility—which is the standard assumption in ethics, and seems obviously true—and that parents are biased towards their children (for simple evopsych reasons), leading them to choose their child when, objectively, their own ethics dictates they choose the other. The example given was that of a triage situation; you can only choose one, and need to decide who has the greater chance of survival.
Your moral philosophy in so far as it affects your actions is by definition already part of your utility function.
It makes no sense to say “my utility function dictates I want to do X, but because my own ethics says otherwise, I should do otherwise”, it’s a contradictio in terminis.
We should be very careful with ethical assumptions that seem “obviously true”. Especially when they are not (“true” as in “common”; it wouldn’t make sense otherwise): parents choosing their own child over other children is an example of following a different ethical compass, one valuing their own children over others. You can neither claim that those parents are confused about their own utility function, nor that they are “wrong”. Your proposed “obviously true” ethical assumption is also based on “evopsych”. You’re trying to elevate an extreme altruist approach above others and calling it obviously true. For you, maybe; for the vast majority of, e.g., parents? Not so much.
There is no epistemological truth in terminal values.
No.
Humans regularly act against their own ethics, whether due to misinformation or bias, akrasia, or cached thoughts about morality.
… are you seriously suggesting that, say, racists, are right about what they want? How then do they change when confronted with evidence that other races are, well, people? Perhaps I have misunderstood your point.
It seems obviously true that the moralities people implement are often internally inconsistent. It also seems obviously true that people can talk about imperatives they feel derive from one horn or the other of an inconsistent moral system, without either lying or being wrong as such.
The inconsistency might resolve itself with new information, but it’s going to inform any statements we make about the moral system it exists in until that information arrives.
I would advise you to read “cached thoughts” and then answer my question.
I am saying that the statement “a racist wants that which he/she wants” is tautologically true. There is no objective “right” or “wrong” when comparing utility functions, there is just “this utility function values X and Y, this other utility function values X and Z, they are compatible in respect to X, they are incompatible in respect to Y”.
Certainly what we value changes all the time. But that’s just change, it’s not becoming “less wrong” or “wronger”. Instead, it may be “more (/less) compatible with commonly shared elements of western utility functions” (which still fluctuate across time and culture, and species).
Except that humans share a utility function, which doesn’t change. You can persuade someone that murder is good, but you do it by persuading them that it leads to outcomes they already considered “good” and they were mistaken about the downsides of, well, killing people. Cached thoughts can result in actions that, objectively, are wrong. They are not wrong because wrongness is some essential property of these actions (morality is in our minds), but we can still meaningfully say “this is wrong” just as we can say “this is a chair” or “there are five apples”. Eliezer’s latest sequence touches on this kind of meaningfulness. Other standard stuff worth reading in this context is “The Psychological Unity of Humankind” and “Coherent Extrapolated Volition”; and, well, the Metaethics Sequence.
Humans trivially don’t share a utility function, since they have differing preferences over world-states. I’m even pretty sure that individual people don’t have anything that we could call a reliable utility function, since we don’t have the cognitive juice to evaluate world-states in their totality and even tractable subsets of the world end up getting evaluated differently based on all sorts of random crap including, but not limited to, presentation order and how recently you’ve eaten.
CEV attempts to resolve people’s conflicting preferences by doing away with several human cognitive limitations, requiring reflective consistency, and applying resolution steps based on projected social interactions (at least, that’s how I’m reading “grew up farther together”), but these requirements (especially the latter) are underspecified in its present form. Even if they weren’t, CEV in its present form does not, nor does it try to, demonstrate that the entirety of the human moral landscape in fact coheres.
Humans trivially do share a utility function, since they change their beliefs consistently in response to argument. Of course, as with all other knowledge, self-knowledge and moral reasoning are hampered by biases, cached thoughts, and simple stupidity.
CEV, and for that matter The Psychological Unity of Humankind, are relevant without being themselves arguments. Have you, in fact, read the metaethics sequence? I ask for information as to how best to proceed.
...no offense, but I don’t think that word means what you think it means.
Non-pathological human ethics may or may not ultimately run off some consistent set of intrinsic affective associations. (Whether or not it does more or less reduces to the question of whether CEV is complete, which as I’ve said is currently unknown.) Even if true, this doesn’t imply a shared utility function within any useful domain.
Utility (in its simplest form) is nothing more or less than a preference ordering over some set of possible states; a utility function is one that maps those states to that preference ordering for a given agent, and in between those states and our hypothetical intrinsic associations there are layers upon layers of bias and acculturation, probably enough to be effectively unique to the individual. I’d be very surprised if we could find two people with exactly the same preferences over fully specified future states, though we’d probably find large chunks that looked quite similar.
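A minimal sketch of that usage, with two invented agents and invented numbers: a utility function here is just a per-agent mapping from states to numbers, and only the ordering it induces matters; a single disagreement in the ordering is enough for the agents not to share a utility function in this simple sense.

```python
# Hypothetical agents and world-states, invented for illustration only.
utility = {
    "agent_A": {"status quo": 0, "fewer factory farms": 5, "more tech toys for me": 1},
    "agent_B": {"status quo": 0, "fewer factory farms": 1, "more tech toys for me": 5},
}

def prefers(agent: str, x: str, y: str) -> bool:
    # The numbers only encode an ordering; their absolute size carries no meaning here.
    return utility[agent][x] > utility[agent][y]

print(prefers("agent_A", "fewer factory farms", "more tech toys for me"))  # True
print(prefers("agent_B", "fewer factory farms", "more tech toys for me"))  # False
```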
Yes.
Good to know.
...huh?
The fact that morality is acted upon in different ways (due to your “layers” or simply mistaken beliefs about the world) doesn’t change the fact that it is there, underneath, and that this is the standard we work by to declare something “good” or “bad”. We aren’t perfect at it, but we can make a reasonable attempt. Just like, say, mathematics, or predicting the movement of planets.
Now we’re getting somewhere.
First, that’s not a utility function; see the edited version of my last comment. We have a tendency around here to use “utility function” as if it describes fundamental moral impulses, but I’d imagine that’s because we like to talk about AIs, for whom such a function can be written explicitly and for whom consistency between agents is no trouble. Neither of those conditions holds true for our messy meat brains.
That being said, I’m afraid the idea that there’s some uniform set of impulses on which all existing moralities are fundamentally based is more an article of faith than a statement of fact given the present state of knowledge. There’s clearly enough unity there for some moral concepts to (e.g.) be describable in language, but that’s a relatively weak criterion. Pathology gives the idea of strong consistency a lot of trouble, but even if you ignore that there’s simply not enough evidence to declare that it’s consistent enough to define as a single function covering all normal people; just off the top of my head, for example, it could easily be that parts of it sum as a polynomial, or something similar, for which the coefficients vary somewhat between people or populations.
Fair enough. What term would you prefer? I’ll use “morality” for now.
Quite the opposite, we can see that our morality exists unchanged regardless of beliefs by the fact that there are people who actually do have different moralities. As a vegetarian, I can tell you that a lot of people who believe eating meat is OK do so because they are mistaken about the environment; remove the mistake (by showing them how horrible conditions are in factory farms, for example) and they will see that eating meat is wrong (or at least that factory farming is wrong.) If they genuinely didn’t value the pain of animals, say, this would fail. No amount of argument will persuade Clippy that killing people is wrong.
You wouldn’t happen to have non-anecdotal evidence that this is actually the case, would you?
What, like a study of people shown images of slaughterhouses or something? Nope. To be honest, that’s kind of a terrible example. Racists work much better.
How about “moral architecture”?
I think I’d agree that most humans share roughly the same set of inputs to that architecture: hit most people on the head, and they’re likely to feel pain; humiliate them, and they’re likely to feel embarrassment. I doubt that the relative weightings of these traits are likely to remain identical between individuals, but if you factor that out I think we have a human commonality that I could get behind.
I suspect we’d differ in our opinion of acculturation’s role in defining certain categories (the pain of animals, for example) as morally significant, though. That strikes me as a level or two above anything I’d be comfortable calling a human universal.
Moral architecture sounds good.
I note that humans can empathise with pains they do not themselves feel.
Well, yeah. It’s not the greatest example, I suppose. How about racism? That’s usually my go-to for this sort of thing. I kill Jews because Jews are parasites that undermine civilization; you kill Nazis because they murder innocent people.
EDIT: I’m not actually a Nazi, obviously.
Another situation that has some parallels and may be relevant to the discussion.
Helping starving kids is Good—that’s well understood. However, my upbringing and current gut feeling says that this is not unconditional. In particular, feeding starving kids is Good if you can afford it; but feeding other starving kids if that causes your own kids to starve is not good, and would be considered evil and socially unacceptable. i.e., that goodness of resource redistribution should depend on resource scarcity; and that hurting your in-group is forbidden even with good intentions.
It may be caused by the fact that I was partially brought up by people who actually experienced starvation and had their relatives starve to death (WW2 aftermath and all that), but I’d guess that their opinion is more fact-based than mine and that they definitely put more thought into it than I have, so until/if I analyze it more, I probably should accept that prior.
That is so—though it depends on the actual chances; “much higher chance of survival” is different than “higher chance of survival”.
But my point is that:
a) I might [currently thinking] rationally desire that all of my in-group would adopt such a belief mode—I would have higher chances of survival if those close to me prefer me to a random stranger. And “belief-sets that we want our neighbors to have” are correlated with what we define as “good”.
b) As far as I understand, homo sapiens generally do have such an attitude, as suggested by evolutionary psychology research and by actual observations of mothers/caretakers who have had to choose between kids in fires and the like.
c) Duty may be a relevant factor/emotion. Even if the values were perfectly identical (say, the kids involved would be twins of a third party), if one was entrusted to me or I had casually accepted to watch him, I’d be strongly compelled to save that one first, even if the chances of survival would (to an extent) suggest otherwise. And for my own kids, naturally, I have a duty to take care of them unlike 99.999% other kids—even if I wouldn’t love them, I’d still have that duty.
My point is that duty, while worth encouraging throughout society, is screened off by most utilitarian calculations; as such it is a bias if, rationally, the other choice is superior.