That’s fine for the most part, but in that case do you really feel that same empathy for these proposed simulations? If all you care about is humans, maybe you shouldn’t care about these simulations being killed anyway. They’re less like us than animals: they have no flesh and weren’t born of a mother. Why do you care about them just because they make a false imitation of our thoughts?
More importantly though, I wasn’t talking about human-centrism as a moral issue but a logical one. Racism is bad because it makes us form groups and mistreat people that are different from us. Racism is stupid, on the other hand, because it inclines us to think people of a different race are more different from ourselves than they actually turn out to be. Similarly, it’s the logical, not the moral, errors of human-centrism that are really relevant to the discussion. If there’s an ethical issue with killing simulations there’s an ethical issue with killing AIs. Resolve one and you can probably resolve the other. Whether you care or not about either problem is kind of beside the point.
That I don’t morally support human-centrism either is also kind of beside the point.
Racism is bad because it makes us form groups and mistreat people that are different from us.
Doesn’t mistreatment suppose there is some correct form of treatment, and wouldn’t a racist believe they are using the correct treatment?
That is, I don’t think this sentence is getting to the heart of why/if/when racism is bad. Your following sentence is closer but still not there: oftentimes, increasing differences between people is a winning move, not a stupid one.
Racism is bad because it makes us form groups and mistreat people that are different from us.
I suspect the causation goes the other way. I am looking for a study I recently read about that suggested this.
White subjects had difficulty recalling the specific content of what individual black actors in videos had said relative to how well they recalled what individual white actors had said. When videos of black actors were of them arguing for opposite sides of an issue, the subjects were able to match content to speakers equally well for black and white actors. The theory is that race was used as a proxy for group membership until something better came along. Once people were grouped by ideas, the black speakers were thought of as individuals.
The evolutionary story behind this is that people evolved with group politics being important, but almost never seeing someone of noticeably different race. It makes sense that we evolved mechanisms for dealing with groups in general and none for race in particular. It makes sense that in absence of anything better, we might group by appearance, and it makes sense that we would err to perceive irrelevant patterns/groups as the cost of never or rarely missing relevant patterns/groups.
That’s fine for the most part, but in that case do you really feel that same empathy for these proposed simulations?
Yes.
If all you care about is humans, maybe you shouldn’t care about these simulations being killed anyway. They’re less like us than animals: they have no flesh and weren’t born of a mother. Why do you care about them just because they make a false imitation of our thoughts?
Because I do—and I don’t want to change. (This is the same justification that I have for caring about humans, or myself.)
More importantly though I wasn’t talking about human-centrism as a moral issue but a logical one.
It is the logical problem that I reject. There is no inconsistency in being averse to racism but not averse to speciesism.
There is no inconsistency in being averse to racism but not averse to speciesism.
On reflection, this seems wrong. The fact that some in-group/out-group behavior is rational does not mean that in-group bias is rational. To put it slightly differently, killing a Klingon is wrong iff killing a human would be wrong in those circumstances.
If there’s an ethical issue with killing simulations there’s an ethical issue with killing AIs.
Doesn’t follow, for several reasons:
If the issue is with the termination of subjective experiences, and if we assume that people-simulations have qualia (let’s grant it for the sake of argument), it still doesn’t follow that every optimization algorithm of sufficient calculational power also has qualia.
If the ethical issue is with violation of individuals’ rights, there’s nothing to prevent us from constructing only AIs that are only too happy to consent to be deleted; or indeed which strongly desire to be deleted eventually—but most people-simulations would presumably not want to die, since most people don’t want to die.
If the ethical issue is with violation of individuals’ rights, there’s nothing to prevent us from constructing only AIs that are only too happy to consent to be deleted; or indeed which strongly desire to be deleted eventually
(This is not to say I don’t consider it a potential ethical issue to be actively creating creatures that consent as a way to do things that would be otherwise abhorrent.)
If the ethical issue is with violation of individuals’ rights, there’s nothing to prevent us from constructing only AIs that are only too happy to consent to be deleted; or indeed which strongly desire to be deleted eventually—but most people-simulations would presumably not want to die, since most people don’t want to die.
Creating such entities would be just as immoral as creating a race of human-intelligence super-soldiers whose only purpose was to fight our wars for us.
Creating such entities would be just as immoral as creating a race of human-intelligence super-soldiers whose only purpose was to fight our wars for us.
I feel that this sort of response (filled with moral indignation, but no actual argument) is far beneath the standards of LessWrong.
First of all, I’m talking about human-level (or superhuman-level) intelligence, not human intelligence—which would imply human purpose, human emotion, human utility functions, etc. I’m talking about an optimization process which is at least as good as humans are at said optimization—it need not have any sense of suffering; it need not have any sense of self or subjective experience even, and certainly not any sense that it needs to protect said self. Those are all evolved instincts in humans.
Secondly, can you explain why you feel the creation of such super-soldiers would be immoral? And immoral as opposed to what—sending people to die who do not want to die? Who would prefer to be somewhere else, and who suffer for being there?
Thirdly, I would like to know if you’re using some deontology or virtue ethics to derive your sense of morality. If you’re using consequentialism, though, I think you’re falling into the trap of anthropomorphizing such intelligences—as if their “lives” would somehow be in conflict with their minds’ goalset, as soldiers’ lives tend to be in conflict with their own goalset. You may just as well condemn as immoral the creation of children whose “only purpose” is to live lives full of satisfaction, discovery, creativity, learning, productivity, happiness, love, pleasure, and joy—just because they don’t possess the purpose of paperclipping the universe.
There is something about humans that makes them objects of moral concern. It isn’t the ability to feel pain, because cows can feel pain. For the same reason, it isn’t experiencing sensation. And it isn’t intelligence, because dolphins are pretty smart.
I’m not trying to evoke souls or other non-testable concepts. Personally, I suspect the property that creates moral concern is related to our ability to think recursively (i.e. make and comprehend meta-statements). Whatever the property of moral concern is based on, it requires me to say things like: “It is wrong to kill a Klingon iff it would be wrong to kill a human in similar circumstances.”
If you come across a creature of moral concern in the wild, and it wants to die (assuming no thinking defects like depression), then helping may not be immoral. But if you create a creature that way, you can’t ignore that you caused the desire to die in that creature.
One might think that it is possible to create human-level intelligence creatures that are not entitled to moral concern because they lack the relevant properties. That’s not incoherent, but every human-intelligent species in our experience is entitled to moral concern (yes, I’m aware that the sample size is extremely small).
I think you’re falling into the trap of anthropomorphizing such intelligences—as if their “lives” would somehow be in conflict with their minds’ goalset, as soldiers’ lives tend to be in conflict with their own goalset.
A rational soldier’s life is not in conflict with her goalset, only with propagation of her genes.
You may just as well condemn as immoral the creation of children whose “only purpose” is to live lives full of satisfaction, discovery, creativity, learning, productivity, happiness, love, pleasure, and joy.
Morality is not written in the equations of the universe, but I think a fair summary of the morality we currently follow is that it attempts to live to the highest and best of potential. And it is totally fair for me to point out a moral position inconsistent with that morality.
There is something about humans that makes them objects of moral concern. It isn’t the ability to feel pain, because cows can feel pain. For the same reason, it isn’t experiencing sensation. And it isn’t intelligence, because dolphins are pretty smart.
I have moral concern for cows and dolphins both (much more for the latter).
We’re not communicating here. You’ve not responded to any of my questions; you’ve just launched into an essay that assumes new points that I would not concede.
A rational soldier’s life is not in conflict with her goalset, only with propagation of her genes.
Does a rational soldier enjoy being shot at? If she doesn’t enjoy that, then her life is at least somewhat in conflict with her preferences; she may have deeper preferences (e.g. ‘defending her nation’) that outweigh this, but this at best makes being shot at a necessary evil; it doesn’t turn it into a delight.
If we could have soldiers that enjoy being shot at, much like players of shoot-em-up games do, then their lives wouldn’t be at all in conflict with their desires.
Morality is not written in the equations of the universe, but I think a fair summary of the morality we currently follow is that it attempts to live to the highest and best of potential.
“Highest and best” according to who? And attempting to live personally to the highest and best of potential, or forcing others to live to such?
I eat beef. And if I saw a dolphin about to be killed by a shark and could save it easily, I wouldn’t think I had made an immoral choice by allowing the shark attack. But my answers are different for people.
Does a rational soldier enjoy being shot at? If she doesn’t enjoy that, then her life is at least somewhat in conflict with her preferences; she may have deeper preferences (e.g. ‘defending her nation’) that outweigh this, but this at best makes being shot at a necessary evil; it doesn’t turn it into a delight.
I don’t think it makes sense to analyze the morality of considerations leading to a choice, because individual values conflict all the time. Alice would prefer a world without enemies who shoot at her. But she believes that it is immoral to let barbarians win. So she chooses to be a soldier. That choice is the subject of moral analysis, not her decision-making process.
“Highest and best” according to who?
That’s an excellent question. All I can say is that you have to ground morality somewhere. And there is no reason that “ought” statements will universalize.
And attempting to live personally to the highest and best of potential, or forcing others to live to such?
If we’re still talking about parenting, then I assert that children aren’t rational. Otherwise, I don’t think I should force a particular kind of morality. Which loops right back around to noticing that different moralities can come into conflict. And balancing conflicting moralities is hard (perhaps undecidable in principle).
So do I. That doesn’t mean I don’t have any moral concern for cows.
And if I saw a dolphin about to be killed by a shark and could save it easily, I wouldn’t think I had made an immoral choice by allowing the shark attack.
You’re putting improper weight on one side of the equation by putting yourself in a position where you’d have to intervene (perhaps with enough violence to kill the shark, and certainly depriving it of a meal) if you had a moral concern.
Let’s change the equation a bit: You are given a box, where you can press a button and get one dollar every time you press it, but a dolphin gets tortured to death if you do so. Do you press the button? I wouldn’t.
I don’t think it makes sense to analyze the morality of considerations leading to a choice, because individual values conflict all the time.
You’re drifting out of the issue, which is not about choices, but about preferences.
In front of us are two buttons. When the Blue button is pushed, a cow is killed. When the Red button is pushed, a human is killed. What price for each button? People push Blue every workday, and the price is some decent but not extravagant hourly wage. There are enormous and complicated theories about when to push Red. For example, there is a whole category of theories about “just war” that aim to decide when generals can push Red. What explains the difference in price between Blue and Red? Cows are not creatures of moral concern in the way that humans are. That’s all I mean by “creature of moral concern.”
Ok, back to torture. Because cows are not creatures of moral concern, the reason not to torture them is different from the reason not to torture people. We shouldn’t torture people for the same reason we shouldn’t kill them. But we shouldn’t torture cows because it shows some lack of concern for causing pain, which seems strongly correlated with willingness to cause harm to people.
I don’t think it makes sense to analyze the morality of considerations leading to a choice, because individual values conflict all the time.
You’re drifting out of the issue, which is not about choices, but about preferences.
I agree that our choices can conflict with some of our values. How does that show that we are morally permitted to create creatures of moral concern that want to die?
People push Blue every workday, and the price is some decent but not extravagant hourly wage.
But those people, by pushing the button, are putting tasty food on the plates of others. Disentangling this from everything seems tricky at best: if the animal killed is not going to be used to fulfill human needs and wants, then injunctions against waste might be weighing in...
True. But that’s different in kind from the reasons we use not to kill humans. And my only point was that basically all considerations about how to treat animals are different in kind from considerations about how to treat humans.
Alice and Bob are eating together, and Bob doesn’t finish his meal. “What a waste,” says Alice. As they leave the restaurant, someone tells them that a young, promising medical researcher has died. “What a waste,” says Bob.
In both utterances, “waste” is properly understood as waste(something). Alice meant something like waste(food). Bob meant something like waste(potential). Alice’s reference is material, Bob’s is conceptual. Those seem like clearly different kinds to me.
Yes, you could make a scale and place both references on that scale. Maybe the waste Bob noted really is a million times worse than the waste Alice noted. I don’t think that enhances understanding. In fact, I think that perspective misses something about the difference between what Alice said and what Bob said.
Is the following a reasonable paraphrase of your most recent points?
There is no hard delineation between differences in kind and differences in degree in the territory, there are only situations where one map or the other is more useful.
Assuming, that it is coherent to talk about the “territory” of morality, I think I agree with your paraphrase. But I expect that certain maps are likely to be useful much more often.
I think that classifying types of reasons actually used improves our understanding because it cuts the world at its joints. It’s subject to the same type of criticism that biological taxonomy might be subject to. And if you go abstract enough, things that look like different kind merge to become sub-examples of some larger kind. But at some point, you lose the ability to say things that are both true and useful. Like trying to say something practically useful about the differences between two species without invoking a lower category than Life.
“But we shouldn’t torture cows because it shows some lack of concern for causing pain, which seems strongly correlated with willingness to cause harm to people.”
So, let me change the question: “You are given a box, where you can press a button and get one dollar every time you press it, but a dolphin gets killed painlessly whenever you do so. Do you press the button?”
Cows are not creatures of moral concern in the way that humans are.
This is so fuzzy as to be pretty much meaningless.
I’ve already told you they’re of moral concern to me.
How does that show that we are morally permitted to create creatures of moral concern that want to die?
Since you seem to define “moral concern” as “those things that shouldn’t die”, then of course we wouldn’t be “morally permitted”.
But that’s not a commonly shared definition for moral concern—nor a very consistent one.
I probably would press the button at about the price people are paid to butcher cows. Somewhere thereabout.
This is so fuzzy as to be pretty much meaningless.
You’re right. There isn’t a word for what I’m getting at, so I used a slightly different phrase. Ok, I’ll deconstruct. I assert there is a moral property of creatures, which I’ll call blicket.
An AI whose utility function does not respond to the preferences of blicket creatures is not Friendly. An AI whose utility function does not respond to the preferences of non-blicket creatures might be Friendly. By way of example, humans are blicket creatures. Klingons are blicket creatures (if they existed). Cows are not blicket creatures.
What makes a creature have blicket? I look at the moral category, and see that it’s a property of the creature. It isn’t the ability to feel pain. Or the ability to experience sensation. And it isn’t intelligence.
One might assert that blicket doesn’t reflect any moral category. I respond by saying that there’s something that justifies not harming others even when decision-theory cooperate/defect decisions are insufficient. One might assert that blicket does not exist. I respond that the laws of physics don’t have a term for morality, but we still follow morality.
Ok, enough definition. I assert that creating a blicket creature that wants to die is immoral, absent moral circumstances approximately as compelling as those that justify killing a blicket creature.
I probably would press the button at about the price people are paid to butcher cows. Somewhere thereabout.
I don’t know what cow-butchering currently entails, but they’d probably be paid significantly less if they only had to press a button.
Also, I’m sorry, but I really can’t think of a way in which this response is an honest valuation of how much money you’d accept in order to do this task. It sounds as if you’re actually saying “I’ll do it for whatever money is socially acceptable for me to do it for”. So in short—if you lived in a cow-hating culture where people paid money for the privilege of killing a cow, you’d be willing to pay money; if you lived in a cow-revering culture where people would never kill a cow (e.g. India), you’d not do it for even a million.
Is this all you’re saying—that you’d choose to obey societal norms on this matter? This doesn’t tell me much about your own moral instinct, independent of societal approval thereof; what would you instruct society to do, if you had the role of instructing it on the matter?
I assert there is a moral property of creatures, which I’ll call blicket.
Okay, but my own view on the matter is that “blicket” is a continuum—most properties of creatures, both physical and mental, are continuums after all. Creatures probably range from having zero blickets (amoebas) to a couple blickets (reptiles) to lots of blickets (apes, dolphins) to us (the current maximum of blickets).
What makes a creature have blicket? I look at the moral category, and see that it’s a property of the creature.
I think that’s a classic example of the mind-projection fallacy. I think the reality isn’t creature.numberOfBlickets, but rather numberOfBlickets(moralAgent, creature).
I don’t know what cow-butchering currently entails, but they’d probably be paid significantly less if they only had to press a button.
It’s an assembly-line process. Cows are actually killed by blood loss, but before that happens they’re typically (kosher meat being an exception) stunned by electric shock or pithed with a captive bolt pistol. Fairly mechanical; I imagine a pushbutton process would pay less, but mainly because it’d then be unskilled labor and its operator wouldn’t have to deal with various cow fluids at close proximity.
Okay, but my own view on the matter is that “blicket” is a continuum—most properties of creatures, both physical and mental, are continuums after all. Creatures probably range from having zero blickets (amoebas) to a couple blickets (reptiles) to lots of blickets (apes, dolphins) to us (the current maximum of blickets).
Do you think that an AI that does not take into account the preferences of cows is necessarily unFriendly (using EY’s definition)? If yes, I don’t understand why you think it is acceptable to eat beef.
I think that’s a classic example of mind-projection fallacy.
That’s such a weird interpretation of what I’m saying, because I’ve consistently acknowledged that blicket is not written in the laws of physics. The properties that lead me to ascribe blicket to a creature would probably not motivate uFAI to treat that creature well.
I look at the moral category, and see that it’s a property of the creature.
Sexy(me, Jennifer Aniston) != sexy(me, Brad Pitt). Isn’t some of that difference attributable to different properties of Jennifer and Brad?
In the original article, EY says that FAI should not simulate a human because the simulated person would be sufficiently real that stopping the simulation would be unFriendly. You seem to think that nothing would be wrong with a FAI simulating an AI that wanted to die. It may well be that AIs lack blicket. But an AI does not lack blicket simply because it wants to die.
Do you think that an AI that does not take into account the preferences of cows is necessarily unFriendly (using EY’s definition)?
If I remember correctly, EY talks about Friendliness in regard to humanity, not in regard to cows—in that case the AI would take the preferences of cows into account only to the extent that the Coherent Extrapolated Volition of humanity would take them into account, no more, no less.
If yes, I don’t understand why you think it is acceptable to eat beef.
For the sake of not pretending to misunderstand you, I’ll assume you mean “I don’t understand why you think it’s acceptable to kill cows in order to have their meat”—we’re not talking about an already-butchered cow whose meat would go to waste if I didn’t eat it.
For starters, because cow-meat is yummy, and the preferences of humans severely outweigh the preferences of cows in my mind.
Now dolphin-meat or ape-meat I would not eat, and I would like to ban the killing of dolphins and apes both (outside of medical testing, in the case of apes).
I think that’s a classic example of mind-projection fallacy.
That’s such a weird interpretation of what I’m saying, because I’ve consistently acknowledged that blicket is not written in the laws of physics.
This means less than you seem to think, because after all concepts like “brains” or “genes”—or for that matter even “atoms” and “molecules”—aren’t written in the laws of physics either. So all I got from this statement of yours is that you think morality isn’t located at the most fundamental level of reality (the one occupied by quantum amplitude configurations).
And to counteract this, you made statements like “it’s a property of the creature.”
Sexy(me, Jennifer Aniston) != sexy(me, Brad Pitt). Isn’t some of that difference attributable to different properties of Jennifer and Brad?
Of course, but you said “it’s a property of the creature”—you didn’t say “it’s partially a property of the creature”, or “it’s a property of the relationship between the creature and me”.
Such miscommunication could have been avoided if you were a bit more precise in your sentences.
You seem to think that nothing would be wrong with a FAI simulating an AI that wanted to die.
Not quite. I’ve effectively said that it wouldn’t necessarily be wrong.
But an AI does not lack blicket simply because it wants to die.
I never said it would lack blicket. Blicket would make me want to help a creature achieve its aspirations, which in this context would mean helping the AI to die.
Let me remind people again that I’m not talking about the sort of “wanting to die” that a suicidal human being would possess—driven by grief or despair or guilt or hopeless tedium grinding down his soul.
Of course, but you said “it’s a property of the creature”—you didn’t say “it’s partially a property of the creature”, or “it’s a property of the relationship between the creature and me”.
Is primeness a property of a heap of five pebbles?
And is it a property of you or the pebbles that you don’t care about prime-pebbled heaps?
Okay, but my own view on the matter is that “blicket” is a continuum—most properties of creatures, both physical and mental, are continuums after all. Creatures probably range from having zero blickets (amoebas) to a couple blickets (reptiles) to lots of blickets (apes, dolphins) to us (the current maximum of blickets).
How is this use of the term different from the term “moral concern”? I’m trying to talk about creatures we give sufficient moral weight that the type of justifications for their treatment change. Killing cows takes different (and lesser) justification than killing humans.
I never said it would lack blicket. Blicket would make me want to help a creature achieve its aspirations, which in this context it would mean helping the AI to die.
Is it fair to say that you don’t think it makes any moral difference whether you made the AI or found it instead?
Do you think that an AI that does not take into account the preferences of cows is necessarily unFriendly (using EY’s definition)? If yes, I don’t understand why you think it is acceptable to eat beef.
Ah, but taking into account is not the same as following blindly! Surely it’s possible that the AI will consider their preferences and conclude that our having beef is more important. But in other situations their preferences will be relevant.
That’s an interesting point. But it’s hard for me to conceive of a morality based entirely on decision theory that doesn’t essentially resemble act utilitarianism. Maybe my understanding of decision theory is insufficient.
Act utilitarianism bothers me as a moral theory. I can’t demonstrate that it is false, but it seems to me that the perspective of act utilitarianism is not consistent with how we ordinarily analyze moral decisions. But maybe I’m excessively infected with folk moral philosophy.
Creating the corn would be immoral. Creating the pig would be moral—and delicious!
I think a fair summary of the morality we currently follow is that it attempts to live to the highest and best of potential
That seems like a fair summary of all moral systems according to their own standards. If so, that wouldn’t tell us about the moral system since it would be true of all of them.
I disagree. Otherwise, prevention of suicide of the depressed is difficult to justify.
That seems like a fair summary of all moral systems according to their own standards.
On the one hand, I agree that it doesn’t narrow down the universe of acceptable moralities very much. But consider an absolute monarchist morality: Alexander’s potential is declared to be monarch of the nation, while Ivan’s is declared to be serf. All decided at birth, before knowing anything about either person. That’s not a morality that values everyone reaching their potential.
Otherwise, prevention of suicide of the depressed is difficult to justify.
Assuming one has the intuitions that creating the pig would be moral and that not preventing the suicide of the depressed would be immoral, one may be wrong in considering them analogous. But if they are, you gave no reason to prefer giving up the one intuition instead of the other.
I don’t think they are analogous. Depression involves unaligned preferences—perhaps always, but at least very often. If the pig’s System 1 mode of thinking wants him eaten, and his System 2 mode of thinking wants him eaten, and the knife feels good to him, and his family would be happy to have him eaten, etc., all is aligned, and we don’t have to solve the nature of preferences and how to rank them to say the pig’s creation and death are fine.
It seems to me that creating the pig is analogous to creating suicidal depression in a human who is not depressed.
you gave no reason to prefer giving up the one intuition instead of the other.
As a starting point, a moral theory should add up to normal. I’m not saying it’s an iron law (people once thought chattel slavery was morally normal). But the burden is on justifying the move away from normal.
By manipulation of environment and social engineering, the super-soldiers think that their only reason for existence is fighting war on our behalf. Questioning the purpose of the war is suppressed, as are non-productive impulses like art, scientific curiosity, or socializing. In short, Anti-Fun.
I’m not saying it would be possible to create these conditions in a human-intelligence population. I’m saying it would be immoral to try.
So they would naturally feel differently about fighting in wars with different causes and justifications? If not, why suppress it?
non-productive impulses like art, scientific curiosity, or socializing.
If they have desires to do these things then the reason they were created may have been to fight, but this is not their “only purpose” from their perspective.
Yes, what’s immoral is the shoehorning. They would think that there is more to life than what they do, if only they were allowed freedom of thought.
One might think that it is possible to create human-level intelligence creatures that won’t think that way. But we’ve never seen such a species (yes, very small sample size), and I’m not convinced it is possible.
So in short you aren’t talking about a race of supersoldiers whose only purpose is really to fight wars for us, you’re talking about a race of supersoldiers who are pressured into believing that their only purpose is to fight wars for us, against their actual inner natures that would make them e.g. peaceful artists or musicians instead.
At this point, we’re not talking about remotely the same thing, we’re talking about completely opposite things—as opposite as fulfilling your true utility function and being forced to go against it—as opposite as Fun and Anti-Fun.
They’re less like us than animals: they have no flesh and weren’t born of a mother. Why do you care about them just because they make a false imitation of our thoughts?
Because in an information-theoretic sense they may be more similar to my mind than the minds of most animals are.
Ok, forget the poor analogy with racism; why racism is bad is a whole separate issue that I had no intention of getting into. Let me just try to explain my point better.
Human-centrism is a bias in thinking which makes us assume things like “The earth is the centre of the universe”, “Only humans have consciousness” and “Morality extends to things approximately as far as they seem like humans”. I personally think it is only through this bias that we would worry about the possible future murder of human simulations before we worry about the possible future murder of the AIs intelligent enough to simulate a human in the first place.
Human-centrism as fighting for our tribe and choosing not to respect the rights of AIs is a different issue. Choosing not to respect the rights of AIs is different from failing to appreciate the potential existence of those rights.
Choosing not to respect the rights of AIs is different from failing to appreciate the potential existence of those rights.
This sentence seems to imply a deontological moral framework, where rights and rules are things-by-themselves, as opposed to guidelines which help a society optimize whatever-it-is-it-wants-to-optimize. There do exist deontologists in LessWrong, but many of us are consequentialists instead.
Can’t I use the word “rights” without losing my status as a consequentialist? I simply use the concept of a “being with a right to live” as shorthand for “a being for which murder would, in the majority of circumstances and all else being equal, be very likely to be a poor moral choice”. You can respect the rights of something without holding a deontological view that rights are somehow the fundamental definition of morality.
That’s fine for the most part, but in that case do you really feel that same empathy for these proposed simulations? If all you care about is humans maybe you shouldn’t care about these simulations being killed anyway. They’re less like us than animals, they have no flesh and weren’t born of a mother, why do you care about them just because they make a false imitation of our thoughts?
More importantly, though, I wasn’t talking about human-centrism as a moral issue but a logical one. Racism is bad because it makes us form groups and mistreat people that are different from us. Racism is stupid, on the other hand, because it makes us inclined to think people of a different race are more different from ourselves than turns out to actually be the case. Similarly, it’s the logical, not the moral, errors of human-centrism that are really relevant to the discussion. If there’s an ethical issue with killing simulations, there’s an ethical issue with killing AIs. Resolve one and you can probably resolve the other. Whether or not you care about either problem is kind of beside the point.
That I don’t morally support human-centrism either is also kind of beside the point.
Doesn’t mistreatment suppose there is some correct form of treatment, and wouldn’t a racist believe they are using the correct treatment?
That is, I don’t think this sentence is getting to the heart of why/if/when racism is bad. Your following sentence is closer but still not there: oftentimes, increasing differences between people is a winning move, not a stupid one.
I suspect the causation goes the other way. I am looking for a study I recently read about that suggested this.
White subjects had difficulty recalling the specific content of what individual black actors in videos had said relative to how well they recalled what individual white actors had said. When videos of black actors were of them arguing for opposite sides of an issue, the subjects were able to match content to speakers equally well for black and white actors. The theory is that race was used as a proxy for group membership until something better came along. Once people were grouped by ideas, the black speakers were thought of as individuals.
The evolutionary story behind this is that people evolved with group politics being important, but almost never seeing someone of noticeably different race. It makes sense that we evolved mechanisms for dealing with groups in general and none for race in particular. It makes sense that in absence of anything better, we might group by appearance, and it makes sense that we would err to perceive irrelevant patterns/groups as the cost of never or rarely missing relevant patterns/groups.
Yes.
Because I do—and I don’t want to change. (This is the same justification that I have for caring about humans, or myself.)
It is the logical problem that I reject. There is no inconsistency in being averse to racism but not averse to speciesism.
On reflection, this seems wrong. The fact that some in-group/out-group behavior is rational does not mean that in-group bias is rational. To put it slightly differently, killing a Klingon is wrong iff killing a human would be wrong in those circumstances.
Doesn’t follow, for several reasons:
If the issue is with the termination of subjective experiences, and if we assume that people-simulations have qualia (let’s grant it for the sake of argument), it still doesn’t follow that every optimization algorithm of sufficient calculational power also has qualia.
If the ethical issue is with violation of individuals’ rights, there’s nothing to prevent us from constructing only AIs that are only too happy to consent to be deleted, or indeed which strongly desire to be deleted eventually—but most people-simulations would presumably not want to die, since most people don’t want to die.
Indeed!
(This is not to say I don’t consider it a potential ethical issue to be actively creating creatures that consent as a way to do things that would be otherwise abhorrent.)
Creating such entities would be just as immoral as creating a race of human-intelligence super-soldiers whose only purpose was to fight our wars for us.
I feel that this sort of response (filled with moral indignation, but no actual argument) is far beneath the standards of LessWrong.
First of all, I’m talking about human-level (or superhuman-level) intelligence, not human intelligence—which would imply human purpose, human emotion, human utility functions etc. I’m talking about an optimization process which is at least as good as humans are in said optimization—it need not have any sense of suffering, it need not have any sense of self or subjective experience even, and certainly not any sense that it needs to protect said self. Those are all evolved instincts in humans.
Secondly, can you explain why you feel the creation of such super-soldiers would be immoral? And immoral as opposed to what, sending people to die who do not want to die? Who would prefer to be somewhere else, and suffer for being there?
Thirdly, I would like to know if you’re using some deontology or virtue-ethics to derive your sense of morality. If you’re using consequentialism though, I think you’re falling into the trap of anthropomorphizing such intelligences—as if their “lives” would somehow be in conflict with their minds’ goalset; as soldiers’ lives tend to be in conflict with their own goalsets. You may just as well condemn as immoral the creation of children whose “only purpose” is to live lives full of satisfaction, discovery, creativity, learning, productivity, happiness, love, pleasure, and joy—just because they don’t possess the purpose of paperclipping the universe.
There is something about humans that makes them objects of moral concern. It isn’t the ability to feel pain, because cows can feel pain. For the same reason, it isn’t experiencing sensation. And it isn’t intelligence, because dolphins are pretty smart.
I’m not trying to evoke souls or other non-testable concepts. Personally, I suspect the property that creates moral concern is related to our ability to think recursively (i.e. make and comprehend meta-statements). Whatever the property of moral concern is based on, it requires me to say things like: “It is wrong to kill a Klingon iff it would be wrong to kill a human in similar circumstances.”
If you come across a creature of moral concern in the wild, and it wants to die (assuming no thinking defects like depression), then helping may not be immoral. But if you create a creature that way, you can’t ignore that you caused the desire to die in that creature.
One might think that it is possible to create human-level intelligence creatures that are not entitled to moral concern because they lack the relevant properties. That’s not incoherent, but every human-intelligent species in our experience is entitled to moral concern (yes, I’m aware that the sample size is extremely small).
A rational soldier’s life is not in conflict with her goalset, only with propagation of her genes.
Morality is not written in the equations of the universe, but I think it is a fair summary of the morality we currently follow that it attempts to live to the highest and best of our potential. And it is totally fair for me to point out a moral position inconsistent with that morality.
I have moral concern for cows and dolphins both (much more for the latter).
We’re not communicating here. You’ve not responded to any of my questions, just launched into an essay that assumes new points that I would not concede.
Does a rational soldier enjoy being shot at? If she doesn’t enjoy that, then her life is at least somewhat in conflict with her preferences; she may have deeper preferences (e.g. ‘defending her nation’) that outweigh this, but this at best makes being shot at a necessary evil, it doesn’t turn it into a delight.
If we could have soldiers that enjoy being shot at, much like players of shoot-em-up games do, then their lives wouldn’t be at all in conflict with their desires.
“Highest and best” according to whom? And attempting to live personally to the highest and best of potential, or forcing others to live to such?
I eat beef. And if I saw a dolphin about to be killed by a shark and could save it easily, I wouldn’t think I had made an immoral choice by allowing the shark attack. But my answers are different for people.
I don’t think it makes sense to analyze the morality of considerations leading to a choice, because individual values conflict all the time. Alice would prefer a world without enemies who shot at her. But she believes that it is immoral to let barbarians win. So she chooses to be a soldier. That choice is the subject of moral analysis, not her decision-making process.
That’s an excellent question. All I can say is that you have to ground morality somewhere. And there is no reason that “ought” statements will universalize.
If we’re still talking about parenting, then I assert that children aren’t rational. Otherwise, I don’t think I should force a particular kind of morality. Which loops right back around to noticing that different moralities can come into conflict. And balancing conflicting moralities is hard (perhaps undecidable in principle).
So do I. That doesn’t mean I don’t have any moral concern for cows.
You’re putting improper weight on one side of the equation by putting yourself in a position where you’d have to intervene (perhaps with violence enough to kill the shark, and certainly depriving it of a meal) if you had a moral concern.
Let’s change the equation a bit: You are given a box, where you can press a button and get one dollar every time you press it, but a dolphin gets tortured to death if you do so. Do you press the button? I wouldn’t.
You’re drifting out of the issue, which is not about choices, but about preferences.
In Milliways, Ameglian Major Cow have moral concern for you!
Let’s leave torture aside for a moment.
In front of us are two buttons. When the Blue button is pushed, a cow is killed. When the Red button is pushed, a human is killed. What price for each button? People push Blue every workday, and the price is some decent but not extravagant hourly wage. There are enormous and complicated theories about when to push Red. For example, there is a whole category of theories about “just war” that aim to decide when generals can push Red. What explains the difference in price between Blue and Red? Cows are not creatures of moral concern in the way that humans are. That’s all I mean by “creature of moral concern.”
Ok, back to torture. Because cows are not creatures of moral concern, the reason not to torture them is different from the reason not to torture people. We shouldn’t torture people for the same reason we shouldn’t kill them. But we shouldn’t torture cows because it shows some lack of concern for causing pain, which seems strongly correlated with willingness to cause harm to people.
I agree that our choices can conflict with some of our values. How does that show that we are morally permitted to create creatures of moral concern that want to die?
But those people, by pushing the button, are putting tasty food on the plates of others. Disentangling this from everything seems tricky at best: if the animal killed is not going to be used to fulfill human needs and wants, then injunctions against waste might be weighing in...
True. But that’s different in kind from the reasons we use not to kill humans. And my only point was that basically all considerations about how to treat animals are different in kind from considerations about how to treat humans.
I am not at all confident that I can intuitively distinguish a difference in kind from a massive difference in degree.
In both utterances, “waste” is properly understood as waste(something). Alice meant something like waste(food). Bob meant something like waste(potential). Alice’s reference is material, Bob’s is conceptual. Those seem like clearly different kinds to me.
Yes, you could make a scale and place both references on that scale. Maybe the waste Bob noted really is a million times worse than the waste Alice noted. I don’t think that enhances understanding. In fact, I think that perspective misses something about the difference between what Alice said and what Bob said.
Is the following a reasonable paraphrase of your most recent points?
Assuming, that it is coherent to talk about the “territory” of morality, I think I agree with your paraphrase. But I expect that certain maps are likely to be useful much more often.
I think that classifying types of reasons actually used improves our understanding because it cuts the world at its joints. It’s subject to the same type of criticism that biological taxonomy might be subject to. And if you go abstract enough, things that look like different kinds merge to become sub-examples of some larger kind. But at some point, you lose the ability to say things that are both true and useful. Like trying to say something practically useful about the differences between two species without invoking a lower category than Life.
So, let me change the question: “You are given a box, where you can press a button and get one dollar every time you press it, but a dolphin gets killed painlessly whenever you do so. Do you press the button?”
This is so fuzzy as to be pretty much meaningless.
I’ve already told you they’re of moral concern to me.
Since you seem to define “moral concern” as “those things that shouldn’t die”, then of course we wouldn’t be “morally permitted”.
But that’s not a commonly shared definition for moral concern—nor a very consistent one.
I probably would press the button at about the price people are paid to butcher cows. Somewhere thereabout.
You’re right. There isn’t a word for what I’m getting at, so I used a slightly different phrase. Ok, I’ll deconstruct. I assert there is a moral property of creatures, which I’ll call blicket.
An AI whose utility function does not respond to the preferences of blicket creatures is not Friendly. An AI whose utility function does not respond to the preferences of non-blicket creatures might be Friendly. By way of example, humans are blicket creatures. Klingons are blicket creatures (if they existed). Cows are not blicket creatures.
What makes a creature have blicket? I look at the moral category, and see that it’s a property of the creature. It isn’t ability to feel pain. Or ability to experience sensation. And it isn’t intelligence.
One might assert that blicket doesn’t reflect any moral category. I respond by saying that there’s something that justifies not harming others even when decision-theory cooperate/defect decisions are insufficient. One might assert that blicket does not exist. I respond that the laws of physics don’t have a term for morality, but we still follow morality.
Ok, enough definition. I assert that creating a blicket creature that wants to die is immoral, absent moral circumstances approximately as compelling as those that justify killing a blicket creature.
I don’t know what cow-butchering currently entails, but they’d probably be paid significantly less if they only had to press a button.
Also, I’m sorry but I really can’t think of a way in which this response is an honest valuation of how much money you’d accept in order to do this task. It sounds as if you’re actually saying “I’ll do it for whatever money is socially acceptable for me to do it for”. So in short—if you lived in a cow-hating culture where people paid money for the privilege of killing a cow, you’d be willing to pay money; if you lived in a cow-revering culture where people would never kill a cow (e.g. India), you’d not do it for even a million.
Is this all you’re saying—that you’d choose to obey societal norms on this matter? This doesn’t tell me much about your own moral instinct, independent of societal approval thereof; that is, what you would have society do if you had the role of instructing it on the matter.
Okay, but my own view on the matter is that “blicket” is a continuum—most properties of creatures, both physical and mental, are continuums after all. Creatures probably range from having zero blickets (amoebas) to a couple blickets (reptiles) to lots of blickets (apes, dolphins) to us (the current maximum of blickets).
I think that’s a classic example of the mind-projection fallacy. I think the reality isn’t creature.numberOfBlickets, but rather numberOfBlickets(moral agent, creature).
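To make the distinction concrete, here is a minimal Python sketch (the function name, agents, and numbers are all invented for illustration) of blicket modelled as a two-place relation between a moral agent and a creature, rather than as an intrinsic attribute of the creature alone:

```python
# Illustrative sketch: "blicket" as a relation, not an attribute.
# All names and values below are invented for the sake of the example.

def number_of_blickets(agent: str, creature: str) -> float:
    """Toy lookup standing in for a given agent's actual moral responses."""
    table = {
        ("human", "human"): 1.0,
        ("human", "dolphin"): 0.6,
        ("human", "cow"): 0.2,
        ("human", "amoeba"): 0.0,
    }
    # Unknown agent/creature pairs default to zero moral weight.
    return table.get((agent, creature), 0.0)

# Under this model the same creature can carry different moral weight for
# different agents -- something a one-place creature.numberOfBlickets
# attribute cannot express.
print(number_of_blickets("human", "dolphin"))  # 0.6
print(number_of_blickets("human", "cow"))      # 0.2
```

The design point is just that the second argument alone does not determine the output; the agent doing the valuing is part of the function’s signature.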
It’s an assembly-line process. Cows are actually killed by blood loss, but before that happens they’re typically (kosher meat being an exception) stunned by electric shock or pithed with a captive bolt pistol. Fairly mechanical; I imagine a pushbutton process would pay less, but mainly because it’d then be unskilled labor and its operator wouldn’t have to deal with various cow fluids at close proximity.
Do you think that an AI that does not take into account the preferences of cows is necessarily unFriendly (using EY’s definition)? If yes, I don’t understand why you think it is acceptable to eat beef.
That’s such a weird interpretation of what I’m saying, because I’ve consistently acknowledged that blicket is not written in the laws of physics. The properties that lead me to ascribe blicket to a creature would probably not motivate uFAI to treat that creature well.
Sexy(me, Jennifer Aniston) != sexy(me, Brad Pitt). Isn’t some of that difference attributable to different properties of Jennifer and Brad?
In the original article, EY says that FAI should not simulate a human because the simulated person would be sufficiently real that stopping the simulation would be unFriendly. You seem to think that nothing would be wrong with a FAI simulating an AI that wanted to die. It may well be that AIs lack blicket. But an AI does not lack blicket simply because it wants to die.
If I remember correctly, EY talks about Friendliness in regards to humanity, not in regards to cows—in that case the AI would take the preferences of cows into account only to the extent that the Coherent Extrapolated Volition of humanity would take it into account, no more, no less.
For the sake of not pretending to misunderstand you I’ll assume you mean “I don’t understand why you think it’s acceptable to kill cows in order to have their meat.” We’re not talking about an already-butchered cow whose meat would go to waste if I didn’t eat it.
For starters, because cow-meat is yummy, and the preferences of humans severely outweigh the preferences of cows in my mind.
Now dolphin-meat or ape-meat, I would not eat, and I would like to ban the killing of dolphins and apes both (outside of medical testing in the case of apes).
This means less than you seem to think, because after all concepts like “brains” or “genes” or for that matter even “atoms” and “molecules” aren’t written in the laws of physics either. So all I got from this statement of yours is that you think morality isn’t located at the most fundamental level of reality (the one occupied by quantum amplitude configurations).
And to counter this you made statements like “it’s a property of the creature.”
Of course, but you said “it’s a property of the creature”—you didn’t say “it’s partially a property of the creature”, or “it’s a property of the relationship between the creature and me”.
Such miscommunication could have been avoided if you were a bit more precise in your sentences.
Not quite. I’ve effectively said that it wouldn’t necessarily be wrong.
I never said it would lack blicket. Blicket would make me want to help a creature achieve its aspirations, which in this context would mean helping the AI to die.
Let me remind people again that I’m not talking about the sort of “wanting to die” that a suicidal human being would possess—driven by grief or despair or guilt or hopeless tedium grinding down his soul.
Is primeness a property of a heap of five pebbles?
And is it a property of you or the pebbles that you don’t care about prime-pebbled heaps?
How is this use of the term different from the term “moral concern”? I’m trying to talk about creatures we give sufficient moral weight that the type of justifications for their treatment change. Killing cows takes different (and lesser) justification than killing humans.
Is it fair to say that you don’t think it makes any moral difference whether you made the AI or found it instead?
Ah, but taking into account is not the same as following blindly! Surely it’s possible that the AI will consider their preferences and conclude that our having beef is more important. But in other situations their preferences will be relevant.
Decision-theory still has big open problems, so there is a limit to how much you can trust an intuition like this. Maybe it’s more than an intuition?
That’s an interesting point. But it’s hard for me to conceive of a morality based entirely on decision theory that doesn’t essentially resemble act utilitarianism. Maybe my understanding of decision theory is insufficient.
Act utilitarianism bothers me as a moral theory. I can’t demonstrate that it is false, but it seems to me that the perspective of act utilitarianism is not consistent with how we ordinarily analyze moral decisions. But maybe I’m excessively infected with folk moral philosophy.
But if you create a creature that way, you can’t ignore that you caused the desire to die in that creature.
Pig that wants to be eaten != genetically modified corn that begs for death
Creating the corn would be immoral. Creating the pig would be moral—and delicious!
That seems like a fair summary of all moral systems according to their own standards. If so, that wouldn’t tell us about the moral system since it would be true of all of them.
I disagree. Otherwise, prevention of suicide of the depressed is difficult to justify.
On the one hand, I agree that it doesn’t narrow down the universe of acceptable moralities very much. But consider an absolute monarchist morality: Alexander’s potential is declared to be that of monarch of the nation, while Ivan’s is declared to be that of serf. All decided at birth, before knowing anything about either person. That’s not a morality that values everyone reaching their potential.
Assuming one has the intuitions that creating the pig would be moral and that not preventing suicide of the depressed is immoral, one may be wrong in considering them analogous. But if they are, you gave no reason to prefer giving up the one intuition instead of the other.
I don’t think they are analogous. Depression involves unaligned preferences, perhaps always, but at least very often. If the pig’s system 1 mode of thinking wants him eaten, and his system 2 mode of thinking wants him eaten, and the knife feels good to him, and his family would be happy to have him eaten, etc., all is aligned and we don’t have to solve the nature of preferences and how to rank them to say the pig’s creation and death are fine.
It seems to me that creating the pig is analogous to creating suicidal depression in a human who is not depressed.
As a starting point, a moral theory should add up to normal. I’m not saying it’s an iron law (people once thought chattel slavery was morally normal). But the burden is on justifying the move away from normal.
Why don’t you try to think some of the many ways in which it’s NOT analogous?
What does this mean?
What is evil about creating house elves?
These comments state my objections pretty well.