I think you may have taken me to be talking about whether it was acceptable or moral in the sense that society will allow it; that was not my intent. Society allows many unwise, inefficient things and no doubt will do so for some time.
My question was simply whether you thought it wise. If we do make an FAI and encode it with some idealised version of our own morality, then do we want a rule that says ‘Kill everything that looks unlike yourself’? If we end up on the downside of a vast power gradient with other humans, do we want them thinking that everything that has little or no value to them should be for the chopping block?
In a somewhat more pithy form, I guess what I’m asking you is: Given that you cannot be sure you will always be strong enough to have things entirely your way, how sure are you this isn’t going to come back and bite you in the arse?
If it is unwise, then it would make sense to weaken that strand of thought in society—to destroy less out of hand, rather than more. That the strand is already quite strong in society would not alter that.
You did not answer me on the human question—how we’d like powerful humans to think.
No. But we do want a rule that says something like “the closer things are to being people, the more importance should be given to them”. As a consequence of this rule, I think it should be legal to kill your newborn children.
This sounds fine as long as you and everything you care about are and always will be included in the group of ‘people.’ However, by your own admission (earlier in the discussion, to wedrifid), you’ve defined people in terms of how closely they realise your ideology:
Extremely young children are lacking basically all of the traits I’d want a “person” to have.
You’ve made it something fluid; a matter of mood and convenience. If I make an AI and tell it to save only ‘people,’ it can go horribly wrong for you—maybe you’re not part of what I mean by ‘people.’ Maybe by people I mean those who believe in some religion or other. Maybe I mean those who are close to a certain processing capacity—and then what happens to those who exceed that capacity? And surely the AI itself would do so....
There are a lot of ways it can go wrong.
I’m observably a person.
You observe yourself to be a person. That’s not necessarily the same thing as being observably a person to someone else operating with different definitions.
Any AI which concluded otherwise is probably already so dangerous that worrying about how my opinions stated here would affect it is probably completely pointless. So… pretty sure.
The opinion you state may influence what sort of AI you end up with. And at the very least it seems liable to influence the sort of people you end up with.
Oh, and I’m never encouraging killing your newborns, just arguing that it should be allowed (if done for something other than sadism).
-shrug- You’re trying to weaken the idea that newborns are people, and are arguing for something that, I suspect, would increase the occurrence of their demise. Call it what you will.
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don’t have a good definition for person, but no reasonable partial definition I can come up with includes babies.
How did I misinterpret? I read that you don’t include babies and I said that I do include babies. That’s (preference) disagreement, not a problem with interpretation.
This line gave me the impression that you thought I was saying I want my definition of “person”, for the moral calculus, to include things like “worthwhile”. Which was not what I was saying -
Intended as a tangential observation about my perceptions of people. (Some of them really are easier for me to model as objects running a machiavellian routine.)
If you don’t understand the distinction between “legal” and “encouraged”, we’re going to have a very difficult time communicating.
“Encouraged” is very clearly not absolute but relative here, “somewhat less discouraged than now” can just be written as “encouraged” for brevity’s sake.
I think I must have been unclear, since both you and wedrifid seemed to interpret the wrong thing. What I meant was that I don’t have a good definition for person, but no reasonable partial definition I can come up with includes babies. I didn’t at all mean that, just because I would like people to be nice to each other and so on, I would consider people who aren’t nice not to be people. I’d intended to convey this distinction by the quotation marks.
How are you deciding whether your definition is reasonable?
Obviously. There are a lot of ways any AI can go wrong. But you have to do something. Is your rule “don’t kill humans”? For what definition of human, and isn’t that going to be awfully unfair to aliens? I think “don’t kill people” is probably about as good as you’re going to do.
‘Don’t kill anything that can learn,’ springs to mind as a safer alternative—were I inclined to program this stuff in directly, which I’m not.
I don’t expect us to be explicitly declaring these rules; I expect the moral themes prevalent in our society—or at least an idealised model of part of it—will form much of the seed for the AI’s eventual goals. I know that the moral themes prevalent in our society form much of the seed for the eventual goals of people.
In either case, I don’t expect us to be in charge. Which makes me kinda concerned when people talk about how we should be fine with going around offing the lesser life-forms.
I don’t want the rule to be “don’t kill people” for whatever values of “kill” and “people” you have in your book. For all I know you’re going to interpret this as something I’d understand more like “don’t eat pineapples”. I want the rule to be “don’t kill people” with your definitions in accordance with mine.
Yet my definitions are not in accordance with yours. And, if I apply the rule that I can kill everything that’s not a person, you’re not going to get the results you desire.
It’d be great if I could just say ‘I want you to do good—with your definition of good in accordance with mine.’ But it’s not that simple. People grow up with different definitions—AIs may well grow up with different definitions—and if you’ve got some rule operating over a fuzzy boundary like that, you may end up as paperclips, or dogmeat or something horrible.
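To make that worry concrete, here is a deliberately toy sketch (every predicate, trait and example in it is invented purely for illustration): the same rule, run under two different definitions of ‘person’, ends up protecting different sets of things.

```python
# Toy illustration only: two agents share the rule "never harm people",
# but load different (invented) definitions of "person".

def person_by_species(x):
    # One made-up definition: biology is what counts.
    return x["species"] == "human"

def person_by_capability(x):
    # Another made-up definition: cognitive traits are what count.
    return x["can_learn"] and x["models_others"]

def protected(definition, things):
    # The shared rule: everything the definition calls a person is off-limits.
    return {x["name"] for x in things if definition(x)}

things = [
    {"name": "adult",   "species": "human", "can_learn": True, "models_others": True},
    {"name": "newborn", "species": "human", "can_learn": True, "models_others": False},
    {"name": "wolf",    "species": "canid", "can_learn": True, "models_others": True},
    {"name": "AI",      "species": "none",  "can_learn": True, "models_others": True},
]

print(protected(person_by_species, things))     # protects adult and newborn
print(protected(person_by_capability, things))  # protects adult, wolf and AI
```

Same rule, same world, different ‘people’.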
If you do think both of the above things, then my task is either to understand why you don’t feel that infanticide should be legal or to point out that perhaps you really would agree that infanticide should be legal if you stopped and seriously considered the proposition for a bit.
I’m not certain whether or not it’s germane to the broader discussion, but “think X is immoral” and “think X should be illegal” are not identical beliefs.
I was with you, until your summary. Suppose hypothetically that I think “don’t kill people” is a good broad moral rule, and I think babies are people. It seems to follow from what you said that I therefore ought to agree that infanticide should be legal.
If that is what you meant to say, then I am deeply confused. If (hypothetically) I think babies are people, and if (hypothetically) I think “don’t kill people” is a good law, then all else being equal I should think “don’t kill babies” is a good law. That is, I should believe that infanticide ought not be any more legal than murder in general.
It seems like one of us dropped a negative sign somewhere along the line. Perhaps it was me, but if so, I seem incapable of finding it again.
Oh good! I don’t usually nitpick about such things, but you had me genuinely puzzled.
Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it’s arguably better at it. Have I committed a serious moral wrong if I delete its source code?
If I were programming an AI to be a perfect world-guiding moral paragon, I’d rather have it keep the spam filter in storage (the equivalent of a retirement home, or cryostasis) than delete it for the crime of obsolescence. Digital storage space is cheap, and getting cheaper all the time.
Somewhat late, I must have missed this reply agessss ago when it went up.
there’s a bunch of things in my mind for which the label “person” seems appropriate [...] There’s also a bunch of things for which said label seems inappropriate
That’s not a reasoned way to form definitions that have any more validity as referents than lists of what you approve of. What you’re doing is referencing your feelings and seeing what the objects of those feelings have in common. It so happens that I feel that infants are people. But we’re not doing anything particularly logical or reasonable here—we’re not drawing our boundaries using different tools. One of us just thinks they belong on the list and the other thinks they don’t.
If we try to agree on a common list: well, you’re agreeing that aliens and powerful AIs go on the list—so biology isn’t the primary concern. If we try to draw a line through the commonalities, what are we going to get? All of them seem able to gather, store, process and apply information to some ends. Even infants can—they’re just not particularly good at it yet.
Conversely, what do all your other examples have in common that infants don’t?
Pigs can learn, without a doubt. Even if from this you decide not to kill pigs, the Bayesian spam filter that keeps dozens of viagra ads per day from cluttering up my inbox is also undoubtedly learning. Learning, indeed, in much the same way that you or I do, or that pigs do, except that it’s arguably better at it. Have I committed a serious moral wrong if I delete its source code?
Arguably that would be a good heuristic to keep around. I don’t know that I’d call it a moral wrong – there’s not much reason to talk about morals when we can just say ‘discouraged in society’ and have everyone on the same page. But you would probably do well to have a reluctance to destroy it. One day someone vastly more complex than you may well look on you in the same light you look on your spam filter.
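As an aside, the ‘learning’ such a filter does is real but mechanically simple: it keeps per-word counts from labelled mail and plugs them into Bayes’ theorem. A stripped-down sketch of the general idea (not any particular filter’s actual code):

```python
from collections import Counter

# Minimal naive-Bayes-style spam filter sketch. "Learning" here is just
# updating per-word counts from labelled examples.
spam_counts, ham_counts = Counter(), Counter()
n_spam = n_ham = 0

def learn(text, is_spam):
    global n_spam, n_ham
    words = text.lower().split()
    if is_spam:
        spam_counts.update(words)
        n_spam += 1
    else:
        ham_counts.update(words)
        n_ham += 1

def spam_probability(text):
    # Naive Bayes with crude add-one smoothing; assumes words are independent.
    p_spam = p_ham = 1.0
    for w in text.lower().split():
        p_spam *= (spam_counts[w] + 1) / (sum(spam_counts.values()) + 2)
        p_ham *= (ham_counts[w] + 1) / (sum(ham_counts.values()) + 2)
    prior_spam = n_spam / (n_spam + n_ham)
    return p_spam * prior_spam / (p_spam * prior_spam + p_ham * (1 - prior_spam))

learn("cheap viagra now", True)
learn("meeting notes attached", False)
print(spam_probability("viagra now"))  # ~0.8: leans spam after two examples
```

Whether that counts as learning “in much the same way” a pig does is, of course, exactly the point in dispute.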
[...] odd corner-cases are almost always indicative of ideas which we would not have arrived at ourselves if we weren’t conditioned with them from an early age. I strongly suspect the prohibition on infanticide is such a corner case.
I strongly suspect that societies where people had no reluctance to go around offing their infants wouldn’t have lasted very long. Infants are significant investments of time and resources. Offing your infants is a sign that there’s something emotionally maladjusted in you – by the standards of the needs of society. If we’d not had the precept, and magically appeared out of nowhere, I think we’d have invented it pretty quick.
You think I’m going to try to program an AI in English?
Not really about you specifically. But, in general – yeah, more or less. Maybe not write the source code, but instruct it. English, or uploads, or some other incredibly high-level language with a lot of horrible dependencies built into its libraries (or concepts, or what have you) that the person using it barely understands themselves. Why? Because it will be quicker. The person who just tells the AI to guess what they mean by ‘good’ skips the step of having to calculate it themselves.
Yeah, a lack of reply notification’s a real pain in the rear.
It seems to me that this thread of the debate has come down to “Should we consider babies to be people?” There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define ‘people’ in terms of other, broader terms (this being the former case) or by defining ‘people’ via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category ‘babies’ belong.
Edit: You can skip to the next break line if you’re not interested in reading about the methodological component so much as you are continuing the infants argument.
What we’re doing here, ideally, is pattern matching. I present you with a pattern, and part of that pattern is what I’m talking about. I present you with another pattern where some things have changed, and the parts of the pattern I want to talk about are the same in that one. And I suppose, to be strict, we’d also have to present you with patterns that are fairly similar but lack the part I mean, and express disapproval of those.
Because we have a large set of existing patterns that we both know about—properties—it’s a lot quicker to make reference to some of those patterns than it is to continue to flesh out our lists and play guess-the-commonality. We can still do it both ways, as long as we can still head back down the abstraction pile fairly quickly. Compressing the search space by abstract reference to elements of patterns that members of the set share is not the same thing as starting off with a word alone, then trying to decide on the pattern, and then fitting the members to that set.
If you cannot do that exercise, if you cannot explicitly declare at least some of the commonalities you’re talking about, then it leads me to believe that your definition is incoherent. The odds that, with our vast set of shared patterns—with our language that allows us to do this compression—you can’t come up with at least a fairly rough definition fairly quickly seem remote.
If I wanted to define humans, for instance: “Most numerous group of bipedal tool users on Earth.” That was a lot quicker than having to define humans by providing examples of different creatures. We can only think the way we do because we have these little compression tricks that let us leap around the search space; abstraction doesn’t have to lead to more confusion, as long as your terms refer to things that people have experience with.
Whereas if I provided you a selection of human genetic structures—while my terms would refer exactly, while I’d even be able to stick you in front of a machine and point to it directly—would you even recognise it without going to a computer? I wouldn’t. The reference falls beyond the level of my experience.
I don’t see why you think my definition needs to be complete. We have very few exact definitions for anything; I couldn’t exactly define what I mean by human. Even by reference to genetic structure I’ve no idea where it would make sense to set the deviation from any specific example that makes you human or not human.
But let’s go with your approach:
It seems to me that mentally disabled people belong on the people list. And babies seem more similar to mentally disabled people than they do to pigs and stones.
This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quick too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.
Well, no, but you could make that argument about anything. “I, raised in a society just like this one but without taboo X, would never create taboo X on my own”: taboos are created by their effects on society. It’s the fact that society would not have been like this one without taboo X that makes it taboo in the first place.
I can come up with a rough definition, but rough definitions fail in exactly those cases where there is potential disagreement.
Eh, functioning is a very rough definition and we’ve got to that pretty quickly.
So will we rather say that we include mentally disabled humans above a certain level of functioning? The problem then is that babies almost certainly fall well below that threshold, wherever you might set it.
Well, the question is whether food animals fall beneath the level of babies. If they do, then I can keep eating them happily enough; if they don’t, I’ve got the dilemma as to whether to stop eating animals or start eating babies.
And it’s not clear to me, without knowing what you mean by functioning, that pigs or cows are more intelligent than babies. I’ve not seen one do anything like that. Predatory animals—wolves and the like, on the other tentacle, are obviously more intelligent than a baby.
As to how I’d resolve the dilemma if it did occur, I’m leaning more towards stopping eating food animals than starting to eat babies. Despite the fact that food animals are really tasty, I don’t want to put a precedent in place that might get me eaten at some point.
I assume you’ve granted that sufficiently advanced AIs ought to be counted as people.
By fiat—sufficiently advanced for what? But I suppose I’ll grant any AI that can pass the Turing test qualifies, yes.
Am I killing a person if I terminate this script before compilation completes? That is, does “software which will compile and run an AI” belong to the “people” or the “not people” group?
That depends on the nature of the script. If it’s just performing some relatively simple task over and over, then I’m inclined to agree that it belongs in the not people group. If it is itself as smart as, say, a wolf, then I’m inclined to think it belongs in the people group.
Really? It seems to me that someone did invent the taboo[1] on, say, slavery.
I suppose what I really mean to say is that they’re taboos because the taboo has some desirable effect on society.
The point I’m trying to make here is that if you started with your current set of rules minus the rule about “don’t rape people” (not to say your hypothetical morals view it as acceptable, merely undecided), I think you could quite naturally come to conclude that rape was wrong. But it seems to me that this would not be the case if instead you left out the rule about “don’t kill babies”.
It seems to me that babies are quite valuable, and became so as their survival probability went up. In the olden days infanticide was relatively common—as was death in childbirth. People had a far more casual attitude towards the whole thing.
But as the survival probability went up the investment people made, and were expected to make, in individual children went up—and when that happened infanticide became a sign of maladaptive behaviour.
Though I doubt they’d have put it in these terms: People recognised a poor gambling strategy and wondered what was wrong with the person.
And I think it would be the same in any advanced society.
Regardless, I have no doubt that pigs are closer to functioning adult humans than babies are. You’d best give up pork.
I suppose I had, yes. It never really occurred to me that they might be that intelligent—but, yeah, having done a bit of reading they seem smart enough that I probably oughtn’t to eat them.
I’d be interested in what standard of “functional” you might propose that newborns would meet, though. Perhaps give examples of things which seem close to the line, on either side? For example, do wolves seem to you like people? Should killing a wolf be considered a moral wrong on par with murder?
Wolves definitely seem like people to me, yes. Adult humans are definitely on the list and wolves do pack behaviours which are very human-like. Killing a wolf for no good reason should be considered a moral wrong on par with murder. That’s not to say that I think it should result in legal punishment on par with killing a human, mind; it’s easier to work out that humans are people than it is to work out that wolves are—it’s a reasonable mistake.
Insects like wasps and flies don’t seem like people. Red pandas do. Dolphins do. Cows… don’t. But given what I’ve discovered about pigs, that bears some checking—and now cows do. Hnn. Damn it, now I won’t be able to look at burgers without feeling sad.
All the videos with loads of blood and the like never bothered me, but learning that food-animals are that intelligent really does.
Have you imagined what life would be like if you were stupider, or were more intelligent but denied a body with which that intelligence was easy to express? If your personhood is fundamental to your identity, then as long as you can imagine being stupider and still being you, that still qualifies as a person. In terms of how old a person would have to be to have the sort of capabilities the person you’re imagining would have, at what point does your ability to empathise with the imaginary-you break down?
I have to ask, at this point: have you seriously considered the possibility that babies aren’t people?
As far as I know how, yes. If you’ve got some ways of thinking that we haven’t been talking about here, feel free to post them and I’ll do my best to run them.
If babies weren’t people the world would be less horrifying. Just as if food-animals are people the world is more horrifying. But it would look the same in terms of behaviours—people kill people all the time; I don’t expect them not to without other criteria being involved.
We are supposing that it’s still on the first step, compilation. However, with no interaction on our part, it’s going to finish compiling and begin running the sufficiently-advanced AI. Unless we interrupt it before compilation finishes, in which case it will not.
Not a person.
It is, for example, almost certainly maladaptive to allow all women to go into higher education and industry, because those correlate strongly with having fewer children and that causes serious problems. (Witness Japan circa now.) This is, as you put it, a poor gambling strategy. Does that imply it’s immoral for society to allow women to be educated? Do reasonable people look at people who support women’s rights and wonder what’s wrong with them? Of course not.
No, because we’ve had that discussion. But people did, and that attitude towards women was especially prevalent, until quite recently, in Japan (where it was among the most maladaptive for the contrary to hold). Back in the 70s and 80s the idea for women was basically to get a good education and marry the person their family picked for them. Even today, people who say they don’t want children or a relationship are looked on as rather weird, and much of the power there, in practice, works in terms of family relationships.
It just so happens there are lots of adaptive reasons to have precedents that seem to extend to cover women too. I don’t think one can seriously forward an argument that keeps women at home without creating something that can be used against the arguer in fairly horrifying ways. Even if you don’t have a fairly inclusive definition of people, it seems unwise to treat other humans in that way—you, after all, are the other human to another human.
What about fish? I’m pretty sure many fish are significantly more functional than one-month-old humans, possibly up to two or three months. (Younger than that I don’t think babies exhibit the ability to anticipate things. Haven’t actually looked this up anywhere reputable, though.)
I don’t know enough about them—given they’re so different to us in terms of gross biology I imagine it’s often going to be quite difficult to distinguish between functioning and instinct—this:
http://news.bbc.co.uk/1/hi/england/west_yorkshire/3189941.stm
Says that scientists observed some of them using tools, and that definitely seems like people though.
Yes.
Shared attention, recognition, prediction, bonding -
Frequently. It’s scary. But if I were in a body in which intelligence was not easy to express, and I was killed by someone who didn’t think I was sufficiently functional to be a person, that would be a tragic accident, not a moral wrong.
The legal definition of an accident is an unforeseeable event. I don’t agree with that entirely because, well, everything’s foreseeable to an arbitrary degree of probability given the right assumptions. However, do you think that people have a duty to avoid accidents that they foresee a high probability-adjusted harm from? (i.e. the potential harm modified by the probability they foresee of the event.)
The thought here being that, if there’s much room for doubt, there’s so much suffering involved in killing and eating animals that we shouldn’t do it even if we only argue ourselves to some low probability of their being people.
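In other words, something like expected harm: the foreseen probability times the magnitude of the harm, compared against whatever threshold triggers the duty. A trivial sketch with invented numbers:

```python
def probability_adjusted_harm(p_event, harm_if_it_happens):
    # Expected harm: foreseen probability times the harm's magnitude.
    return p_event * harm_if_it_happens

# Invented numbers: a 1% chance of a harm rated at 1000 "units" weighs
# the same here as a 50% chance of a 20-unit harm.
print(probability_adjusted_harm(0.01, 1000))  # 10.0
print(probability_adjusted_harm(0.5, 20))     # 10.0
```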
About age four, possibly a year or two earlier. I’m reasonably confident I had introspection at age four; I don’t think I did much before that. I find myself completely unable to empathize with a ‘me’ lacking introspection.
Do you think that the use of language and play to portray and discuss fantasy worlds is a sign of introspection?
OK. So the point of this analogy is that newborns seem a lot like the script described, on the compilation step. Yes, they’re going to develop advanced, functioning behaviors eventually, but no, they don’t have them yet. They’re just developing the infrastructure which will eventually support those behaviors.
I agree, if it doesn’t have the capabilities that will make it a person there’s no harm in stopping it before it gets there. If you prevent an egg and a sperm combining and implanting, you haven’t killed a human.
I know the question I actually want to ask: do you think behaviors are immoral if and only if they’re maladaptive?
No, fitness is too complex a phenomenon for our relatively inefficient ways of thinking and feeling to update on it very well. If we fix immediate lethal response from the majority as one end of the moral spectrum, and enthusiastic endorsement as the other, then maladaptive behaviour tends to move you further towards the lethal-response end of things. But we’re not rational fitness maximisers; we just tend that way on the more readily apparent issues.
Am I the only one who bit the speciesist bullet? It doesn’t matter if a pig is smarter than a baby. It wouldn’t matter if a pig passed the Turing test. Babies are humans, so they get preferential treatment.
do you get less and less preferential treatment as you become less and less human?
I’d say so, yeah. It’s kind of a tricky function, though, since there are two reasons I’m logically willing to give preferential treatment to an organism: the likelihood of said organism eventually becoming the ancestor of a creature similar to myself, and the likelihood of that creature or its descendants contributing to an environment in which creatures similar to myself would thrive.
Anyway, “species” isn’t a hard-edged category built in to nature—do you get less and less preferential treatment as you become less and less human?
It’s a lot more hard-edged than intelligence. Of all the animals (I’m talking about individual animals, not species) in the world, practically all are really close to 0% or 100% human. On the other hand, there is a broad range of intelligence among animals, and even in humans. So if you want a standard that draws a clean line, humanity is better than intelligence.
Also, what’s the standard against which beings are compared to determine how “human” they are? Phenotypically average among the current population? Nasty prospects for the cryonics advocates among us. And the mind-uploading camp.
I can tell the difference between an uploaded/frozen human and a pig. Even an uploaded/frozen pig. Transhumans are in the preferential treatment category, but transpigs aren’t.
Also veers dangerously close to negative eugenics, if you’re going to start declaring some people are less human than others.
This is a fully general counter-argument. Any standard of moral worth will have certain objects that meet the standard and certain objects that fail. If you say “All objects that have X property have moral worth”, I can immediately accuse you of eugenics against objects that do not have X property.
And a question for you: if you think that more intelligence equals more moral worth, does that mean that AI superintelligences have super moral worth? If Clippy existed, would you try to maximize the number of paperclips in order to satisfy the wants of a superior intelligence?
I really like your point about the distinction between maladaptive behavior and immoral behavior. But I don’t think your example about women in higher education is as cut and dried as you present it.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptiveness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don’t exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).
At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn’t an Evolution Fairy who commanded that humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or the desire for sex). More on this point here—the source of my reference to godshatter.
It might be that there is an optimal way of bringing these various drives into balance, and the correct choices to all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.
That said, my summary of the position may be a bit thin, because I’m a moral anti-realist and don’t believe the evo. psych → morality story.
Ah, I see what you mean. I don’t think one has to believe in objective morality as such to agree that “morality is the godshatter of evolution”. Moreover, I think it’s pretty key to the “godshatter” notion that our values have diverged from evolution’s “value”, and we now value things “for their own sake” rather than for their benefit to fitness. As such, I would say that the “godshatter” notion opposes the idea that “maladaptive is practically the definition of immoral”, even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.
Consider this set: A sleeping man. A cryonics patient. A nonverbal 3-year-old. A drunk, passed out.
I think these are all people, they’re pretty close to babies, and we shouldn’t kill any of them.
The reason they all feel like babies to me, from the perspective of “are they people?”, is that they’re in a condition where we can see a reasonable path for turning them into something that is unquestionably a person.
EDIT: That doesn’t mean we have to pay any cost to follow that path—the value we assign to a person’s life can be high but must be finite, and sometimes the correct, moral decision is to not pay that price. But just because we don’t pay that cost doesn’t mean it’s not a person.
I don’t think the time frame matters, either. If I found Fry from Futurama in the cryostasis tube today, and I killed him because I hated him, that would be murder even though he isn’t going to talk, learn, or have self-awareness until the year 3000.
Gametes are not people, even though we know how to make people from them. I don’t know why they don’t count.
EDIT: oh shit, better explain myself about that last one. What I mean is that it is not possible to murder a gamete—they don’t have the moral weight of personhood. You can, potentially, in some situations, murder a baby (and even a fetus): that is possible to do, because they count as people.
I’ve never seen a compiling AI, let alone an interrupted one, even in fiction, so your example isn’t very available to me. I can imagine conditions that would make it OK or not OK to cancel the compilation process.
This is most interesting to me:
From these examples, I think “will become a person” is only significant for objects which were people in the past
I know we’re talking about intuitions, but this is one description that can’t jump from the map into the territory. We know that the past is completely screened off by the present, so our decisions, including moral decisions, can’t ultimately depend on it. Ultimately, there has to be something about the present or future states of these humans that makes it OK to kill the baby but not the guy in the coma. Could you take another shot at the distinction between them?
This question is fraught with politics and other highly sensitive topics, so I’ll try to avoid getting too specific, but it seems to me that thinking of this sort of thing purely in terms of a potentiality relation rather misses the point. A self-extracting binary, a .torrent file, a million lines of uncompiled source code, and a design document are all, in different ways, potential programs, but they differ from each other both in degree and in type of potentiality. Whether you’d call one a program in any given context depends on what you’re planning to do with it.
Gametes are not people, even though we know how to make people from them.
I’m not at all sure a randomly selected human gamete is less likely to become a person than a randomly selected cryonics patient (at least, with currently-existing technology).
Might be better to talk about this in terms of conversion cost rather than probability. To turn a gamete into a person you need another gamete, $X worth of miscellaneous raw materials (including, but certainly not limited to, food), and a healthy female of childbearing age. She’s effectively removed from the workforce for a predictable period of time, reducing her probable lifetime earning potential by $Y, and has some chance of various medical complications, which can be mitigated by modern treatments costing $Z but even then works out to some number of QALYs in reduced life expectancy. Finally, there’s some chance of the process failing and producing an undersized corpse, or a living creature which does not adequately fulfill the definition of “person.”
In short, a gamete isn’t a person for the same reason a work order and a handful of plastic pellets aren’t a street-legal automobile.
What’s the cutoff probability?
You are right; retracted.
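Tangentially, the conversion-cost framing above is easy to make concrete. A toy tally in which every figure is a made-up placeholder for the unspecified $X, $Y, $Z and QALY terms:

```python
# Every figure below is a made-up placeholder for the unspecified terms above.
raw_materials = 15_000      # "$X": food and miscellaneous raw materials
lost_earnings = 40_000      # "$Y": reduced lifetime earning potential
medical_costs = 10_000      # "$Z": mitigating complications
qaly_loss = 0.1             # reduced life expectancy, in QALYs
value_per_qaly = 100_000    # a conventional-ish willingness-to-pay figure
p_failure = 0.05            # chance the process fails to yield a person

expected_cost = (raw_materials + lost_earnings + medical_costs
                 + qaly_loss * value_per_qaly) / (1 - p_failure)
print(f"rough conversion cost per person: ${expected_cost:,.0f}")  # ~$78,947
```

The point isn’t the total; it’s that the gamete-to-person conversion carries a large, real price, which the work-order analogy is getting at.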
Figuring out how to define human (as in “don’t kill humans”) so as to include babies is relatively easy, since babies are extremely likely to grow up into humans.
The hard question is deciding which transhumans—including types not yet invented, possibly types not yet thought of, and certainly types which are only imagined in a sketchy abstract way—can reasonably be considered as entities which shouldn’t be killed.