Yeah, a lack of reply notification’s a real pain in the rear.
It seems to me that this thread of the debate has come down to “Should we consider babies to be people?” There are, broadly, two ways of settling this question: moving up the ladder of abstraction, or moving down. That is, we can answer this by attempting to define ‘people’ in terms of other, broader terms (this being the former case) or by defining ‘people’ via the listing of examples of things which we all agree are or are not people and then trying to decide by inspection in which category ‘babies’ belong.
Edit: You can skip to the next break line if you’re not interested in reading about the methodological component so much as you are continuing the infants argument.
What we’re doing here, ideally, is pattern matching. I present you with a pattern, and part of that pattern is what I’m talking about. I present you with another pattern where some things have changed, and the parts of the pattern I want to talk about are the same in that one. And I suppose, to be strict, we’d also have to present you with patterns that are fairly similar but don’t match, and express disapproval of those.
Because we have a large set of existing patterns that we both know about—properties—it’s a lot quicker to make reference to some of those patterns than it is to continue to flesh out our lists to play guess-the-commonality. We can still do it both ways, as long as we can still head back down the ladder of abstraction fairly quickly. Compressing the search space by abstract reference to elements of patterns that members of the set share is not the same thing as starting off with a word alone, then trying to decide on the pattern, and then fitting the members to that set.
If you cannot do that exercise, if you cannot explicitly declare at least some of the commonalities you’re talking about, then it leads me to believe that your definition is incoherent. The odds that, with our vast set of shared patterns and a language that allows us to do this compression, you can’t come up with at least a fairly rough definition fairly quickly seem remote.
If I wanted to define humans, for instance—“Most numerous group of bipedal tool users on Earth.” That was a lot quicker than having to define humans by providing examples of different creatures. We can only think the way we do because we have these little compression tricks that let us leap around the search space; abstraction doesn’t have to lead to more confusion, as long as your terms refer to things that people have experience with.
Whereas if I provided you a selection of human genetic structures—while my terms would refer exactly, while I’d even be able to stick you in front of a machine and point to it directly—would you even recognise it without going to a computer? I wouldn’t. The reference falls beyond the level of my experience.
I don’t see why you think my definition needs to be complete. We have very few exact definitions for anything; I couldn’t exactly define what I mean by human. Even by reference to genetic structure I’ve no idea where it would make sense to set the deviation from any specific example that makes you human or not human.
But let’s go with your approach:
It seems to me that mentally disabled people belong on the people list. And babies seem more similar to mentally disabled people than they do to pigs and stones.
This is entirely orthogonal to the point I was trying to make. Keep in mind, most societies invented misogyny pretty quickly too. Rather, I doubt that you personally, raised in a society much like this one except without the taboo on killing infants, would have come to the conclusion that killing infants is a moral wrong.
Well, no, but you could make that argument about anything. I, raised in a society just like this one but without taboo X, would never create taboo X on my own; taboos are created by their effects on society. It’s the fact that society would not have been like this one without taboo X that makes it a taboo in the first place.
I can come up with a rough definition, but rough definitions fail in exactly those cases where there is potential disagreement.
Eh, functioning is a very rough definition and we’ve got to that pretty quickly.
Shall we say, rather, that we include mentally disabled humans above a certain level of functioning? The problem then is that babies almost certainly fall well below that threshold, wherever you might set it.
Well, the question is whether food animals fall beneath the level of babies. If they do, then I can keep eating them happily enough; if they don’t, I’ve got the dilemma as to whether to stop eating animals or start eating babies.
And it’s not clear to me, without knowing what you mean by functioning, that pigs or cows are more intelligent than babies. I’ve not seen one do anything that suggests it. Predatory animals—wolves and the like, on the other tentacle—are obviously more intelligent than a baby.
As to how I’d resolve the dilemma if it did occur, I’m leaning more towards stopping eating food animals than starting to eat babies. Despite the fact that food animals are really tasty, I don’t want to put a precedent in place that might get me eaten at some point.
I assume you’ve granted that sufficiently advanced AIs ought to be counted as people.
By fiat—sufficiently advanced for what? But I suppose I’ll grant any AI that can pass the Turing test qualifies, yes.
Am I killing a person if I terminate this script before compilation completes? That is, does “software which will compile and run an AI” belong to the “people” or the “not people” group?
That depends on the nature of the script. If it’s just performing some relatively simple task over and over, then I’m inclined to agree that it belongs in the not people group. If it is itself as smart as, say, a wolf, then I’m inclined to think it belongs in the people group.
Really? It seems to me that someone did invent the taboo on, say, slavery.
I suppose what I really mean to say is that they’re taboos because the taboo has some desirable effect on society.
The point I’m trying to make here is that if you started with your current set of rules minus the rule about “don’t rape people” (not to say your hypothetical morals view it as acceptable, merely undecided), I think you could quite naturally come to conclude that rape was wrong. But it seems to me that this would not be the case if instead you left out the rule about “don’t kill babies”.
It seems to me that babies are quite valuable, and became so as their survival probability went up. In the olden days infanticide was relatively common—as was death in childbirth. People had a far more casual attitude towards the whole thing.
But as the survival probability went up the investment people made, and were expected to make, in individual children went up—and when that happened infanticide became a sign of maladaptive behaviour.
Though I doubt they’d have put it in these terms: People recognised a poor gambling strategy and wondered what was wrong with the person.
And I think it would be the same in any advanced society.
Regardless, I have no doubt that pigs are closer to functioning adult humans than babies are. You’d best give up pork.
I suppose I had, yes. It never really occurred to me that they might be that intelligent—but, yeah, having done a bit of reading they seem smart enough that I probably oughtn’t to eat them.
I’d be interested in what standard of “functional” you might propose that newborns would meet, though. Perhaps give examples of things which seem close to the line, on either side? For example, do wolves seem to you like people? Should killing a wolf be considered a moral wrong on par with murder?
Wolves definitely seem like people to me, yes. Adult humans are definitely on the list, and wolves have pack behaviours which are very human-like. Killing a wolf for no good reason should be considered a moral wrong on par with murder. That’s not to say that I think it should result in legal punishment on par with killing a human, mind; it’s easier to work out that humans are people than it is to work out that wolves are—it’s a reasonable mistake.
Insects like wasps and flies don’t seem like people. Red pandas do. Dolphins do. Cows… don’t. But given what I’ve discovered about pigs that bears some checking—and now cows do. Hnn. Damn it, now I won’t be able to look at burgers without feeling sad.
All the videos with loads of blood and the like never bothered me, but learning that food-animals are that intelligent really does.
Have you imagined what life would be like if you were stupider, or were more intelligent but denied a body with which that intelligence was easy to express? If your personhood is fundamental to your identity, then as long as you can imagine being stupider and still being you, that imagined you still qualifies as a person. In terms of how old a person would have to be to have the sort of capabilities the person you’re imagining has, at what point does your ability to empathise with the imaginary you break down?
I have to ask, at this point: have you seriously considered the possibility that babies aren’t people?
As far as I know how, yes. If you’ve got some ways of thinking that we haven’t been talking about here, feel free to post them and I’ll do my best to run them.
If babies weren’t people, the world would be less horrifying. Just as, if food animals are people, the world is more horrifying. But it would look the same in terms of behaviours—people kill people all the time; I don’t expect them not to without other criteria being involved.
We are supposing that it’s still on the first step, compilation. However, with no interaction on our part, it’s going to finish compiling and begin running the sufficiently-advanced AI. Unless we interrupt it before compilation finishes, in which case it will not.
Not a person.
It is, for example, almost certainly maladaptive to allow all women to go into higher education and industry, because those correlate strongly with having fewer children and that causes serious problems. (Witness Japan circa now.) This is, as you put it, a poor gambling strategy. Does that imply it’s immoral for society to allow women to be educated? Do reasonable people look at people who support women’s rights and wonder what’s wrong with them? Of course not.
No, because we’ve had that discussion. But people did and that attitude towards women was especially prevalent in Japan, where it was among the most maladaptive for the contrary to hold, until quite recently. Back in the 70s and 80s the idea for women was basically to get a good education and marry the person their family picked for them. Even today people who say they don’t want children or a relationship are looked on as rather weird and much of the power there, in practice, works in terms of family relationships.
It just so happens there are lots of adaptive reasons to have precedents that seem to extend to cover women too. I don’t think one can seriously put forward an argument that keeps women at home without creating something that can be used against oneself in fairly horrifying ways. Even if you don’t have a fairly inclusive definition of people, it seems unwise to treat other humans in that way—you, after all, are the other human to another human.
What about fish? I’m pretty sure many fish are significantly more functional than one-month-old humans, possibly up to two or three months. (Younger than that I don’t think babies exhibit the ability to anticipate things. Haven’t actually looked this up anywhere reputable, though.)
I don’t know enough about them—given they’re so different to us in terms of gross biology, I imagine it’s often going to be quite difficult to distinguish between functioning and instinct—but this:
http://news.bbc.co.uk/1/hi/england/west_yorkshire/3189941.stm
says that scientists observed some of them using tools, and that definitely seems like people, though.
Frequently. It’s scary. But if I were in a body in which intelligence was not easy to express, and I was killed by someone who didn’t think I was sufficiently functional to be a person, that would be a tragic accident, not a moral wrong.
The legal definition of an accident is an unforeseeable event. I don’t agree with that entirely because, well, everything’s foreseeable to an arbitrary degree of probability given the right assumptions. However, do you think that people have a duty to avoid accidents from which they foresee a high probability-adjusted harm? (I.e. the potential harm modified by the probability they foresee of the event.)
The thought here being that, if there’s much room for doubt, there’s so much suffering involved in killing and eating animals that we shouldn’t do it even if we only argue ourselves to some low probability of their being people.
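The “probability-adjusted harm” idea above can be put in concrete terms with a small sketch. This is only a toy illustration: the `expected_harm` helper, the numbers, and the threshold are my own assumptions, not anything stated in the thread.

```python
def expected_harm(probability: float, harm: float) -> float:
    """Probability-adjusted harm: the potential harm of an event,
    scaled by the probability the actor foresees for it."""
    return probability * harm

# Toy numbers (all assumed): even a low credence that food animals
# are people, multiplied by the large harm of killing a person, can
# exceed the threshold at which a duty to avoid the act kicks in.
p_animals_are_people = 0.05       # assumed low credence
harm_of_killing_a_person = 100.0  # arbitrary units
duty_threshold = 1.0              # arbitrary units

print(expected_harm(p_animals_are_people, harm_of_killing_a_person))
print(expected_harm(p_animals_are_people, harm_of_killing_a_person) > duty_threshold)
```

The point of the sketch is just that a low probability does not automatically make the expected harm negligible; it depends on the size of the harm.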
About age four, possibly a year or two earlier. I’m reasonably confident I had introspection at age four; I don’t think I did much before that. I find myself completely unable to empathize with a ‘me’ lacking introspection.
Do you think that the use of language and play to portray and discuss fantasy worlds is a sign of introspection?
Yes.
Shared attention, recognition, prediction, bonding -
OK. So the point of this analogy is that newborns seem a lot like the script described, on the compilation step. Yes, they’re going to develop advanced, functioning behaviors eventually, but no, they don’t have them yet. They’re just developing the infrastructure which will eventually support those behaviors.
I agree, if it doesn’t have the capabilities that will make it a person there’s no harm in stopping it before it gets there. If you prevent an egg and a sperm combining and implanting, you haven’t killed a human.
I know the question I actually want to ask: do you think behaviors are immoral if and only if they’re maladaptive?
No; fitness is too complex a phenomenon for our relatively inefficient ways of thinking and feeling to update on it very well. If we fix immediate lethal response from the majority as one end of the moral spectrum, and enthusiastic endorsement as the other, then maladaptive behaviour tends to move you further towards the lethal-response end of things. But we’re not rational fitness maximisers; we just tend that way on the more readily apparent issues.
Am I the only one who bit the speciesist bullet?
It doesn’t matter if a pig is smarter than a baby. It wouldn’t matter if a pig passed the Turing test. Babies are humans, so they get preferential treatment.
Do you get less and less preferential treatment as you become less and less human?
I’d say so, yeah. It’s kind of a tricky function, though, since there are two reasons I’m logically willing to give preferential treatment to an organism: the likelihood of said organism eventually becoming the ancestor of a creature similar to myself, and the likelihood of that creature or its descendants contributing to an environment in which creatures similar to myself would thrive.
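A minimal sketch of the “tricky function” described above, under the assumption (mine, not the commenter’s) that the two likelihoods are simply weighted and summed; the function name, weights, and linear form are all hypothetical:

```python
def preferential_treatment(p_ancestor: float, p_environment: float,
                           w_ancestor: float = 0.5,
                           w_environment: float = 0.5) -> float:
    """Toy score for how much preferential treatment an organism gets.

    p_ancestor:    likelihood the organism eventually becomes the ancestor
                   of a creature similar to the evaluator.
    p_environment: likelihood the organism or its descendants contribute to
                   an environment in which such creatures thrive.
    The weights, and the choice of a linear combination, are assumptions.
    """
    return w_ancestor * p_ancestor + w_environment * p_environment

# An adult human scores near the top on both inputs; something that will
# never be an ancestor and only mildly helps the environment scores lower.
print(preferential_treatment(1.0, 1.0))
print(preferential_treatment(0.0, 0.3))
```

Any monotone combination of the two likelihoods would fit the description equally well; the linear form is just the simplest to write down.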
Anyway, “species” isn’t a hard-edged category built into nature—do you get less and less preferential treatment as you become less and less human?
It’s a lot more hard-edged than intelligence. Of all the animals (I’m talking about individual animals, not species) in the world, practically all are really close to 0% or 100% human. On the other hand, there is a broad range of intelligence among animals, and even in humans. So if you want a standard that draws a clean line, humanity is better than intelligence.
Also, what’s the standard against which beings are compared to determine how “human” they are? Phenotypically average among the current population? Nasty prospects for the cryonics advocates among us. And the mind-uploading camp.
I can tell the difference between an uploaded/frozen human and a pig. Even an uploaded/frozen pig. Transhumans are in the preferential-treatment category, but transpigs aren’t.
Also veers dangerously close to negative eugenics, if you’re going to start declaring some people are less human than others.
This is a fully general counter-argument. Any standard of moral worth will have certain objects that meet the standard and certain objects that fail. If you say “All objects that have X property have moral worth”, I can immediately accuse you of eugenics against objects that do not have X property.
And a question for you: if you think that more intelligence equals more moral worth, does that mean that AI superintelligences have super moral worth? If Clippy existed, would you try to maximize the number of paperclips in order to satisfy the wants of a superior intelligence?
I really like your point about the distinction between maladaptive behavior and immoral behavior. But I don’t think your example about women in higher education is as cut and dried as you present it.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral. For me, maladaptiveness is the explanation for why certain possible moral memes (insert society-wide incest-marriage example) don’t exist in recorded history, even though I should otherwise expect them to exist given my belief in moral anti-realism.
For those who think that morality is the godshatter of evolution, maladaptive is practically the definition of immoral.
Disagree? What do you mean by this?
Edit: If I believe that morality, either descriptively or prescriptively, consists of the values imparted to humans by the evolutionary process, I have no need to adhere to the process roughly used to select these values rather than the values themselves when they are maladaptive.
If one is committed to a theory that says morality is objective (aka moral realism), one needs to point at what it is that makes morality objectively true. Obvious candidates include God and the laws of physics. But those two candidates have been disproved by empiricism (aka the scientific method).
At this point, some detritus of evolution starts to look like a good candidate for the source of morality. There isn’t an Evolution Fairy who commanded that humans evolve to be moral, but evolution has created drives and preferences within us all (like hunger or the desire for sex). More on this point here—the source of my reference to godshatter.
It might be that there is an optimal way of bringing these various drives into balance, and the correct choices to all moral decisions can be derived from this optimal path. As far as I can tell, those who are trying to derive morality from evo. psych endorse this position.
In short, if morality is the product of human drives created by evolution, then behavior that is maladaptive (i.e. counter to what is selected for by evolution) is essentially correlated with immoral behavior.
That said, my summary of the position may be a bit thin, because I’m a moral anti-realist and don’t believe the evo. psych → morality story.
Ah, I see what you mean. I don’t think one has to believe in objective morality as such to agree that “morality is the godshatter of evolution”. Moreover, I think it’s pretty key to the “godshatter” notion that our values have diverged from evolution’s “value”, and we now value things “for their own sake” rather than for their benefit to fitness. As such, I would say that the “godshatter” notion opposes the idea that “maladaptive is practically the definition of immoral”, even if there is something of a correlation between evolutionarily-selectable adaptive ideas and morality.