If you destroy mankind you might someday encounter an alien super-intelligence that will lack trust in you in part because you destroyed mankind.
By the same argument, we might someday encounter an alien superintelligence that will lack trust in us in part because we domesticate animals (breeding them to not fear us) and then kill and eat them.
That’s a pretty decent argument for vegetarianism. One person’s reductio ad absurdum is another person’s modus ponens.
Careful: Some alien intelligence may also hate us for not killing enough animals. For example, how cruel must we be not to wipe out carnivores so that herbivores can live in peace?
Or, as I recently said in a different forum, they might think us evil for not exterminating all animal life we can find… The moral syllogism for that is quite simple.
Alien? Never mind alien. Your aliens are insufficiently alien.
I would make that exact argument. Sure, we need the biosphere for now, but let’s get rid of it as soon as possible.
Except, how could such a set of preferences have evolved? How would that behavior ever be adaptive?
Almost every human preference is adaptive in some ancestral context. We can at least assume that alien preferences are adaptive as well (given that the aliens arose via evolutionary processes rather than some other way).
Moral considerations need not be directly adaptive; you can probably get there from routes as simple as empathy + deductive reasoning. If humanity hasn’t come to that collective conclusion yet, despite having the hardware, I suspect it’s because such an omnicidal conclusion hasn’t been in any major group’s interests yet.
Being in a group’s interest == adaptive, no?
But you are right of course...vegetarianism is a good example of a conclusion reached via empathy + deductive reasoning which is in no way adaptive to the vegetarian (though you might argue that the vegetarian shares many alleles with the animal).
However: a maladaptive morality would never be hardwired into a species. A human might think and ponder, and eventually come to take a maladaptive moral stance...but not all humans would be inherently predisposed to that stance. If they were, natural selection would quickly remove it.
So some of our aliens might hate us for not killing animals...but it is very unlikely that this would be a universal moral among that alien species.
Well, I’d be inclined to agree that the prior probability of some civilization adopting this is low [1], but I can’t agree with what seems to be your implicit assumption that a non-predispositive attitude can’t be widespread, partly because group interests are defined much more broadly than adaptiveness.
[1] I’d probably extend that to anything other than “don’t lie or break your promises,” “play tit for tat,” “do what the ruling power says,” or “maximize utility,” and even those I wouldn’t say are anything like sure bets.
Hmm...actually, the implicit assumption I was making was that aliens would forgive another species for adopting norms that they considered non-predispositive.
A Western human would not forgive another culture for torturing sentient beings, for example, but would forgive another culture for polyamory/polygamy/polygyny. A human can distinguish between morality that is instinctive and morality that is culturally constructed, and the latter can be compromised in certain contexts.
But you are right, bad implicit assumption. Aliens might not make that distinction.
That’s me, the Plant Avenger! A steak every chance I get.
In fact, this behavior is so dreadful that the revenge-killing of humans would send a trust signal.
When I was a child, I refused to kill animals just for fun because I wouldn’t want a superhuman alien to kill me just for fun—and I mostly still do. (Of course I hadn’t heard of TDT as proposed by EY, but I had heard of the Golden Rule, which was close enough.)
Presumably, anything called a ‘superintelligence’ would recognize the enormous moral difference between killing a human being and killing domesticated animals.
Aside from the problem that higher intelligence doesn’t necessarily lead to convergent moral goals, in this context I’d hope that a superintelligence didn’t see it that way. Since the main argument for a difference in moral standing between humans and most animals rests on the difference in cognitive capacity, a superintelligence that took that argument seriously would by the same token be able to put its own preferences above humans’ and claim the moral high ground in the process.
I think it would be difficult to construct an ethical system where you give no consideration to cognitive capacity. Is there a practical reason for said superintelligence not to take humans’ cognitive capacity into account? Is there a logical reason?
Not to make light of a serious question, but, “Equal rights for bacteria!”? I think not.
Aside: I am puzzled as to the most likely reason Esar’s comment was downvoted. Was it perhaps considered insufficiently sophisticated, or implying that its poster was insufficiently well-read, for LW?
This is likely more a problem of insufficient imagination. For example, consider a system that takes seriously the idea of souls. One might very well decide that all that matters is whether an entity has a soul, completely separate from its apparent intelligence level. Similarly, a sufficiently racist individual might assign no moral weight to people of some specific racial group, regardless of their intelligence.
The comment was likely downvoted because these issues have been discussed here extensively, and there’s the additional problem that I pointed out that it wouldn’t even necessarily be in humanity’s best interest for the entity to have such an ethical system.
Right you are. I did not express myself well above. Let me try and restate, just for the record.
Assuming one does not assign equal rights to all autonomous agents (for instance, if we take the position that a human has more rights than a bacterium), then discriminating based on cognitive capacity (of the species, not the individual, and as one of many possible criteria) is not ipso facto wrong. It may be wrong some of the time, and it may be an approach employed by bigots, but it is not always wrong. This is my present opinion, you understand, not established fact.
Agreed. But this whole business of “we don’t want the superintelligence to burn us with its magnifying glass, so we in turn won’t burn ants with our magnifying glass” strikes me as rather intractable. Even though, of course, it’s essential work.
I would say a few more words, but I think it’s best to stop here. This subthread has cost me 66% of my Karma. :)