Presumably, anything called a ‘superintelligence’ would recognize the enormous moral difference between killing a human being and killing domesticated animals.
Aside from the problem that higher intelligence doesn’t necessarily lead to convergent moral goals, in this context I’d hope that a superintelligence didn’t see it that way. Since the main argument for a difference in moral standing between humans and most animals rests on the difference in cognitive capacity, a superintelligence that took that argument seriously would by the same token be able to put its own preferences above humans’ and claim the moral high ground in the process.
I think it would be difficult to construct an ethical system where you give no consideration to cognitive capacity. Is there a practical reason for said superintelligence to not take into account humans’ cognitive capacity? Is there a logical reason for same?
Not to make light of a serious question, but, “Equal rights for bacteria!”? I think not.
Aside: I am puzzled as to why Esar’s comment was downvoted. Was it perhaps considered insufficiently sophisticated for LW, or taken to imply that its poster was insufficiently well-read?
I think it would be difficult to construct an ethical system where you give “no” consideration to cognitive capacity.
This is likely more a problem of insufficient imagination. For example, consider a system that takes seriously the idea of souls. One might very well decide that all that matters is whether an entity has a soul, completely separate from its apparent intelligence level. Similarly, a sufficiently racist individual might assign no moral weight to people of some specific racial group, regardless of their intelligence.
The comment was likely downvoted because these issues have been discussed here extensively, and there’s the additional problem I pointed out: it wouldn’t even necessarily be in humanity’s best interest for the entity to have such an ethical system.
For example, consider a system that takes seriously the idea of souls. One might very well decide that all that matters is whether an entity has a soul, completely separate from its apparent intelligence level. Similarly, a sufficiently racist individual might assign no moral weight to people of some specific racial group, regardless of their intelligence.
Right you are. I did not express myself well above. Let me try and restate, just for the record.
Assuming one does not assign equal rights to all autonomous agents (for instance, if we take the position that a human has more rights than a bacterium), then discriminating based on cognitive capacity (of the species, not the individual, and as only one of many possible criteria) is not ipso facto wrong. It may be wrong some of the time, and it may be an approach employed by bigots, but it is not always wrong. This is my present opinion, you understand, not established fact.
there’s the additional problem I pointed out: it wouldn’t even necessarily be in humanity’s best interest for the entity to have such an ethical system.
Agreed. But this whole business of “we don’t want the superintelligence to burn us with its magnifying glass, so we in turn won’t burn ants with our magnifying glass” strikes me as rather intractable. Even though, of course, it’s essential work.
I would say a few more words, but I think it’s best to stop here. This subthread has cost me 66% of my Karma. :)