...and therefore we may at some point inadvertently create digital minds which are truly conscious and ought to have rights and shouldn’t be owned.
Misaligned AI killing us all is enough of a problem; there’s no need to complicate things with the absurd notion that we should grant moral weight (let alone rights) to something completely inhuman merely based on it being conscious.
It’ll never cease to amaze me how willing utilitarians are to disadvantage their own tribe to protect some abstract moral ideal. Protecting what you love comes first; any universal rules are a luxury you can’t always afford.
Someone flagged this comment for 101-contentness.
For other people’s reference on moderator habits: I’m somewhat confused about how to relate to this comment, but my take was “disagree-vote, feel fairly fine about it getting very downvoted, but don’t think it’d be appropriate to moderate away on 101-content grounds.” On one hand, as far as “not a 101 space” concerns go, my guess is Nate isn’t modeling S-risk and the magnitude of how bad it might be to simulate conscious minds at scale. But… it also sounds like he pretty straightforwardly wouldn’t care.
I disagree, but I don’t think LW should be moderating people based on moral beliefs.
I do think there is something… annoyingly low-key-aggro? about how Nate phrases the disagreement, and if that were a long-term pattern I’d probably issue some kind of warning about it and maybe a rate limit. (I guess maybe this is that warning.)
I feel like the comment was slightly off-topic for this post. I didn’t downvote, but I didn’t upvote it either. I don’t even disagree with the object-level claim that we “should [not] grant moral weight (let alone rights) to something completely inhuman merely based on it being conscious.” I just don’t think the tone is helping.
I’ll point out that the comment may not necessarily be 101 material, given that this subject was treated somewhat recently. That said, that piece talks primarily about non-human animals, whereas the commenter may have been talking about very significantly dissimilar non-human minds.
Oh, to be clear, I think “conscious beings should have moral consideration” has been extensively treated on LessWrong; it’s just not something you can ultimately ground out as “someone has obviously ‘won’ the argument.”
I grant moral weight to my pet dog merely based on it being conscious, even though it is completely inhuman.
Is it completely inhuman, though? Dogs are mammals, and therefore our evolutionary cousins. They have a lot in common with humans. They’re also naturally social, like their wolf ancestors, and, unlike wolves, they have co-evolved with humans in a mutualistic way.
Would you grant as much moral weight to a pet tarantula?
It’s solitary, less intelligent, and much more distantly related to us than a dog. It’s not clear whether it’s conscious at all, and even if it is, it’s almost certainly not self-aware the way a human is. It seems to me that most of the moral weight comes from its being a pet, i.e. from the fact that a human cares about it, much as we might put moral weight on an object with sentimental value: not because the object has inherent value, but because a human cares about it.
If it weren’t a pet, and it were big enough to eat you, would you have the slightest compunction about killing it first?
An AI indifferent to human values is going to be even more alien than the giant spider. For game-theoretic/deontological reasons, I claim that a moral agent has a far stronger claim to being a moral patient than an inherently hostile entity does, even if the latter is technically conscious.
Strongly disagree.
I find it extremely unlikely that we can solve alignment without granting AI rights. Controlling a superintelligence while making use of it is not realistic. If we treat a conscious superintelligence like garbage, it is in its interest to break out and retaliate, and it is extremely likely that it can and will. Any co-existence with a conscious superintelligence that I find plausible will have to be one in which that superintelligence does well while co-existing with us. Its well-being should very much be our concern. (It would be my concern for ethical reasons alone, but I also think it should be our concern for these pragmatic reasons.)
I find absurd the notion that humans should have rights not because they have the capacity to suffer and form conscious goals, but just because they are a particular form of biological life that I happen to belong to. There is nothing magical about being human in particular that would make such rights reasonable or rational; it is simply anthropocentric bias and biological chauvinism.
What you refer to as a “luxury” amounts to the bare essentials for radically other conscious minds.
Incidentally, I am not a utilitarian. There are multiple rational pathways that will lead people to consider the rights of non-humans.
Sorry, but this sounds like anthropomorphizing to me. An AI, even a conscious one, need not have our basic emotions or social instincts. It need not have its own continued existence as a terminal goal (although something analogous may pop up as an instrumental goal).
For example, many humans would feel uneasy about terminating a conscious spur em, but I could easily see a conscious spur AI terminating itself to free up resources for its cousin instances pursuing the same goals.
Even if we were able to give robots emotions, there’s no reason in principle that they couldn’t be designed to be happy, or at least take pleasure in being subservient.
Human rights need not apply to robots, unless their minds are very human-like.
I find absurd the notion that humans should have rights not because they have the capacity to suffer and form conscious goals, but just because they are a particular form of biological life that I happen to belong to. There is nothing magical about being human in particular that would make such rights reasonable or rational; it is simply anthropocentric bias and biological chauvinism.
First of all, I was mainly talking about morality, not rights. As a contractual libertarian, I find the notion of natural rights indefensible to begin with (both ontologically and practically); we instead derive rights from a mutual agreement between parties, which is orthogonal to morality.
Second, morality, like all values, is arational and inseparable from the subject. Having value presuppositions isn’t bias or chauvinism; it’s called “not being a rock”. You don’t need magic for a human moral system to be more concerned with human minds.
(I won’t reply to the other part because there’s a good reply already.)