Someone flagged this comment for 101-contentness.
For other people’s reference on moderator-habits: I’m somewhat confused about how to relate to this comment, but my take was “disagree-vote, feel fairly fine about it getting heavily downvoted, but don’t think it’d be appropriate to moderate it away on 101-content grounds.” On one hand, as far as “not a 101 space” concerns go, my guess is Nate isn’t modeling S-risk and the magnitude of how bad it might be to simulate conscious minds at scale. But… it also sounds like he pretty straightforwardly wouldn’t care.
I disagree, but I don’t think LW should be moderating people based on moral beliefs.
I do think there is something… annoyingly low-key-aggro? about how Nate phrases the disagreement, and if that were a long-term pattern I’d probably issue some kind of warning about it and maybe a rate limit. (I guess maybe this is that warning.)
I feel like the comment was slightly off-topic for this post. I didn’t downvote it, but didn’t upvote it either. I don’t even disagree with the object-level claim that we “should [not] grant moral weight (let alone rights) to something completely inhuman merely based on it being conscious.” I just don’t think the tone is helping.
I’ll point out that the comment may not necessarily be 101-material, given that this subject was treated somewhat recently. That said, that piece talks primarily about non-human animals, and the commenter may have been talking about very significantly dissimilar non-human minds.
Oh, to be clear: I think “conscious beings should have moral consideration” has been extensively treated on LessWrong; it’s just not something you can ultimately ground out as “someone has obviously ‘won’ the argument.”