(Albeit recent experience with trolls makes me think that no insight enabling conscious simulations should ever be published; people would write suffering conscious simulations and run them just to show off… how confident they were that the consciousness theory was wrong, or something. I have a newfound understanding of the utter… do-anything-ness of trolls. This potentially makes it hard to publicly check some parts of the reasoning behind a nonperson predicate.)
At least for now, it’d take a pretty determined troll to build an em for the sole purpose of being a terrible person. Not saying some humanity-first movement mightn’t pull it off, but by that point you could hopefully have legal recognition (assuming there’s no risk of accidental fooming and they pass the Turing test).
I don’t think we’re talking ems, we’re talking conscious algorithms which aren’t necessarily humanlike or even particularly intelligent.
And as for the Turing Test, one oughtn’t confuse consciousness with intelligence. A 6-year-old human child couldn’t pass as an adult human, but we still believe the child to be conscious, and my own memories indicate that I indeed was at that age.
Well, I think consciousness, intelligence and personhood are sliding scales anyway, so I may be imagining the output of a Nonperson Predicate somewhat differently from the LW norm. OTOH, I guess it’s not a priori impossible that a simple human-level AI could fit on something available to the public, and such an insight would be… risky, yeah. Upvoted.
First of all, I also believe that consciousness is most probably a sliding scale.
Secondly, again you just used “human-level” without specifying human-level at what, intelligence or consciousness; as such I’m not sure whether I actually communicated my point adequately, that we’re not discussing intelligence here, just consciousness.
Well, they do seem to be correlated in any case. However, I was referring to consciousness (whatever that is).