I think Eliezer’s belief (which feels plausible although I’m certainly still confused about it), is that qualia comes about when you have an algorithm that models itself modeling itself (or, something in that space).
I think this does imply that there are limits on what you can have an intelligent system do without it having qualia, but it seems like there's a lot you could have it do if you're careful about how you break it into subsystems. I think there's also plausibly some control over what sorts of qualia it has, and at the very least you can probably design it to avoid it experiencing suffering in morally reprehensible ways.
I think my argument was misunderstood, so I’ll unpack.
There are two claims here, and both are problematic:
1.) 'qualia' comes about from some brain feature (i.e. a level of self-modeling recursion)
2.) only thinking systems with this special 'qualia' deserve personhood
Either A.) the self-modeling recursion thing is actually a necessary/useful component of, or unavoidable side effect of, intelligence, or B.) some humans probably don't have it: because if A.) is false, then it is quite unlikely that evolution would conserve the feature uniformly. Thus claim 2 is problematic, as it would imply that not all humans have the 'qualia'.
If this 'qualia' isn't an important feature or necessary side effect, then in the future we could build AGI in sims indistinguishable from ourselves but lacking 'qualia', and nobody would notice the lack. Thus it is either an important feature or necessary side effect, or we have P-zombies (i.e. belief in this kind of qualia is equivalent to accepting P-zombies).
“only thinking systems with this special ‘qualia’ deserve personhood”
I'm not sure if this is cruxy for your point, but the word "deserves" here has a different type signature from the argument I'm making. "Only thinking systems with qualia are capable of suffering" is a gearsy, mechanistic statement (which you might combine with a moral/value statement like "creating things that can suffer, and that have to do what you say, is bad"). The way you phrased it skipped over some steps that seemed potentially important.
I think I disagree with your framing on a couple levels:
I think it is plausible that some humans lack qualia. We might still offer "moral personhood status" to all humans, because running qualia-checks isn't practical, and because it's useful for Cooperation Morality (rather than Care Morality) to treat all humans, perhaps even all existing cooperate-able beings, as moral persons. i.e. there's more than one reason to grant someone moral personhood.
It's also plausible to me that evolution does select for the same set of qualia features across humans, but that building an AGI gives you a level of control and carefulness that evolution didn't have.
I’m not 100% sure I get what claim you’re making though or exactly what the argument is about. But I think I’d separately be willing to bite multiple bullets you seem to be pointing at.
(Based on things like 'not all humans have visual imagination', I think humans probably do in fact vary in the quantity/quality of their qualia, and people might also vary over time in how they experience qualia — i.e. you might not have it when you're not actively paying attention. It still seems useful to ascribe something personhood-like to people. I agree this has some implications many people would find upsetting.)
i.e. there’s more than one reason to give someone moral personhood
Sure, but at that point you are eroding the desired moral distinction. In the original post, moral personhood status was determined solely by 'qualia'.
Brain-inspired AGI is near, and if you ask such an entity about its 'qualia', it will give responses indistinguishable from a human's. And if you inspect its artificial neurons, you'll see the same familiar, functionally equivalent patterns of activity as in biological neurons.