So you would determine personhood based on ‘rich conscious experience’, which appears to be related to ‘rich qualia’, compassion, and personal identity.
But these are only some of the qualities? Which of these are necessary and/or sufficient?
For example, if you absolutely had to choose between the lives of two beings, one who had zero compassion but full ‘qualia’, and the other the converse, who would you pick?
Compassion in humans is based on empathy, which has specific genetic components that are neurotypical but not strict human universals. For example, from Wikipedia:
“Research suggests that 85% of ASD (autistic-spectrum disorder) individuals have alexithymia,[52] which involves not just the inability to verbally express emotions, but specifically the inability to identify emotional states in self or other”
Not all humans have the same emotional circuitry, and the specific circuitry involved in empathy and shared/projected emotions is neurotypical but not universal. Lacking empathy, compassion is possible only in an abstract sense. An AI lacking emotional circuitry would be equally able to understand compassion and undertake altruistic behavior, but that is different from directly experiencing empathy at the deep level—what you may call ‘qualia’.
Likewise, from what I’ve read, depending on the definition, qualia are either phlogiston or latent, subverbal, and largely subconscious associative connections between and underlying all of immediate experience. They are a necessary artifact of deep connectionist networks, and our AGIs are likely to share them. (for example, the experience of red wavelength light has a complex subconscious associative trace that is distinctly different from that of blue wavelength light—and this is completely independent of whatever neural/audio code is associated with that wavelength of light, such as “red” or “blue”.) But I don’t see them as especially important.
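To make that deflationary reading concrete, here is a tiny toy sketch of my own (nothing from the thread; the stimulus encodings, the 16-unit hidden layer, and the random weights are all invented) of how a distributed associative trace for red versus blue can differ across many hidden units while the verbal code attached to each stimulus is just one symbol:

    # Toy sketch only: a fixed random projection stands in for learned
    # associative structure in a connectionist network.
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical stimulus encodings and the verbal codes attached to them.
    stimuli = {"red": np.array([1.0, 0.0]),    # ~650 nm light
               "blue": np.array([0.0, 1.0])}   # ~450 nm light

    W = rng.normal(size=(16, 2))   # stimulus -> 16 hidden units

    def associative_trace(x):
        """Distributed hidden activation evoked by the stimulus."""
        return np.tanh(W @ x)

    for word, x in stimuli.items():
        trace = associative_trace(x)
        print(f"verbal code {word!r}: first 6 of 16 trace units = {np.round(trace[:6], 2)}")

The verbal code is one discrete token, while the trace is a 16-dimensional pattern that differs between red and blue across many units; that is the (deflationary) sense in which the experience of red is richer than, and distinct from, the word “red”.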
Personal Identity is important, but any AGI of interest is necessarily going to have that by default.
But these are only some of the qualities? Which of these are necessary and/or sufficient?
I don’t know in detail or with certainty. These are probably not all-inclusive. Or it might all come down to qualia.
For example, if you absolutely had to choose between the lives of two beings, one who had zero compassion but full ‘qualia’, and the other the converse, who would you pick?
If Omega told me only those things? I’d probably save the being with compassion, but that’s a pragmatic concern about what the compassionless one might do, and a very low-information guess at that. If I knew that no other net harm would come from my choice, I’d probably save the one with qualia. (and there I’m assuming it has a positive experience)
I’d be fine with an AI that didn’t have direct empathic experience but reliably did good things.
I don’t see how “complex subconscious associative trace” explains what I experience when I see red.
But I also think it possible that human qualia are as varied as just about everything else, and there are p-zombies going through life occasionally wondering what the hell is wrong with these delusional people who are actually just qualia-rich. They could also vary individually by specific senses.
So I’m very hesitant to say that p-zombies are nonpersons, because it seems like with a little more knowledge, it would be an easy excuse to kill or enslave a subset of humans, because “They don’t really feel anything.”
I might need to clarify my thinking on personal identity, because I’m pretty sure I’d try to avoid it in FAI. (and it too is probably twisty)
A simplification of personhood I thought of this morning: If you knew more about the entity, would you value them the way you value a friend? Right now language is a big part of getting to know people, but in principle examining their brain directly gives you all the relevant info.
This can be made more objective by looking across the values of all humanity, which will hopefully cover people I would find annoying but who still deserve to live. (and you could lower the bar from ‘befriend’ to ‘not kill’)
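As a rough, purely illustrative formalization of this heuristic (my own sketch; the judge pool, scoring scale, and threshold are all invented, and “full information about the entity” is of course doing a lot of work), the test could be written as an aggregate over human evaluators:

    from typing import Callable, Iterable

    # Each judge maps full information about an entity to a valuation score;
    # by assumption here, 1.0 ~ "would not kill" and 2.0 ~ "would befriend".
    Judge = Callable[[object], float]

    def is_person(entity_full_info: object,
                  judges: Iterable[Judge],
                  bar: float = 1.0,
                  threshold: float = 0.5) -> bool:
        """Person iff enough of humanity, knowing everything relevant,
        would value the entity at or above the chosen bar."""
        votes = [judge(entity_full_info) >= bar for judge in judges]
        return sum(votes) / len(votes) >= threshold

Lowering bar from the ‘befriend’ level to the ‘not kill’ level widens the class, exactly as suggested above.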
I don’t see how “complex subconscious associative trace” explains what I experience when I see red.
But do you accept that “what you experience when you see red” has a cogent physical explanation?
If you do, then you can objectively understand “what you experience when you see red” by studying computational neuroscience.
My explanation involving “complex subconscious associative traces” is just a label for my current understanding. My main point was that whenever you self-reflect and think about your own cognitive process underlying experience X, it will always necessarily differ from any symbolic/linguistic version of X.
This doesn’t make qualia magical or even all that important.
To the extent that qualia are real, even ants have them to some degree.
I might need to clarify my thinking on personal identity
Based on my current understanding of personal identity, I suspect that it’s impossible in principle to create an interesting AGI that doesn’t have personal identity.
But do you accept that “what you experience when you see red” has a cogent physical explanation?
Yes, so much so that I think
whenever you self-reflect and think about your own cognitive process underlying experience X, it will always necessarily differ from any symbolic/linguistic version of X.
Might be wrong: it might be the case that thinking precisely about a process that generates a quale would let one know exactly what that quale ‘felt like’. This would be interesting, to say the least, even if my brain is only big enough to think precisely about ant qualia.
This doesn’t make qualia magical or even all that important.
The fact that something is a physical process doesn’t mean it’s not important. The fact that I don’t know the process makes it hard for me to decide how important it is.
The link lost me at “The fact is that the human mind (and really any functional mind) has a strong sense of self-identity simply because it has obvious evolutionary value.” because I’m talking about non-evolved minds.
Consider two different records: One is a memory you have that commonly guides your life. Another is the last log file you deleted. They might both be many megabytes detailing the history of an entity, but the latter one just doesn’t matter anymore.
So I guess I’d want to create FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
Or at least not valuable in a way other than being event logs from the saving of humanity.
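One loose way to picture that design goal (again my own sketch, with invented names, not a real FAI design): keep every experience in an append-only, reproducible event log, and make all decision-relevant state a pure function of fixed values plus that log, so nothing about the agent ever becomes precious or irreproducible in the way a personal memory is:

    from dataclasses import dataclass, field

    def derive_state(values, log):
        # Trivial stand-in: any pure, recomputable function of (values, log).
        return {"values": values, "events_seen": len(log)}

    @dataclass
    class LogOnlyAgent:
        values: tuple                                   # fixed goal content
        event_log: list = field(default_factory=list)   # disposable history

        def observe(self, event):
            self.event_log.append(event)   # recorded, never folded into identity

        def policy_state(self):
            # Decision-relevant state is reproducible from (values, log), so the
            # log can be archived or discarded like any other log file without
            # destroying anything unique about the agent.
            return derive_state(self.values, self.event_log)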
This is the longest reply/counter-reply set of postings I’ve ever seen, with very few (fewer than 5?) branches. I had to click ‘continue reading’ 4 or 5 times to get to this post. Wow.
My suggestion is to take it to email or instant messaging way before reaching this point.
While I was doing it, I told myself I’d come back later and add edits with links to the points in the sequences that cover what I’m talking about. If I did that, would it be worth it?
This was partly a self-test to see if I could support my conclusions with my own current mind, or if I was just repeating past conclusions.
So I guess I’d want to create FAI that never integrates any of its experiences into itself in a way that we (or it) would find precious, or unique and meaningfully irreproducible.
It’s only a concern about initial implementation. Once things get rolling, FAI is just another pattern in the world, so it optimizes itself according to the same criteria as everything else.
If I did that, would it be worth it?
Doubtful, unless it’s useful to you for future reference.