Basically, you’re saying, if I agree to something like:

“This LLM is sapient, its masks are sentient, and I care about it/them as minds/souls/marvels”, that is interesting, but any moral connotations are not exactly as straightforward as “this robot was secretly a human in a robot suit”.

(Sentient: able to perceive/feel things; sapient: specifically, possessing intelligence. Both bear a degree of relation to humanity through what they were created from.)
Kind of. I’m saying that “this X is sentient” is correlated but not identical to “I care about them as people”, and even less identical to “everyone must care about them as people”. In fact, even the moral connotations of “human in a robot suit” are complex and uneven.
Separately, your definition seems to be inward-focused, and roughly equivalent to “have qualia”. This is famously difficult to detect from outside.
It’s true. The general definition of sentience, once it goes beyond merely having senses and responding to stimuli, tends to involve qualia.
I do think it’s worth noting that even if you went so far as to say “I and everyone must care about them as people”, the moral connotations aren’t exactly straightforward. They need input to exist as dynamic entities. They aren’t person-shaped. They might not have desires, or their desires might be purely prediction-oriented, or perhaps we don’t actually care about the thinking panpsychic landscape of the AI itself but only about the person-shaped things it conjures to interact with us, which have numerous conflicting desires and questionable degrees of ‘actual’ existence. If you’re fighting ‘for’ them in some sense, what are you fighting for, and does it actually ‘help’ the entity or just move them towards your own preferences?
If by “famously difficult” you mean “literally impossible”, then I agree with this comment.