same way as any other part of the body, yes
That’s a bit glib. Most body parts are not self-aware, as far as we know.
hmm. I do in fact, without humor, think most body parts are independently moral patients, though; and I also think self-awareness is entirely optional in order for a system to be a moral patient. Instead, it need only have other-awareness and at least near-counterfactual ability to take coherent friendly action, which seems like a valid and useful description of internal co-protective agency across much of the body, and certainly throughout the brain.
(sidenote: I currently think tulpas are just one kind of plurality, and the neural patterns vary between types of multiplicity, with shared structure about how the multiple subnets interact but with different splits into subnetworks for different kinds. I don’t want to bucket-error tulpa vs other kinds of neurological agentic multiplicity, I just think the various kinds of internal biological multiplicity share important structures, such as that all parts have significant moral patienthood.)
Perhaps the question is whether they should be granted separate decision-making rights? my view is that that’s a question of whether the neurons that, in consensus, make up the smaller/”guest”/constructed tulpa plural component should have separate rights to the body they steer; in general, I’d say I only grant one brain’s worth of body rights to a single brain, but a brain can host multiple agentic, coherent, and distinct personalities. when those agencies conflict, it’s an internal fight, in principle like a conflict between one brain module and another, so I don’t think the moral patienthood evaluation is fundamentally different just because of a deeper split in agency and aesthetics between the parts.
(another sidenote: afaict, personalities are normally stored in superposition across many modules, and the reason most people aren’t multiple is that moods are far, far more densely interconnected in their neurocircuitry than distinct personalities are with each other. I’m not a real neuroscientist, though, just a moderately well-read ML nerd, so I could have gotten this pretty badly wrong. in particular, DID plurality seems to involve really intense disconnection, and afaict disordered plurality is basically defined by internal incoherence between parts, whereas healthy plurality can be quite similar to DID in level of distinctness but with greater connection between parts as a result of internal friendship. I’m more or less a coherent single agent with lots of internal disagreements between modular parts, like most people appear to be, so I’m pretty sure any plural systems passing by would have Lots Of Critiques Of My Takes and maybe not want to spend the time to comment if they’ve already corrected too many people today. but here’s my braindump, and hopefully it’s close enough to on-point that my original comment’s point is at least useful now.)
Hmm, that’s an interesting take; “self-awareness is entirely optional in order for a system to be a moral patient. Instead, it need only have other-awareness and at least near-counterfactual ability to take coherent friendly action” might be worth a post. This does not seem like a common view.
I posted a separate answer discussing multiple identities in one body (having known rather closely several people with DID); your take here seems not very different. To the best of my understanding, it’s more like several programs running at once on the same wetware, but, unlike with hardware, there is no clear separation between entities in terms of the hardware used. The only competition is for shared resources, such as being in the foreground and interacting with the outside world directly, rather than exerting passive influence, being suspended, or running headless. This is only my observation, though; I don’t have first-hand experience, only second-hand.
Still, this is different from saying that, say, a thumb is a moral patient, or that a kidney is.