As someone with personal experience with a tulpa, I agree with most of this.
I estimate its ontological status to be similar to a video game NPC, recurring dream character, or schizophrenic hallucination.
I agree with the last two, but I think a video game NPC has a different ontological status than any of those. I also believe that schizophrenic hallucinations and recurring dream characters (and tulpas) can probably cover a broad range of ontological possibilities, depending on how “well-realized” they are.
I estimate a well-developed tulpa’s moral status to be similar to that of a newborn infant, late-stage Alzheimer’s victim, dolphin, or beloved family pet dog.
I have no idea what a tulpa’s moral status is, besides not less than a fictional character and not more than a typical human.
I estimate its power over reality to be similar to a human (with lower intelligence than their host) locked in a box and only able to communicate with one specific other human.
I would expect most of them to have about the same intelligence, rather than lower intelligence.
You are probably counting more of the properties things can vary along as “ontological”. I’m mostly going by software vs. hardware, needs to be puppeteered vs. automatic, and able to interact with the environment vs. stuck in a simulation here.
I’m basing the moral status largely on “well realized”, “complex” and “technically sentient” here. You’ll notice all my examples ALSO have the actual utility function multiplier at “unknown”.
Most tulpas probably have almost exactly the same intelligence as their host, but not all of it stacks with the host’s, and thus counts towards its power over reality.
Ah. I see what you mean. That makes sense.