We talk a lot about whether animals are conscious and to what extent, but I have seen little discussion about whether tulpas should be considered conscious and to be moral patients.
Is there any serious philosophy done on the topic?
Multiple identities in one brain/body can arguably be considered separate moral patients, whether they are naturally occurring through a brain quirk, a childhood trauma, iatrogenically induced by a hapless therapist or a malevolent cult leader, or intentionally created by the “original”.
Tulpas are not special that way.
There is a spectrum of identity consciousness and self-awareness, ranging from a vague fragment to a fully separate and conscious mind. Presumably one should give more moral weight to the identities that are more developed, but the issue is rather complicated.
My belief is that yes, tulpas are people of their own (and therefore moral patients). My reasoning is as follows.
If I am a person and have a tulpa and they are not a person of their own, then there must either (a) exist some statement which is a requirement for personhood and which is true about me but not true about the tulpa, or (b) the tulpa and I must be the same person.
In the case of (a), tulpas have analogues to emotions, desires, beliefs, personality, sense of identity, and they behave intelligently. They seem to have everything that I care about in a person. Your mileage may vary, but I’ve thought about this subject a lot and have not been able to find anything that tulpas are missing which seems like it might be an actual requirement for personhood. Note that a useful thought experiment when investigating possible requirements for personhood that tulpas don’t meet is to imagine a non-tulpa with an analogous disability, and see if you would still consider the non-tulpa with that disability to be a person.
Now, if we grant that the tulpa is a person, we must still show that (b) is wrong, and that they are not the same person as their headmate. My argument here is also very simple. I simply observe that tulpas have different emotions, desires, beliefs, personality, and sense of identity than their headmate. Since these are basically all the things I actually care about in a person, it doesn’t make sense to say that someone who differs in all those ways is the same. In addition, I don’t think that sharing a brain is a good reason to say that they are the same person, for a similar reason to why I wouldn’t consider myself to be the same person as an AI that was simulating me inside its own processors.
Obviously, as with all arguments about consciousness and morality, these arguments are not airtight, but I think they show that the personhood of tulpas should not be easily dismissed.
Edit: I’ve provided my personal definition of the word “tulpa” in my second reply to Slider below. I do not have a precise definition of the word “person”, but I challenge readers to try to identify what difference between tulpas and non-tulpas they think would disqualify a tulpa from being a person.
I don’t know the terminology that well, but it seems that this analysis is bundling together a lot of stuff that might come apart in this context.
People that do not have (additional) tulpas have one information-processing system that houses one personality. Call the discrete information-processing system a “collective”, and call the thing that has psychological traits, states, and beliefs a “personality”. The usual configuration, a collective of one personality, is apparently called a singlet.
One could argue that humans get their social standing based on their collective rather than their personality. If there is a cookie jar with a sign reading “one cookie per person”, under this theory that would imply that a collective is designated only one cookie and gets the calories only once (though if sweetness experiences are what is meant, two might be appropriate, especially if the personalities can’t participate in the same cookie-munching). For some things it could make sense that humans get their standing from having a unique psychological viewpoint. If there is a need to vote on what a group of people is going to do, then under this take each personality gets a vote, so a two-personality collective gets to use two votes, and this is fair towards the singlets (or, if voting standing is based on the additional cohesion imposed by acting as a group, the collective gets a single vote, since the cohesion between the personalities is pre-established and counting it again would be double counting).
Then there is the possibility of a collective of zero personalities. It seems that, at the very least, such a collective can’t take overtly egoic action.
I don’t think I’m bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients.
I think that the question of personhood is independent of the question of how to aggregate utility or how to organize society, so I think that arguments about the latter have no bearing on the former.
I don’t have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.
I had zero idea what a tulpa is before reading this, and did an independent, unguided light search to get even some idea. I do not think this was an unusual position to be in. A definition would have been really nice, or a concrete situation rather than raw concepts. I seriously entertained the possibility that this is a sci-fi fiction question, such as how ethics apply to Lain of Serial Experiments Lain. I was wondering whether Vax’ildan is a tulpa (that one is at least factual). There is also a meme that “you are your masks”; does that deal with tulpas?
If I wanted to talk about trees, I could give you a definition of a tree or a situation that involves trees, but neither of those would really make you understand on a deep level what trees are about.
Fictional examples are different in the sense that you can gather all the knowledge about the fictional entity by reading the fictional work. With fictional examples, you don’t have to worry about the difference between the ground reality and the description of it.
That’s fair. I’ve been trying to keep my statements brief and to the point, and did not consider the audience of people who don’t know what tulpas are. Thank you for telling me this.
The word “tulpa” is not precisely defined and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I’ll just use my definition.
It’s easiest to first explain my own experience with creating tulpas, then relate my definition to that. Basically, to create tulpas, I think about a personality, beliefs, desires, knowledge, emotions, identity, and a situation. I refer to keeping these things in my mind as forming a “mental model” of a person. Then I let my subconscious figure out what someone like this mental model would do in this situation. Then I update the mental model according to the answer, and repeat the process with the new mental model, in a loop.
In this way I can have conversations with the tulpa, and put them in almost any situation I can imagine.
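The update loop described above can be sketched in code. This is purely an illustrative toy of my own, not anything from the original: the names `MentalModel`, `respond`, and `step` are inventions, and the string-matching "subconscious" stand-in is of course nothing like real subconscious inference. It only shows the loop's shape: query the model in a situation, record the result, and carry the updated model into the next iteration.

```python
from dataclasses import dataclass, field

@dataclass
class MentalModel:
    """Toy stand-in for a mental model: some traits plus a running history."""
    beliefs: dict
    history: list = field(default_factory=list)

def respond(model: MentalModel, situation: str) -> str:
    # Toy stand-in for the subconscious step: "what would someone like
    # this mental model do in this situation?" (hypothetical logic)
    if "greet" in situation:
        return "hello" if model.beliefs.get("friendly") else "..."
    return "hmm"

def step(model: MentalModel, situation: str) -> str:
    # One iteration of the loop: query the model, then update it with
    # what just happened, and carry the updated model forward.
    answer = respond(model, situation)
    model.history.append((situation, answer))
    return answer

model = MentalModel(beliefs={"friendly": True})
first = step(model, "you greet them")    # "hello"
second = step(model, "you ask a question")  # "hmm"
```

The point of the sketch is only that the "tulpa" lives in the combination of the stored state and the process that evolves it, which is what the definition below formalizes.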
So I would define a tulpa this way: A tulpa is the combination of information in the brain encoding a mental model of a person, plus the human intelligence computing how the mental model evolves in a human-like way.
My definition is more liberal than most definitions, because most people who agree that tulpas are people seem to make a strong distinction between characters and tulpas, but I don’t make a strong distinction and this definition also includes many characters.
And to not really answer your direct questions: I don’t know Serial Experiments Lain, and you’re the person who’s in the best position to figure out if Vax’ildan is a tulpa by my definition. As for “you are your masks”, I’m not sure. I know that some people report naturally having multiple personalities and might like the mask metaphor, but I don’t personally experience that so I don’t have much to say about it, except that it doesn’t really fit my experiences.
(I do not create new tulpas anymore for ethical reasons.)
Reference to process is excellent and even better than leaning on a definition.
With that take, in the fictional world Lain is a tulpa. Vax’ildan running on Slider (or rather the human behind the pseudonym) is not, but running on O’Brien he probably is. I feel like the delineating line for “you are your masks” is that those are created accidentally or as a byproduct, and so are disqualified for lack of a decision to opt in. (The other candidate criterion would be that they are not individuated enough.)
It is not clear to me why creating tulpas would be immoral. If it is inherently so, you should head off to cancel Critical Role and G. R. R. Martin. Or is the involvement of a magic circle, where the arena of the tulpa is limited and well-defined, relevant, such that creation within it is not improper?
Some guesses which I don’t think are good enough to convince me:
Ontological inertia option: 1) Terminating a tulpa is bad for the reasons that homicide is bad. 2) Having a tulpa around increases the need to terminate it. 3) Creating a tulpa brings about 2, which leads to 1.
Scapegoat option: If you ever talk with your tulpa about anything important, it affects what you do. You might not be able to identify which bits are because of the tulpa. You might wrongly blame your tulpa. Thus it can be an avenue to dodge responsibility for your life. (Percy influences how Jaffe plays his other characters; it is doing cognitive work.)
Designer human option: Manifesting a Mary Sue is playing god in a bad way. It is a way to have a big influence on your life which is drastic, hard to predict, and locked-in (“Jesus take the wheel”, where the driver is not a particularly good person or driver).
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after. Does everyone that thinks about being Superman vividly enough share the character but have a distinct tulpa of him? Or is it that characters are less defined and tulpas are more fleshed out and complete in their characterization?
That is exactly my stance. I don’t think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it’s immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.
I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it’s possible that they are creating, hurting, and then ending lives. There’s nothing I can do about it, though.
I don’t really know. I’m basing my assertion that I make less of a distinction between characters and tulpas than other people on the fact that I see a lot of people with tulpas who continue to write stories, even though I don’t personally see how I could write a story with good characterization without creating tulpas.
Hmm the series and character Mr Robot and Architect.
One of the terminological differences in my quick look was that ceasing to have a tulpa was also referred to as “integration”. That would seem to be a distinction of similar relevance to whether a firm goes bankrupt or merges.
I think there is some ground here on which I should not agree to disagree. But currently I am thinking that singlet personalities have less relevance than I thought, and that harm/suffering is bad in a way that is not connected to having an experiencer experience it.
I think integration and termination are two different things. It’s possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can’t complain. But it’s also possible to just terminate one without changing the other, and that is death.
I don’t understand what you mean by this. I do think that tulpas experience things.
I mean that if I lost my personality or it would get destroyed I would not think that as morally problematic in itself.
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me: when I can’t will it away, when it resists me, when it’s self-sustaining. Alters usually feel other in some sense, whereas a sim feels internal and dependent on you. Like if you ceased to exist, the sim would vanish but the tulpa would survive.
So if you think about Superman enough that he starts commenting on your choice of dinner, or if he independently criticizes your choice of phrasing in an online forum, that’s definitely plural territory. (Or if he briefly fronts to tell you not to say something at all, that’s a big sign.)
But if you briefly imagine him having a conversation with another superhero and then dismiss both from your mind and don’t think about them for days on end, you’re probably not plural.
Being fleshed out vs incomplete is another dimension, I usually think of this as strength or presence.
As for creating a tulpa… well… moral stuff aside you’re adding a process to your mind that you might not be able to get rid of. It won’t be your life anymore—it’ll be theirs too. You won’t necessarily be able to control how they grow either, since tulpas often develop beyond their initial starting traits.
I disagree with this. Why should it matter if someone is dependent on someone else to live? If I’m in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self-sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?
You’re confusing heuristics designed to apply to human plurality with absolute rules. Neither of your edge cases is possible in human plurality (alters share a computational substrate, and I can’t inject breakpoints into them). Heuristics always have weird edge cases; that doesn’t mean they aren’t useful, just that you have to be careful not to apply them to out-of-distribution data.
The self-sustainability heuristic is useful because anything that’s self-sustaining has enough agency that if you abuse it, it’ll go badly. Self-sustainability is the point at which a fun experiment stops being harmless and you’ve got another person living in your head. Self-sustainability is the point at which all bets are off and whatever you made is going to grow on its own terms.
And in addition, if it’s self sustaining, it’s probably also got a good chunk of wants, personality depth, etc.
I don’t think there are any sharp dividing lines here.
Your heuristic is only useful if it’s actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it’s very important to get the right heuristics: I’ve been wrong about what qualified as a person before, and I have blood on my hands because of it.
I don’t think it’s true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my own experience I’ve been able to create mental constructs which I believe were people and which I was able to start and stop at will.
Edit: You provided evidence that being self-sustaining implies personhood with high probability, and I agree with that. However, you did not provide evidence of the converse, nor for your assertion that it’s not possible to “insert breakpoints” in human plurality. This second part is what I disagree with.
I think there are some forms of plurality where it’s not possible to insert breakpoints, such as your alters, and some forms where it is possible, such as mine, and I think the latter is not too uncommon, because I did it unknowingly in the past.
Arguably there has been a lot of work done on this topic; it’s just smeared out into different labels, and the trick is to notice when different labels are being used to point to the same things. Tulpas, characters, identities, stories, memes, narratives: they’re all the same. Are they important for being able to ground yourself in your substrate and for providing you with a map to navigate the world by? Yes. Do they have moral patiency? Well, now we’re getting into dangerous territory, because “moral patiency” is itself a narrative construct. One could argue that in a sense the character is more “real” than the thinking meat is, or that the character matters more and is more important than the thinking meat, but of course the character would think that from the inside.
It’s actually even worse than that, because “realness” is also a narrative construct, and where you place the pointer for it is going to have all sorts of implications for how you interpret the world and what you consider meaningful. Is it more important to preserve someone’s physical body, or their memetic legacy? Would you live forever if it meant you changed utterly and became someone else to do it, or would you rather die but have your memetic core remain embedded in the world for eternity? What’s more important, the soul or the stardust? Sure the stardust is what does all the feeling and experiencing, but the soul is the part that actually gets to talk. Reality doesn’t have a rock to stand on in the noosphere, everything you’d use as a pointer towards it could also point towards another component of the narrative you’re embedded within. At least natural selection only acts along one axis, here, you are torn apart.
Moral patiency itself is a part of the memetic landscape which you are navigating, along with every other meme you could be using to discover, decide, and determine the truth (which in this case is itself a bunch of memes). This means that the question you’re asking is less along the lines of “which type of fuel will give me the best road performance” and more like “am I trying to build a car or a submarine?”
Sometimes it’s worth considering tulpas as moral patients, especially because they can sometimes manifest out of repressed desires and unmet needs, meaning they might be a better pointer to a person’s needs than what that person was telling you before the tulpa showed up. However, if you’re going to play the utilitarian sand-grain-counting game? Tulpas are a huge leak: they basically let someone turn themselves into a utility monster simply by bifurcating their internal mental landscape, and it would be very unwise not to consider the moral weight of a given tulpa as equal to X/n, where n is the number of members within their system. If you’re a deontologist, you might be best served by splitting the difference and considering the tulpas as moral patients but the system as a whole as a moral agent, to prevent the laundering of responsibility between headmates.
Overall, if you just want a short easy answer to the question asked in the title: No.
This is a problem that arises in any hypothetical where someone is capable of extremely fast reproduction, and is not specific to tulpas. So I don’t think that invoking utility monsters is a good argument for why tulpas should only be counted as a fraction of a person.
Regarding your other points, I think that you take the view of narratives too far. What I see, hear, feel, and think, in other words my experiences, are real. (Yes, they are reducible to physics, but so is everything else on Earth, so I think it’s fair to use the word “real” here.) I don’t see in what way experiences are similar to a meme, and unlike what the word narrative implies, I don’t think they are post-hoc rationalizations.
I know there are studies that show that people will often come up with post-hoc rationalizations for why they did something. However, there have been many instances in my life where I consciously thought about something and came to a conclusion which surprised me and changed my behavior, and where I remembered all the steps of my conscious reasoning, such that it seems very unlikely that the conscious chain of reasoning was invented post-hoc.
In addition, being aware of the studies, I’ve found that if I pay attention I can often notice when I don’t actually remember why I did something and I’m just coming up with a plausible-seeming explanation, vs when I actually remember the actual thought process that led to a decision. For this reason I think that post-hoc rationalizations are a learned behavior and not fundamental to experience and personhood / moral patients.
We’ve all heard the idea that there exist two selves: the self that exists in your own mind, and the self that exists inside the perceptions of others.
Intentionally created “tulpas” must be similar to the emulations of the many people I’ve closely interacted with: the ones that exist lurking in my subconscious mind, instantiated via my intuitions of how they’d respond to a question, or by wondering what gifts they would appreciate.
How about dream characters? Is it wrong to murder dream characters, and should we strive to lengthen dream time to give them all a longer, more fulfilled life?
Even the morality of sci-fi brain emulation is murky to me, let alone the kind of emulation we all do unconsciously ourselves. I’d have to hear a very convincing argument to separate tulpas that say “hi, I’m here and alive!” from dream characters that do the same thing, or from other illusions like ChatGPT.
One difference is that the kind of emulation you have of other people doesn’t tend to worry about its own existence. Tulpas tend to worry, unprompted, about their own existence.