I don’t think I’m bundling anything, but I can see how it would seem that way. My post is only about whether tulpas are people / moral patients.
I think that the question of personhood is independent of the question of how to aggregate utility or how to organize society, so I think that arguments about the latter have no bearing on the former.
I don’t have an answer for how to properly aggregate utility, or how to properly count votes in an ideal world. However, I would agree that in the current world, votes and other legal things should be done based on physical bodies, because there is no way to check for tulpas at this time.
I had zero idea what a tulpa is before reading this, and did a light, unguided independent search to get even some idea. I do not think this was unexpected. A definition would have been really nice, or a concrete situation rather than raw concepts. A serious contender was that this is a sci-fi fiction question, such as how ethics apply to Lain of Serial Experiments Lain. I was wondering whether Vax’ildan is a tulpa (that is at least factual). There is also a meme that “you are your masks”; does that deal with tulpas?
If I wanted to talk about trees, I could give you a definition of a tree, or a situation that involves trees, but neither of those would really make you understand on a deep level what trees are about.
Fictional examples are different in the sense that you can gather all the knowledge about the fictional entity by reading the fictional work. With fictional examples, you don’t have to worry about the difference between the ground reality and the description of it.
That’s fair. I’ve been trying to keep my statements brief and to the point, and did not consider the audience of people who don’t know what tulpas are. Thank you for telling me this.
The word “tulpa” is not precisely defined and there is not necessarily complete agreement about it. However, I have a relatively simple definition which is more precise and more liberal than most definitions (that is, my definition includes everything usually called a tulpa and more, and is not too mysterious), so I’ll just use my definition.
It’s easiest to first explain my own experience with creating tulpas, then relate my definition to that. Basically, to create tulpas, I think about a personality, beliefs, desires, knowledge, emotions, identity, and a situation. I refer to keeping these things in my mind as forming a “mental model” of a person. Then I let my subconscious figure out what someone like this mental model would do in this situation. Then I update the mental model according to the answer, and repeat the process with the new mental model, in a loop.
In this way I can have conversations with the tulpa, and put them in almost any situation I can imagine.
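The loop described above can be sketched mechanically. This is a purely illustrative toy, not a claim about how minds work: the `respond` function is a hypothetical stand-in for the subconscious step, which is of course not algorithmic.

```python
# Illustrative sketch only: the mental-model update loop described above.
# respond() is a hypothetical placeholder for letting the subconscious
# produce a human-like answer; a real mind is nothing like this lookup.

def respond(model, situation):
    # Placeholder for the subconscious step.
    return f"{model['identity']} reacts to {situation}"

def step(model, situation):
    """One iteration: compute a response, then fold it back into the model."""
    answer = respond(model, situation)
    new_model = dict(model)
    new_model["history"] = model["history"] + [answer]
    return new_model

model = {"identity": "the tulpa", "history": []}
for situation in ["a greeting", "a follow-up question"]:
    model = step(model, situation)  # conversation emerges from repeating this
```

The point of the sketch is only the structure: respond, update, repeat.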
So I would define a tulpa this way: A tulpa is the combination of information in the brain encoding a mental model of a person, plus the human intelligence computing how the mental model evolves in a human-like way.
My definition is more liberal than most definitions, because most people who agree that tulpas are people seem to make a strong distinction between characters and tulpas, but I don’t make a strong distinction and this definition also includes many characters.
And to not really answer your direct questions: I don’t know Serial Experiments Lain, and you’re the person who’s in the best position to figure out if Vax’ildan is a tulpa by my definition. As for “you are your masks”, I’m not sure. I know that some people report naturally having multiple personalities and might like the mask metaphor, but I don’t personally experience that so I don’t have much to say about it, except that it doesn’t really fit my experiences.
(I do not create new tulpas anymore for ethical reasons.)
Referring to the process is excellent, and even better than leaning on a definition.
With that take, in the fictional world Lain is a tulpa. Vax’ildan running on Slider (or rather, on the human behind the pseudonym) is not, but running on O’Brien he probably is. I feel like the delineating line for “you are your masks” is that those are created accidentally or as a byproduct, and are disqualified for lack of a decision to opt in. (The other candidate criterion would be that they are not individuated enough.)
It is not clear to me why creating tulpas would be immoral. If it is inherently so, you should head off to cancel Critical Role and G. R. R. Martin. Or is the involvement of a magic circle, where the arena of the tulpa is limited and well-defined, relevant to whether it is proper?
Some guesses which I don’t think are good enough to convince me:
Ontological inertia option: 1) Terminating a tulpa is bad for reasons that homicide is bad. 2) Having a tulpa around increases the need to terminate it. 3) Creating a tulpa brings about 2), which leads to 1).
Scapegoat option: If you ever talk with your tulpa about anything important, it affects what you do. You might not be able to identify which bits are because of the tulpa, and you might wrongly blame your tulpa. Thus it can be an avenue to dodge responsibility for your life. (Percy influences how Jaffe plays his other characters; it is doing cognitive work.)
Designer human option: Manifesting a Mary Sue is playing god in a bad way. It is a way to have a big influence on your life which is drastic, hard to predict, and locked in (“Jesus take the wheel”, where the driver is not a particularly good person or driver).
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after. Does everyone who thinks about being Superman vividly enough share the character but have a distinct tulpa of him? Or is it that characters are less defined, while tulpas are more fleshed out and complete in their characterization?
Terminating a tulpa is bad for reasons that homicide is bad.
That is exactly my stance. I don’t think creating tulpas is immoral, but I do think killing them, harming them, and lying to them is immoral for the same reasons it’s immoral to do so to any other person. Creating a tulpa is a big responsibility and not one to take lightly.
you should head off to cancel Critical Role and G. R. R. Martin.
I have not consumed the works of the people you are talking about, but yes, depending on how exactly they model their characters in their minds, I think it’s possible that they are creating, hurting, and then ending lives. There’s nothing I can do about it, though.
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after.
I don’t really know. I’m basing my assertion that I make less of a distinction between characters and tulpas than other people on the fact that I see a lot of people with tulpas who continue to write stories, even though I don’t personally see how I could write a story with good characterization without creating tulpas.
Hmm, the series and character Mr. Robot, and the Architect.
One of the terminological differences in my quick look was that ceasing to have a tulpa was also referred to as “integration”. That would seem to be a distinction of similar relevance as a firm going bankrupt versus merging.
I think there is some ground here where I should not agree to disagree. But currently I am thinking that singlet personalities have less relevance than I thought and harm/suffering is bad in a way that is not connected to having an experiencer experience it.
I think integration and termination are two different things. It’s possible for two headmates to merge and produce one person who is a combination of both. This is different from dying, and if both consent, then I suppose I can’t complain. But it’s also possible to just terminate one without changing the other, and that is death.
But currently I am thinking that singlet personalities have less relevance than I thought and harm/suffering is bad in a way that is not connected to having an experiencer experience it.
I don’t understand what you mean by this. I do think that tulpas experience things.
It is a bit murky what kind of delineation those who do make a division between characters and tulpas are after. Does everyone who thinks about being Superman vividly enough share the character but have a distinct tulpa of him? Or is it that characters are less defined, while tulpas are more fleshed out and complete in their characterization?
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me: when I can’t will it away, when it resists me, when it’s self-sustaining. Alters usually feel other in some sense, whereas a sim feels internal and dependent on you. Like if you ceased to exist, the sim would vanish but the tulpa would survive.
So if you think about Superman enough that he starts commenting on your choice of dinner, or if he independently criticizes your choice of phrasing in an online forum, that’s definitely plural territory. (Or if they briefly front to tell you not to say something at all, that’s a big sign.)
But if you briefly imagine him having a convo with another superhero, then dismiss both from your mind and don’t think about them for days on end, you’re probably not in that territory.
Being fleshed out vs incomplete is another dimension; I usually think of this as strength or presence.
As for creating a tulpa… well… moral stuff aside, you’re adding a process to your mind that you might not be able to get rid of. It won’t be your life anymore; it’ll be theirs too. You won’t necessarily be able to control how they grow, either, since tulpas often develop beyond their initial starting traits.
I would say that it ceases to be a character and becomes a tulpa when it can spontaneously talk to me: when I can’t will it away, when it resists me, when it’s self-sustaining.
I disagree with this. Why should it matter if someone is dependent on someone else to live? If I’m in the hospital and will die if the doctors stop treating me, am I no longer a person because I am no longer self-sustaining? If an AI runs a simulation of me, but has to manually trigger every step of the computation and can stop anytime, am I no longer a person?
You’re confusing heuristics designed to apply to human plurality with absolute rules. Neither of your edge cases is possible in human plurality (alters share a computational substrate, and I can’t inject breakpoints into them). Heuristics always have weird edge cases; that doesn’t mean they aren’t useful, just that you have to be careful not to apply them to out-of-distribution data.
The self-sustainability heuristic is useful because anything that’s self-sustaining has enough agency that if you abuse it, it’ll go badly. Self-sustainability is the point at which a fun experiment stops being harmless and you’ve got another person living in your head, the point at which all bets are off and whatever you made is going to grow on its own terms.
And in addition, if it’s self-sustaining, it’s probably also got a good chunk of wants, personality depth, etc.
I don’t think there are any sharp dividing lines here.
Your heuristic is only useful if it’s actually true that being self-sustaining is strongly correlated with being a person. If this is not true, then you are excluding things that are actually people based on a bad heuristic. I think it’s very important to get the right heuristics: I’ve been wrong about what qualified as a person before, and I have blood on my hands because of it.
I don’t think it’s true that being self-sustaining is strongly correlated with being a person, because being self-sustaining has nothing to do with personhood, and because in my own experience I’ve been able to create mental constructs which I believe were people and which I was able to start and stop at will.
Edit: You provided evidence that being self-sustaining implies personhood with high probability, and I agree with that. However, you did not provide evidence of the converse, nor for your assertion that it’s not possible to “insert breakpoints” in human plurality. This second part is what I disagree with.
I think there are some forms of plurality where it’s not possible to insert breakpoints, such as your alters, and some forms where it is possible, such as mine, and I think the latter is not too uncommon, because I did it unknowingly in the past.
I don’t understand what you mean by this.
I mean that if I lost my personality, or it were destroyed, I would not consider that morally problematic in itself.