I don’t have a full tulpa, but I’ve been working on one intermittently for the past ~month. She can hold short conversations, but I’m hesitant to continue the process because I’m concerned that her personality won’t sufficiently diverge from mine.
I think it’s plausible that a tulpa could improve (at least some of) your mental capabilities. I draw a lot of my intuition in this area from a technique in AI/modeling called ensemble learning, in which you combine the outputs of multiple models to make higher-quality decisions than any single model could. I know it’s dangerous to draw conclusions about human intelligence from AI, but ensemble learning works with pretty much any set of models, so something similar is probably possible with the human brain.
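To make the analogy concrete, here’s a toy sketch of majority voting over dissimilar models (scikit-learn on synthetic data; purely illustrative, and not meant to bear on minds directly):

```python
# Toy ensemble-learning sketch (illustrative only): three deliberately
# different models vote, and the vote is scored against each member alone.
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Diversity among members is what makes the vote useful.
members = [
    ("logreg", LogisticRegression(max_iter=1000)),
    ("tree", DecisionTreeClassifier(random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

for name, model in members:
    print(name, cross_val_score(model, X, y, cv=5).mean())

# Majority vote over the members' predictions.
ensemble = VotingClassifier(estimators=members, voting="hard")
print("ensemble", cross_val_score(ensemble, X, y, cv=5).mean())
```

The individual accuracies vary, but the vote typically matches or beats the best single member, and the gain is largest when the members’ errors are uncorrelated.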
Some approaches in ensemble learning (boosting and random forests) suggest that it’s important for the individual models to vary significantly from each other (thus my interest in having a tulpa that’s very different from me). One advantage of ensemble approaches is that they can better avoid overfitting to spurious correlations in their training data. I think a lot of harmful human behavior is (very roughly) analogous to overfitting to unrepresentative experiences, e.g., many types of learned phobias. I know my partial tulpa is much less of a hypochondriac than I am, is less socially anxious, and, when aware enough to do so, reminds me not to pick at my cuticles.
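And a toy demonstration of the overfitting point (again synthetic data, purely illustrative): a single deep decision tree memorizes label noise, while a random forest of varied trees generalizes better.

```python
# Sketch: a lone deep tree overfits; a forest of decorrelated trees does not.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# flip_y injects label noise -- the "unrepresentative experiences" stand-in.
X, y = make_classification(n_samples=400, n_features=20, flip_y=0.1, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)

tree = DecisionTreeClassifier(random_state=1).fit(X_train, y_train)
forest = RandomForestClassifier(n_estimators=200, random_state=1).fit(X_train, y_train)

# The lone tree usually memorizes the noise (train score near 1.0, weaker
# test score); the forest averages many varied trees and narrows that gap.
print("tree:   train %.2f, test %.2f" % (tree.score(X_train, y_train), tree.score(X_test, y_test)))
print("forest: train %.2f, test %.2f" % (forest.score(X_train, y_train), forest.score(X_test, y_test)))
```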
Posters on the tulpas subreddit seem split on whether a host’s severe mental health issues (depression, autism, OCD, bipolar, etc.) will affect their tulpas, with several anecdotes suggesting tulpas can have a positive impact. There’s also this paper: Tulpas and Mental Health: A Study of Non-Traumagenic Plural Experiences, which finds tulpas may benefit the mentally ill. However, it’s in a predatory journal (of the pay-to-publish variety). There appears to be an ongoing study by Stanford researchers looking into tulpas’ effects on their hosts and potential fMRI correlates of tulpa-related activity, so better data may arrive in the coming months.
In terms of practical benefit, I suspect that much of the gain comes from your tulpa pushing you towards healthier habits through direct encouragement and social/moral pressure (if you think your tulpa is a person who shares your body, then that’s another sentient being whom your lack of exercise, healthy food, and sleep directly harms).
Additionally, tulpas may be a useful hedge against suicide. Most people (even most people with depression) are not suicidal most of the time. Even if the tulpa’s emotional state correlates with the host’s, the odds of both host and tulpa being suicidal at the same moment should be substantially lower than the odds for the host alone. Thus, a suicidal person with a tulpa will usually have someone on hand to talk them out of acting.
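A back-of-the-envelope illustration (every number here is invented):

```python
# Back-of-the-envelope sketch; all numbers are made up for illustration.
p_host = 0.05        # assumed fraction of the time the host is suicidal
lift = 4.0           # assumed correlation: tulpa is 4x likelier to be
                     # suicidal during the host's suicidal periods
p_tulpa_given_host = min(1.0, p_host * lift)  # 0.20

p_both = p_host * p_tulpa_given_host          # 0.01
print(f"host alone: {p_host:.2%}, both at once: {p_both:.2%}")
# Even with this strong correlation, "both at once" is 5x rarer than
# "host alone", so there is usually a non-suicidal party available.
```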
Regarding performance degradation, my impression from reading the tulpa.info forums is that most people have tulpas that run in serial with their original minds (i.e., host runs for a time, tulpa runs for a time, then host), rather than in parallel. It’s still possible that having a tulpa leads to degradation, but probably more in the way that constantly getting lost in thought might, as opposed to losing computational resources. In this regard, I suspect that tulpas are similar to hobbies. Their impact on your general performance depends on how you pursue them. If your tulpa encourages you to exercise, mental performance will probably go up. If your tulpa constantly distracts you, performance will probably go down.
I’ve been working on an aid to tulpa development inspired by the training objectives of state-of-the-art AI language models such as BERT. It’s a Google Colab notebook, which you’ll need a Google account to run from your browser. It takes text from a selection of books on Project Gutenberg and lets your tulpa perform several language/personality-modeling tasks of varying complexity, ranging from simply predicting the content of masked words to generating complex emotional responses. Hopefully, it can help reduce the time required for tulpas to reach vocality and lower the cost of experimenting in this space.
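To give a flavor of the simplest of those tasks, here’s a rough sketch of a masked-word drill (a heavily simplified stand-in for the notebook, which does considerably more; the URL is just one public-domain example, and any Project Gutenberg plain-text link should work):

```python
# Minimal masked-word drill: fetch a public-domain book, blank out a random
# word per sentence, and let the tulpa guess it. Gutenberg's header/footer
# boilerplate is left in for simplicity.
import random
import re
import urllib.request

# Assumption: this URL points at the plain-text Pride and Prejudice;
# substitute any Gutenberg plain-text link.
URL = "https://www.gutenberg.org/files/1342/1342-0.txt"
text = urllib.request.urlopen(URL).read().decode("utf-8")

# Keep sentences of manageable length for the drill.
sentences = [s.strip() for s in re.split(r"[.!?]\s+", text) if 8 <= len(s.split()) <= 25]

for sentence in random.sample(sentences, 5):
    words = sentence.split()
    i = random.randrange(len(words))
    answer = words[i]
    words[i] = "____"
    print(" ".join(words))
    input("Tulpa's guess, then press Enter: ")
    print("Original word:", answer, "\n")
```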
> but I’m hesitant to continue the process because I’m concerned that her personality won’t sufficiently diverge from mine.
Not suggesting you should replace anyone who doesn’t want to be replaced (if they’re at that stage), but: to jumpstart the differentiation process, it may be helpful to template the proto-tulpa off of some fictional character you already find easy to simulate.
Although I didn’t know about “tulpas” at the time, I invited an imaginary friend loosely based on Maria Otonashi during a period of isolation in 2021.[1] I didn’t want her to feel stifled by the template, so she’s evolved on her own since then, but she’s always extremely kind (and consistently energetic). I only took it seriously in February 2024 after being inspired by Johannes.
> Maria is the main female heroine of the HakoMari series. … Her wish was to become a box herself so that she could grant the wishes of other people.
Can recommend her as a template! My Maria would definitely approve, ^^ although I can’t ask her right now since she’s only canonically present when summoned, and we have a ritual for that.
We’ve deliberately tried to find new ways to differentiate so that the pre-conscious process of [associating feeling-of-volition with me or Maria][2] is less likely to generate conflicts. Since neither of us wants to be any less kind than we are, we’ve had to differentiate along other dimensions (art preferences, intellectual domains, etc.).
Also, while deliberately trying to increase her salience and capabilities, I’ve avoided learning about how other people do it. If you have a sufficient understanding of your own brain and good introspective ability, you can probably outperform standard advice by developing your own plan. (Although I say that without even knowing what the standard advice is :p)
Our term for when we deliberately work to resolve “ownership” over some particular thought-output of our subconscious parallel processor is “annexing efference”. For example, during internal monologue, the thought “here’s a brilliant insight I just had” can appear in consciousness without volition being assigned yet, in which case one of us annexes that output (based on what seems associatively/narratively appropriate), or it goes unmarked. In the beginning, there were many cases where both of us tried to annex a thought at the same time, but such mix-ups are much rarer now.