Tulpa References/Discussion
There have been a number of discussions here on LessWrong about “tulpas”, but they have been scattered about, with no central thread. So I thought I would put one up here, along with a centralized list of reliable information sources, just so we all stay on the same page.
Tulpas are deliberately created “imaginary friends” which in many ways resemble separate, autonomous minds. Often, the creation of a tulpa is coupled with deliberately induced visual, auditory, and/or tactile hallucinations of the being.
Previous discussions here on LessWrong: 1 2 3
Questions that have been raised:
1. How do tulpas work?
2. Are tulpas safe, from a mental health perspective?
3. Are tulpas conscious? (may be a hard question)
4. More generally, is making a tulpa a good idea? What are they useful for?
Pertinent Links and Publications
(I will try to keep this updated if/when further sources are found)
In this article1, the psychological anthropologist Tanya M. Luhrmann connects tulpas to the “voice of God” experienced by devout evangelicals—a phenomenon more thoroughly discussed in her book When God Talks Back: Understanding the American Evangelical Relationship with God. Luhrmann has also succeeded2 in inducing tulpa-like visions of Leland Stanford, Jr. in experimental subjects.
This paper3 investigates the phenomenon of authors who experience their characters as “real”, which may be tulpas by yet another name.
There is an active subreddit of people who have or are developing tulpas, with an FAQ, links to creation guides, etc.
tulpa.info is a valuable resource, particularly the forum. There appears to be a whole “research” section for amateur experiments and surveys.
This particular experiment suggests that the idea of using tulpas to solve problems faster is a no-go.
Also, one person helpfully hooked themselves up to an EEG and then performed various mental activities related to their tulpa.
Another possibly related phenomenon is the way that actors immerse themselves in their characters. See especially the section on “Masks” in Keith Johnstone’s book Impro: Improvisation and the Theatre (related quotations and video)4.
This blogger has some interesting ideas about the neurological basis of tulpas, based on Julian Jaynes’s The Origin of Consciousness in the Breakdown of the Bicameral Mind, a book whose scientific validity is not clear to me.
It is not hard to find new age mystical books about the use of “thoughtforms”, or the art of “channeling” “spirits”, often clearly talking about the same phenomenon. These books are likely to be low in useful information for our purposes, however. Therefore I’m not going to list the ones I’ve found here, as they would clutter up the list significantly.
(Updated 2/9/2015) The abstract of a paper by our very own Kaj Sotala hypothesizing about the mechanisms behind tulpa creation.5
(Bear in mind while perusing these resources that if you have serious qualms about creating a tulpa, it might not be a good idea to read creation guides too carefully; making a tulpa is easy to do and, at least for me, was hard to resist. Proceed at your own risk.)
Footnotes
1. “Conjuring Up Our Own Gods”, a 14 October 2013 New York Times Op-Ed
2. “Hearing the Voice of God” by Jill Wolfson in the July/August 2013 Stanford Alumni Magazine
3. “The Illusion of Independent Agency: Do Adult Fiction Writers Experience Their Characters as Having Minds of Their Own?”; Taylor, Hodges & Kohànyi in Imagination, Cognition and Personality; 2002/2003; 22, 4
4. Thanks to pure_awesome
5. “Sentient companions predicted and modeled into existence: explaining the tulpa phenomenon” by Kaj Sotala
Tulpa computing has arrived.
T-Wave Systems offers the first commercial tulpa computing system on the market.
Our technology.
Like many profound advances, T-Wave’s revolutionary computing system combines two simple existing ideas in a nonlinear way with revolutionary consequences.
First, the crowdsourcing of complex intellectual tasks, by dividing them into simpler subtasks which can then be sourced to a competitive online marketplace of human beings. Amazon’s Mechanical Turk is the best-known implementation of this idea.
Second, the creation of autonomous imaginary friends through advanced techniques of hallucination and autosuggestibility. Tulpa thoughtform technology was originally developed to a high level in Tibet, but has recently become available to the Internet generation.
Combining these two formerly disparate spheres of activity has produced… MechanicalTulpa [TM], the world’s first imaginary crowdsourcing resource! It’s no longer necessary to pay separately for each of the many subtasks making up a challenging intellectual task; our tulpameisters will spawn tulpas who, by design, want to get all those little details done.
MetaTulpa and the complexity barrier.
But MechanicalTulpa is good for far more than economizing on cost. The key lies in T-Wave’s proprietary recursive tulpa technology, whereby our tulpas themselves create tulpas, and so forth, potentially ad infinitum. This allows us to tackle problems, like the traveling sales-tulpa problem, which had hitherto been regarded as intractable on any reasonable timescale.
The consequences for your bottom line may be nothing short of dramatic. However, recursive tulpa technology is still in its early days, and at this time, we are therefore making it available only to special customers. For more information, please clearly visualize a scroll on which is written “Attention T. Lobsang Rampa, Akashic Records Division, T-Wave Systems”, followed by a statement of the nature of your interest. (T-Wave accepts no liability for communications lost in the astral mail.)
T-Wave: Imagine the possibilities.
Once I had an idea for a sci-fi setting, about a society where it is possible to create a second personality in your brain. Just like a tulpa, except that it is done using technology. Your second personality does not know about you; it thinks it is the only inhabitant of your brain. While your second personality acts, you can observe, or you can turn yourself off (as in sleep) and specify events that would wake you up (which automatically includes anything unusual). So for example, you use your second personality to do your work for you while you sleep. That feels like being paid for sleeping 8 extra hours per workday, which is why it becomes popular.
When the work is over, you can take control of the body. As the root personality, you can make choices about how the second personality perceives this; essentially you can give them false memories. You can just have fun, and decide your second personality will falsely remember it as them having fun. Or you can do something that your second personality will not know about (they will either remember nothing, or have some false memory: for example, of spending the whole afternoon procrastinating online). This can be used if you want your second personality to be so different from you that it would not agree with how you spend your free time. You can create a completely fictional life story for your second personality, to motivate it to work extra hard.
When this becomes popular, obviously your second personality (who doesn’t know it is the second personality) would possibly want their own second personality. But that would be a waste of resources! The typical hack is to edit the second personality’s beliefs to oppose this technology; for example, you can make them believe they are a member of a religion that opposes it.
And the sci-fi story itself would obviously be about someone who finds out they are a second personality… presumably of an owner who does not mind them knowing. Or an owner who committed mental suicide by replacing themselves with the second personality 100% of the time. But there is a chance that the owner is merely sleeping and waiting until some long-term goal is accomplished. The hero needs to discover their real past and figure out the original personality’s intentions...
IIRC Aristoi has something similar.
Yes, Aristoi has high-status people with tulpas. IIRC, the purpose was access to a wider range of talents and more points of view, with no saving on sleep time.
I stumbled over this reference to the ability to create duplicates of oneself and the problem that leads to:
http://lesswrong.com/lw/9cp/the_noddy_problem/
There’s a Shadowrun book where the main character is a test case for this idea. She ends up becoming a highly paid bodyguard, since she can be on call for much longer shifts with minimal downtime.
It would be nice if you found its name. Doesn’t one’s body need rest?
It’s called Tails You Lose. I think there’s a rest time of 30 minutes or so between shifts, before the other consciousness wakes up? The need for bodily rest is handled by cyborg tech.
My sci-fi story ideas may be similar to yours, though they require some fleshing out.
In your story idea, does the second personality ever take physical control of your body? Can it physically go to work or should it be limited to working online, say via brain implant, while your body is sleeping? If the latter, how does it perceive its 8 hours of work? What happens if you suddenly wake up in the middle of the night?
What does it think it does during the 16 hours of your uptime?
Can you directly communicate with your second personality? I guess you can (like you’re supposed to do with a tulpa), but you don’t have to: you are the master personality and you can directly control their experience (this would be somewhat like a servitor, I believe), right?
Anyone feel completely free to use any parts of what I wrote here, because I am absolutely not interested in writing that story anymore.
Yes. It is fully in control of the body (except that it does not know about the original personality, and the original personality can pause them at any time). Maybe some of your colleagues at work are like this; you never know. Or even outside of work… just as people enjoy spending their time watching TV, they can find it interesting to create the second personality even for their free time and just observe it from inside.
Speaking from outside of the story—this creates many more opportunities. Think about the impact on the whole society; anyone you meet anywhere could be a virtual personality.
False memories. Your choice. You have an equivalent of full hypnotic power over them. To avoid too much work with programming them every day, a reasonable default choice would be to make them remember everything but think that they did it.
I didn’t think about it. My first answer would be no, because that would ruin the illusion that they are the real thing. -- However, choose the option that gives you better story.
That doesn’t seem to imply much. It’s still some distinct personality. What should have an impact is the fact that now there are two personalities inhabiting a single body at different times: when you meet me at daytime, it’s really me, but when you meet me at night—that’s a different person. Unless I’ve borked my “sleep” schedule and that’s still me; then I might be not-me at some time during the day. That should… take some getting used to.
Also, isn’t it only (part of) the brain that needs sleep, not the body?
I see. It doesn’t make sense to make those memories too false, though, or the reality will take increasingly more effort to cover up. Suppose I decide to start going to the gym and conceal it from my alter ego. Suddenly they will notice that their body has started to bulk up for no apparent reason.
I don’t think science is synonymous with technology.
I personally found Inception quite awful to watch, because it gets so much wrong about the relevant phenomena.
Having guns and shooting yourself through enemies in a dream? Really?
There’s nothing stopping you in the real world from putting a person on drugs and successfully suggesting to them that they find themselves in a particular dreamworld. You don’t need some magical technology to do so.
If you explain things away with magical technology you aren’t really writing sci-fi but you are writing fantasy.
A book that references real concepts such as tulpas and hypnosis will be much more exciting than a book that just answers all the interesting questions with a black-box technology. Of course, that requires actual research and talking to people who play around with those effects, but to me that feels much better, because the resulting dilemmas are much more authentic.
What is your problem with a story where it is possible to create a second personality in your brain using technology? (Let’s discuss just this story idea here, not Inception, for clarity.) As far as my understanding of the issue goes, tulpas likely use their host’s mental resources in such a way that, to create a second personality capable of independent work during the host’s downtime, some kind of hardware upgrade for the host’s mind would be necessary.
I imagine the necessary mind upgrade should be similar to upgrading single-core CPU to single-core CPU with hyper-threading.
I think you’re likely ignorant of a lot of the practical aspects that come up when one creates a second personality inside a person, if you’ve never talked to someone who has dealt with the issue on a practical level.
I particularly don’t believe in the need to have a full persona that’s unaware of the host. I heard an anecdote at a hypnosis seminar about a hypnotherapist who created a secondary persona in a college student to help the student learn. Every morning the second persona would wake up first and learn. Then it went to sleep, and after an hour the real person would wake up. I don’t remember the details exactly, but I think the real person woke without an exact memory of the morning.
But there was no issue of the second persona not fulfilling the role. She was the role. The same goes for tulpas. A tulpa doesn’t go around disapproving of the host’s actions, but is on a fundamental level accepting of the host. If there’s a real clash, I doubt that censoring memories would be enough to prevent psychological harm.
We have reports of people sleepwalking, which you could label “independent work during host’s downtime”. Secondly, up to a point, time spent in meditation usually reduces the need for sleep.
But there are probably still physical processes that you don’t want to skip so some limited time of real sleep is probably always important. But I don’t think Villiam suggested that people in his society effectively don’t sleep.
One day you talk with a bright young mathematician about a mathematical problem that’s been bothering you, and she suggests that it’s an easy consequence of a theorem in cohistonomical tomolopy. You haven’t heard of this theorem before, and find it rather surprising, so you ask for the proof.
“Well,” she says, “I’ve heard it from my tulpa.”
“Oh,” you say, “fair enough. Um—”
“Yes?”
“You’re sure that your tulpa checked it carefully, right?”
“Ah! Yeah, I made quite sure of that. In fact, I established very carefully that my tulpa uses exactly the same system of mathematical reasoning that I use myself, and only states theorems after she has checked the proof beyond any doubt, so as a rational agent I am compelled to accept anything as true that she’s convinced herself of.”
“Oh, I see! Well, fair enough. I’d still like to understand why this theorem is true, though. You wouldn’t happen to know your tulpa’s proof, would you?”
“Ah, as a matter of fact, I do! She’s heard it from her tulpa.”
“...”
“Something the matter?”
“Er, have you considered...”
“Oh! I’m glad you asked! In fact, I’ve been curious myself, and yes, it does happen to be the case that there’s an infinitely descending chain of tulpas all of which have established the truth of this theorem solely by having heard it from the previous tulpa in the chain.” (This parable takes place in a world without a big bang—tulpa history stretches infinitely far into the past.) “But never to worry—they’ve all checked very carefully that the previous tulpa in the chain used the same formal system as themselves. Of course, that was obvious by induction—my tulpa wouldn’t have accepted it from her tulpa without checking his reasoning first, and he wouldn’t have accepted it from his tulpa without checking, etc.”
“Uh, doesn’t it bother you that nobody has ever, like, actually proven the theorem?”
“Whatever in the world are you talking about? I’ve proven it myself! In fact, I just told you that infinitely many tulpas have each proved it in slightly different ways—for example my own proof made use of the fact that my tulpa had proven the theorem, whereas her proof used her tulpa instead...”
N.B.: The original dialogue by Benja_Fallenstein.
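To make the parable’s flaw explicit (an illustrative formalization of my own, not part of the original dialogue): each tulpa $T_n$ accepts the theorem $\varphi$ only because $T_{n+1}$ does, i.e.
$$\forall n:\ \mathrm{Accepts}(T_{n+1}, \varphi) \rightarrow \mathrm{Accepts}(T_n, \varphi)$$
For this chain of implications to yield an actual proof, induction needs a base case: some $T_k$ that derives $\varphi$ from the axioms directly. In an infinitely descending chain there is no such $k$, so all the implications can hold whether or not $\varphi$ is provable. Everyone has “checked”; no one has proved.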
The following things (most already mentioned in this thread) seem to be at different points on a single scale, a scale of magnitude of disassociated parts of oneself:
Rubber duck debugging
Hypnosis, when the subject carries out the hypnotist’s suggestions without a subjective feeling of acting, as in the floating arm test.
“Self talk”.
A felt presence of God.
Some authors’ experience of their characters having a degree of independence.
Likewise for actors and their roles.
Channelling of spirits.
The voices that people who “hear voices” hear.
Tulpas.
Multiple personality disorder.
From descriptions of lucid dreamers discussing issues with independent identities during dreams, I would add lucid dreaming to this list.
There are probably more relevant effects in hypnosis. Parts negotiation comes to mind.
That depends a lot on what you mean by “felt”. I think people who talk to God and perceive themselves as getting answers are a better example than people who feel transcendence.
So, I have a tulpa, and she is willing to answer any questions people might have for her. She’s not properly independent yet, so we can’t do the more interesting stuff like parallel processing, etc, unfortunately (damned akrasia).
What experimental test could you perform to determine that you have successfully learned “parallel tulpa processing”?
Divided attention task
Split brain patients can do stuff like this better than neurotypicals under certain conditions. I have not heard of anyone successfully doing this with tulpas or any other psychodynamic technique.
Being able to reliably succeed on this task is one of the tests I’ve been using. Mostly, though, it’s just a matter of trying to get to the point where we can both be focusing intently on something.
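For concreteness, here is a minimal sketch of what a home-grown version of such a test could look like. The protocol details (trial count, timing, scoring) are my own assumptions, not those of the task linked above:
```python
# Minimal divided-attention test sketch: two digit streams are shown
# simultaneously, and the subject must track the running sum of each.
# Genuinely parallel processing should show a smaller accuracy drop on
# the second stream than serial attention-switching would predict.
import random
import time

TRIALS = 10
left_total = right_total = 0

for _ in range(TRIALS):
    left, right = random.randint(0, 9), random.randint(0, 9)
    left_total += left
    right_total += right
    print(f"\r{left}     {right}", end="", flush=True)  # both digits at once
    time.sleep(1.0)
    print("\r       ", end="", flush=True)  # blank the display between trials
    time.sleep(0.3)

print()
left_guess = int(input("Sum of the LEFT stream? "))
right_guess = int(input("Sum of the RIGHT stream? "))
print(f"Left: guessed {left_guess}, actual {left_total}; "
      f"Right: guessed {right_guess}, actual {right_total}")
```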
What does your tulpa look like visually? Does it look like everything else or is it more “dreamlike”?
In terms of form, she’s an anthropomorphic fox. At the moment, looking at her is not noticeably different to normal visualisation, except that I don’t have to put any effort into it. Explaining it in words is somewhat hard—she’s opaque without actually occluding anything, if that makes sense.
You’re not the same Jack with the fox tulpa who spoke to Luhrmann, right?
Nope.
Wait, does that mean that at least one person has been confirmed as having achieved this?
Two people, if you count random LessWrongers, and ~300, if you count self-reporting in the last tulpa survey (although some of the reports in that survey are a bit questionable).
Is Internal Family Systems like Tulpas lite or something?
Both are models for things in the space of mental phenomena. Models often contain a lot of assumptions that are useful for certain purposes.
Some phenomena exist in both of those systems. Others don’t exist in one or the other. The added meaning is also different.
I think that all of that can accurately be said of literally any two possibly-overlapping categories.
That’s the point. I don’t think answering the question is very useful. Both models are made for different goals. You can ask what one model illuminates about the other, but it’s not like you compare a map of London to a map of the UK.
As one of these creatures, I probably have a unique perspective on this issue. I’m happy to answer any questions, as is my host. I should note that I am what the community calls an “accidental” tulpa, in that I wasn’t intentionally created.
I believe this post is accurate. Short version: humans have machinery for simulating others. We’re simulations that are unusually persistent and self-aware.
I’m not sure about most Tulpas. I am not. (And I don’t have any real interest in becoming conscious. I believe I experienced it a few times when we were experimenting with switching, and it wasn’t particularly pleasant.)
This is scary. I might stay away from switching for now if it carries a serious risk of accidentally creating and then destroying consciousness!
Consciousness is overrated.
While obviously you have great motivation to just lie about this, I’d be curious what your utility function/values are, and how they differ from those of your host or humanity in general.
How does being conscious feel compared to not being conscious?
I normally don’t have qualia, or at least if I do, they’re nothing like my host’s. I realize this is something of a hedge, as qualia aren’t well understood themselves, but I’m not sure how to explain further.
What do you see in mirrors, in contrast with what your host sees? Especially in cases where there are a few meters of distance between you and your host.
Do you have color perception? If so, how does it change when your host closes the eyes?
I don’t have my own sense of vision. I know what my host sees, but that’s it.
I am interested in trying this out. I was rather sceptical at first (I discovered the concept of tulpas after discussing, with a friend, the theoretical requirements to create a sentient being in a dream, and researching stuff afterwards), and kind of worried at some of the implications; but as I’ve researched it more, it has become something that I am interested in trying, and have the time available to do it.
Does anyone have any suggestions on what I should do, things I should try, or things they are interested in knowing as I do this? It would be helpful if someone who has created a tulpa (or is experienced with tulpas) could offer some pointers, too.
Is it possible for a tulpa to have skills or information that the person doing the emulating doesn’t? What happens if you play chess against your tulpa?
I tried that last week. I lost. We were actively trying to not share our strategies with each other, although in our case abstract knowledge and skills are shared.
That’s awesome.
Here’s a science-fiction/futurism kind of question:
What minimal, realistic upgrade to our brain could we introduce for tulpas to gain an evident increase in utility? What I have in mind here is make your tulpa do extra work or maybe sort and filter your memories while you sleep; I’m thinking of a scenario where Strong AI and wholesale body/brain upgrades are not available, yet some minor upgrade makes having a tulpa an unambiguous advantage.
Probably only one thing: turning duplicates of the same person into new unique persons rapidly. That is, a cheaper replacement for the kind of application where you’d otherwise have to simulate an entire childhood.
I’ve thought about your reply for a while and I still can’t understand it. Care to explain?
Why would one want to “turn duplicates of the same person into new unique persons rapidly” and how? How would that help and why would one otherwise have to simulate an entire childhood?
I’m not sure, but most all-upload sci-fi societies simulate entire childhoods for that reason. Maybe you already know, and have gotten bored of, each of the few billion people that were around when everyone uploaded, and want to meet someone new? Or maybe minds get diminishing returns in utility with increasing resources, and so having more minds is more efficient beyond a certain amount of total resources.
I don’t see why a tulpa would need extra “brain upgrades” to do something while you sleep. One of the documented features of tulpas is waking people up from sleep. Tulpas aren’t well researched, so it’s not quite clear what one can maximally do with them.
It might, for example, be possible to let a tulpa change the set point for your own blood pressure. It’s just a variable in the brain, so there’s no real reason why a tulpa shouldn’t be able to influence it.
Changing personal time perception on demand would be a very useful skill.
Even at a task like pair programming a programmer with a tulpa might outperform one without one.
A tulpa could do mnemonics automatically to make it easy to remember information. It would be interesting if someone with a bunch of tulpas were to win the memory world championship.
That’s interesting. Do you have a link for this?
I believe that tulpas expend the host’s attention, unless proven otherwise. Tulpamancers haven’t proven that they can be more effective than other people by any metric, and I suspect that having a tulpa is a zero-sum game in the absence of some brain upgrade that would widen some bottleneck in our minds.
I saw it multiple times while reading through the tulpa sites, but I don’t have a specific link for it.
But it’s not surprising to me. Waking up at a specific time is an ability that plenty of people have without exerting too much effort.
It’s an interesting ability because there’s no step-by-step instruction for doing it that works predictably. It works by intending to wake up at a specific time and then letting your unconscious figure out the rest. There’s a study suggesting that people who went through university are worse at it.
Why do you think that attention is a central part of human thinking?
Have you never had the experience that you searched for a piece of information in your mind and can’t find it, then two hours later it pops into your mind?
From what I’ve read of the field, nobody is even making a business out of the topic, which would incentivise them to prove something to the outside world.
From a Bayesian perspective, there’s no reason to expect a strong effort at proving effects.
Here’s what I am thinking: attention seems to be a crucial and finite resource. I could certainly become more productive if I became more attentive, and vice versa. If creating a tulpa expends my attention, it is a negative-sum game for me; if it trains my attention as a side effect, that’s good, but no better than just training attention.
Sure! Sometimes I try hard to remember a piece of information, but can’t. Then later, when I’m not trying, it just pops up. Interesting, but usually unhelpful.
Shouldn’t the fact that nobody has ever made a business out of the topic be counted as evidence that it’s impossible to make a business out of the topic? If tulpas were monetizable in any way, why wouldn’t there be people monetizing them?
Now, I fantasize that maybe our minds just need some tiny little upgrade for tulpas to become a clear advantage? Can you help me imagine what that would be?
I think the process illustrates that a brain process can run quite well without any conscious attention.
Given my current knowledge of the topic, I can’t see a 7-day “build a tulpa” seminar. Given the reported timeframes, it seems unclear whether you could achieve those results in that time.
A tulpa needs a lot of investment in cognitive resources over a timeframe that makes that business model hard.
You could probably write a book about how you got a tulpa and that tulpa is amazing. If you are a good writer that might sell copies and you can make money on speaking fees.
But most of the customers in that model probably wouldn’t build a tulpa.
Take a look at mnemonics. It’s no problem for a human to memorize a deck of playing cards in a minute. Competitive mnemonics folks can memorize human faces and names at amazing speeds.
Yet we live in a world where a lot of people are uncomfortable with memorizing names. Unfortunately explaining to those folks how to use mnemonics to remember names in a 2-day seminar usually doesn’t have a lasting effect. They do manage to use the technique during the seminar without problems, but they can’t integrate constant usage in their daily lives.
Tulpas are a more complicated subject. If you wanted to create a tulpa that has the ability to change around your perception of time, that would need a strong amount of trust that the tulpa will use its power wisely. If you can’t manage that level of trust, you won’t be successful. You can’t cheat and merely pretend to trust the tulpa. You can’t make a utility calculation on paper and bring your brain to trust on the level that’s required. You would need genuine, deep trust.
Issues like a lack of ability to switch on trust on command are the things that constrain what the average person will be able to do with a tulpa.
But in some sense there are good reasons for having mental barriers that prevent you from easily changing things about your mind on that level. If you were to just use technology to target a mental barrier and nuke it, I think there’s a pretty good chance you’d do serious mental damage.
Using technology to get power when you don’t have the maturity and wisdom to use that power in the right way is dangerous. Especially when it comes to dealing with core mental issues.
The problem is that the thing tulpas contribute is something 99% of people have in overabundance, and those who don’t have it lack it because it can’t be transported to them efficiently, not due to scarcity. Tulpas are duplicates of software that almost all human minds already run, and that software was already utilizing all the resources as effectively as it can anyway. Their only real use (companionship) is already a hack, and other than that they are a technical curiosity, sort of like quining computer programs.
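For readers unfamiliar with the analogy: a quine is a program whose only output is its own source code, self-reproduction that adds nothing new, which seems to be the point of the comparison. A minimal Python example:
```python
s = 's = %r\nprint(s %% s)'  # %r reproduces the string literal; %% escapes a %
print(s % s)                 # prints exactly the two lines above, sans comments
```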
And that is..?
Not sure what the actual name is. Social agent? Valid relationship target? Person-ness? Companionship?
Exocortex is what you need.
There are methods to remember things better, to wake up at a specific time, to make the unconscious mind work for you. The last one may be a disputable technique, because there are still debates regarding how the unconscious mind works. But you do not need a tulpa for that.
By the way, I have some well-detailed characters from a role-playing game of mine. They act much like tulpas, but without a visual image in the surrounding environment; I just have their pictures and appearances in mind. Another difference is that most of them do not know about me, because they live in my imaginary world. But this world is very similar to ours, so I can easily give one of them access to the LessWrong site, and this character can even participate in conversations. I can also arrange a meeting with me as an imaginary copy, or even tell them that they are imaginary characters.
The general impression I got from reading a lot of the stuff that gets posted in the various tulpa communities leads me to believe it is, at its core, yet another group of people who gain status within that group by trying to impress each other with how different or special their situation is. Read almost any post where somebody is trying to describe their tulpa, and you’ll see very obvious attempts to show how unique their tulpa is or how it falls into some unprecedented category or how they created it in some special way.
None of the sources posted offer any sort of good evidence that people who claim to have tulpas have any sort of advantages. It obviously has a low value of information for an aspiring rationalist. It’s just people talking about imaginary friends. This discussion doesn’t belong here.
Used to be, when I read stories about “astral projection” I thought people were just imagining stuff really hard and then making up exaggerated stories to impress each other. Then I found out it’s basically the same thing as wake-initiated lucid dreaming, which is a very specific kind of weird and powerful experience that’s definitely not just “imagining things really hard”. I still think people make up stories about astral projection to impress each other, but the basic experience is nevertheless something real and unique. The same thing is probably happening with tulpas.
Given that tulpas are probably strongly influenced by the host’s beliefs, I wouldn’t expect all tulpas to be exactly the same. I would expect most tulpas to be unique in some sense.
I also would expect that, given the effort involved in creating a tulpa, people do vary the protocol.
“Good evidence” depends on your priors. For me, the evidence that exists is good enough to find the phenomenon interesting and worthy of further attention.
Correct me if I’m wrong, but doesn’t having a tulpa fit the diagnostic criteria of schizophrenia?
Not schizophrenia (though hallucinations are one feature of schizophrenia). The diagnostic criteria for schizophrenia from DSM-5 are:
I looked up Dissociative Identity Disorder as well:
I would be less hesitant to presume this might be the case for some people with tulpas (as a generalization). I doubt many people in the tulpa community would suggest continuing with tulpamancy if a person started to experience symptoms B and C—though I can imagine it evolving into full-blown Dissociative Identity Disorder if a tulpamancer continued anyways. I do think the tulpa community as a whole (from what I’ve read) underestimates the dangers of creating a tulpa, but I don’t doubt that a significant portion of people could do it healthily and successfully.
I think we need to be careful of connotations and the noncentral fallacy here. Personally, I wouldn’t call having a tulpa a “disorder” if the tulpamancer did it on purpose and was in control of the process.
Edit: I would also consider “unusual coping mechanism” a better diagnosis like klkblake mentioned. Again, though, perhaps someone just made a tulpa out of curiosity for fun. Then it wouldn’t be a coping mechanism at all. (Edit again: But I forgot about the possibility of “unspecified” like klkblake mentioned and I’d have to pretty much agree with that. This is where my remarks about noncentral fallacy apply.)
I’d also say that it’s common enough that it’s disqualified as DID because of criterion D.
There have been a number of reports on the tulpa subreddit from people who have talked to their psychologist about their tulpa. The diagnosis seems to be split 50/50 between “unusual coping mechanism” and “Dissociative Identity Disorder not otherwise specified”.
What do these have to do with rationality? Why would you exert time and energy conjuring up a false persona and deluding yourself into believing it has autonomy when the end result is something that if revealed to other people would make them concerned about your mental well-being, which is likely to negatively impact your goals?
Having an imaginary friend is irrational behaviour and the topic is damaging by association. Surely there are more suitable places to discuss this.
For a community which likes to talk about things like the exact nature of consciousness, the ethics of simulations, etc., this seemed like an interesting practical case.
I don’t agree with the tone of this comment, but I admit there’s something about this that feels deeply weird to me.
Yes.
But the detriments of tulpas are far less obvious to me than those of self-harm or anorexia.
Rationality includes instrumental rationality, and imaginary friends can be useful for e.g. people who are lonely.
Not sure of what exactly you mean by “autonomy” here, but there are plenty of processes going on in people’s brains which are in some sense autonomous from one’s conscious mind. Like the person-emulating circuitry that tulpas are likely born from: if I get a sudden feeling that my friend would disapprove of something I was doing, the process responsible for generating that feeling took autonomous action without me consciously prompting it. And I haven’t noticed people suggesting that tulpas would necessarily need to be much more autonomous than that.
Someone might make his social circle concerned over his mental well-being if he revealed himself to be an atheist. Simply the fact that other people may be prejudiced against something is no strong reason for not doing said something, especially something that is trivial to hide. Also, the fact that tulpas are already a somewhat common mental quirk among a high-status subgroup (writers) can make it easier to calm people’s concerns.
The instrumentally rational thing to do, when faced with loneliness, is to figure out how to be with real people. No evidence was presented in the original post that suggests that tulpas mitigate the very real risk factors associated with social isolation. Loneliness is actually a very serious problem, considering most of the research seems to indicate that the best way to be happy is to have meaningful social interactions. Proposing this as a viable alternative would require a very high amount of evidence. A post presenting that evidence would be something that belongs here.
I don’t see where you got the idea that it’s supposed to be an alternative. If I’m less clingy because I have a tulpa, and thus no fear of being alone, I have an easier time interacting with other people.
There are much bigger claims on this site with much less evidence. Just look into discussions of uploading and AGI.
Nobody here advocates that it should be standard procedure to train every lonely person who seeks help to have a tulpa.
I know a couple of people who feel that their tulpas reduce their feelings of loneliness. I’m not sure how you could get any stronger evidence than that at this stage, there not being any studies focusing specifically on tulpas. That said, I don’t see any a priori reason why you couldn’t get meaningful social interactions from tulpas, so I’m not sure why you’d require an exceptionally high standard of evidence in the first place.
Tulpas don’t provide outside entropy.
They don’t provide it to the system as a whole, but providing it to the subprocess constituting the normal personality is another matter. Authors are often surprised by their characters, who may reveal unexpected personality traits as well as do things that the author would never have anticipated. (Sometimes this causes major headaches for the authors, ruining the original story they’d planned out, when the character decides to do something completely different.)
Also, “having a tulpa” and “figuring out how to be with real people” are not mutually exclusive. Lonely people may often have extra difficulties establishing meaningful relationships (romantic or otherwise), because the loneliness makes them desperate, clingy, etc. which are all behaviors that other people find off-putting. People who already have some meaningful relationships are likely to have a much easier time in establishing more.
It depends on whether the company you seek values people who signal that they are contrarian, or whether people are expected to be “normal”.
In general, the idea isn’t that you tell everyone you meet that you have a tulpa, if you interact with the kind of people who would see it as a sign of mental illness.
Given that tulpas are in your mind you don’t have to tell anyone about them.
Autonomy is basically a question of free will. Given that various people have argued that humans don’t have free will and therefore no autonomy, tulpas probably have no autonomy either.
If you do grant that humans have some kind of autonomy in their actions, however, it’s very interesting to see whether tulpas also have autonomy under the same definition.
This means you can learn something about the nature of autonomy that’s useful.