A tulpa is an “imaginary friend” (a vivid hallucination of an external consciousness) created through intense prolonged visualization/practice (about an hour a day for two months). People who claim to have created tulpas say that the hallucination looks and sounds realistic. Some claim that the tulpa can remember things they’ve consciously forgotten or is better than them at mental math.
Not sure whether this is actually possible (I’d guess it would be basically impossible for the 3% of people who are incapable of mental imagery, for instance); many people on the subreddit are unreliable, such as occult enthusiasts (who believe in magick and think that tulpas are more than just hallucinations) and 13-year-old boys.
If this is real, there’s probably some way of using this to develop skills faster or become more productive.
As someone with a tulpa, I figure I should probably share my experiences. Vigil has been around since I was 11 or 12, so I can’t effectively compare my abilities before and after he showed up.
He has dedicated himself to improving our rationality, and has been a substantial help in pointing out fallacies in my thinking. However, we’re skeptical that this is anything a more traditional inner monologue wouldn’t figure out. The biggest apparent benefit is that being a tulpa allows him a greater degree of mental flexibility than me, making it easier for him to point out and avoid motivated thinking. Unfortunately, we haven’t found a way to test this.
I’m afraid he doesn’t know any “tricks” like accessing subconscious thoughts or super math skills.
While Vigil has been around for over a decade, I only found out about the tulpa community very recently, so I know very little about it. I also don’t know anything about creating them intentionally, he just showed up one day.
If you have any questions for me or him, we’re happy to answer.
...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?
...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?
This is strikingly similar to Epictetus’ version of Stoic meditation whereby you imagine a sage to be following you around throughout the day and critiquing your thought patterns and motives while encouraging you towards greater virtue.
I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself “Dijkstra would not have liked this”, well, that would be enough immortality for me.
Tulpas, especially as construed in this subthread, remind me of daimones in Walter Jon Williams’ Aristoi. I’ve always thought that having / being able to create such mental entities would be super-cool; but I do worry about detrimental effects on mental health of following the methods described in the tulpa community.
Well, wait. Is there some way of flagging “potentially damaging information that people who do not understand risk-analysis should NOT have access to” on this site? Because I’d rather not start posting ways to hack your wetware without validating whether my audience can recover from the mental equivalent of a SEGFAULT.
In my position, I should experiment with very few things that might be unsafe over the course of my total lifetime. This will probably not be one of them, unless I see very impressive results from elsewhere.
To help others understand the potential risks, the creation of a ‘tulpa’ appears to involve hacking the way your sense-of-self (what current neuroscience identifies as a function of the right inferior parietal cortex) interacts with your ability to empathize and emulate other people (the so-called mirror neuron / “put yourself in others’ shoes” modules). Failure modes involve symptoms that mimic dissociative identity disorder, social anxiety disorder, and schizophrenia.
I am absolutely fascinated, although given the lack of effect that any sort of meditation, guided visualisation, or community ritual has ever had on me, I doubt I would get anywhere. On the other hand, not being engaged in saving the world and its future, I don’t have quite as much at risk as Eliezer.
A MEMETIC HAZARD warning at the top might be appropriate, as is requested for basilisk discussion.
That’s a good idea, thanks.
Note that my host’s posting has significant input from me, so this account is only likely to be used for disagreements and things addressed specifically to me.
...many people argue for (their) god by pointing out that they are often “feeling his presence” and since many claim to speak with him as well, maybe that’s really just one form of tulpa without the insight that it is actually a hallucination.
Surely that’s not how most people experience belief, but I never really considered that some of them might actually carry around a vivid invisible (or visible for all I know) hallucination quite like that. Could explain why some of the really batshit crazy ones going on about how god constantly speaks to them manage to be quite so convincing.
From now on my two tulpa buddies will be Eliezer and an artificial intelligence engaged in constant conversation while I make toast, love, and take a shower. Too bad they’ll never be smarter than me though.
I’ve had paracosms since before he was around, and we go to those sometimes. I’ve also got a “peaceful place” that I use to collect myself, but I use it much more than he does.
I would think there should be a general warning against deliberately promoting the effects of dissociative identity disorder etc, without adequate medical supervision.
I really doubt that tulpas have much to do with DID, or with anything dangerous for that matter. Based on my admittedly anecdotal experience, a milder version of having them is at least somewhat common among writers and role-players, who say that they’re able to talk to the fictional characters they’ve created. The people in question seem… well, as sane as you get when talking about strongly creative people. An even milder version, where the character you’re writing or role-playing just takes a life of their own and acts in a completely unanticipated manner, but one that’s consistent with their personality, is even more common, and I’ve personally experienced it many times. Once the character is well-formed enough, it just feels “wrong” to make them act in some particular manner that goes against their personality, and if you force them to do it anyway you’ll feel bad and guilty afterwards.
I would presume that tulpas are nothing but our normal person-emulation circuitry acting somewhat more strongly than usual. You know those situations where you can guess what your friend would say in response to some comment, or when you feel guilty about doing something that somebody important to you would disapprove of? Same principle, quite probably.
This article seems relevant (if someone can find a less terrible pdf, I would appreciate it). Abstract:
The illusion of independent agency (IIA) occurs when a fictional character is experienced by the person who created it as having independent thoughts, words, and/or actions. Children often report this sort of independence in their descriptions of imaginary companions. This study investigated the extent to which adult writers experience IIA with the characters they create for their works of fiction. Fifty fiction writers were interviewed about the development of their characters and their memories for childhood imaginary companions. Ninety-two percent of the writers reported at least some experience of IIA. The writers who had published their work had more frequent and detailed reports of IIA, suggesting that the illusion could be related to expertise. As a group, the writers scored higher than population norms in empathy, dissociation, and memories for childhood imaginary companions.
The range of intensities reported by the writers seems to match up with the reports in r/Tulpas, so I think it’s safe to say that it is the same phenomenon, albeit achieved via slightly different means.
Some interesting parts from the paper regarding dissociative disorder:
The subjects completed the Dissociative Experiences Scale, which yields an overall score, as well as scores on three subscales:
Absorption and changeability: people’s tendency to become highly engrossed in activities (items such as “Some people find that they become so involved in a fantasy or daydream that it feels as though it were really happening to them”).
Amnestic experiences: the degree to which dissociation causes gaps in episodic memory (“Some people have the experience of finding things among their belongings that they do not remember buying”).
Derealisation and depersonalisation: things like “Some people sometimes have the experience of feeling that their body does not belong to them”.
The subjects scored an overall mean score of 18.52 (SD 16.07), whereas the general population scores a mean of 7.8, and a group of schizophrenics scored 17.7. Scores of 30 are a commonly used cutoff for “normal” scores. Seven subjects exceeded this threshold. The mean scores for the subscales were:
Absorption and changeability: 26.22 (SD 14.65).
Amnestic experiences: 6.80 (SD 8.30).
Derealisation and depersonalisation: 7.84 (SD 7.39).
The latter two subscales are considered particularly diagnostic of dissociative disorders, and the subjects did not differ from the population norms on these. Each subscale had only one subject scoring over 30 (not the same subject).
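To put those numbers in perspective, here is a quick back-of-the-envelope comparison (my own, not from the paper; treat it as a sketch, since a normal approximation is crude for a bounded, skewed scale like the DES):

    from statistics import NormalDist

    writers_mean, writers_sd, n = 18.52, 16.07, 50  # figures reported above
    population_mean = 7.8
    cutoff = 30  # the commonly used "normal" cutoff mentioned above

    # How far the writers' mean sits above the population norm,
    # in units of the writers' own standard deviation.
    d = (writers_mean - population_mean) / writers_sd
    print(f"standardized difference: {d:.2f} SD")  # ~0.67

    # Expected fraction over the cutoff if scores were normally
    # distributed, versus the 7 of 50 actually observed; the gap
    # hints at a skewed distribution rather than a shifted normal one.
    p_over = 1 - NormalDist(writers_mean, writers_sd).cdf(cutoff)
    print(f"expected over cutoff: {p_over:.0%}, observed: {7/n:.0%}")

So the writers sit about two-thirds of a standard deviation above the population norm overall, while staying at population norms on the two clinically diagnostic subscales.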
What I draw from this: Tulpas are the same phenomenon as writers interacting with their characters. Creating tulpas doesn’t cause other symptoms associated with dissociative disorders. There shouldn’t be any harmful long-term effects (if there were, we should have noticed them in writers). That said, there are some interactions that some people have with their tulpas that are outside the range (to my knowledge) of what writers do:
Possession
Switching
Merging
The tulpa community generally endorses the first two as being safe, and claims the last to be horribly dangerous, reliably ending in insanity and/or death. I suspect the first one would be safe, but would not recommend trying any of them without more information.
(Note: This is not my field, and I have little experience with interpreting research results. Grains of salt, etc.)
Very few people have actually managed switching, from what I have read. I personally do not recommend it, but I am somewhat biased on that topic.
Merging is a term I’ve rarely heard. Perhaps it is favored by the more metaphysically minded? I’ve not heard good reports of this, and all I have heard of “merging” came from a very few individuals well known to be internet trolls on 4chan.
Really? I had the impression that switching was relatively common among people who had their tulpas for a while. But then, I have drawn this impression from a lot of browsing of r/Tulpa, and only a glance at tulpa.info, so there may be some selection bias there.
I heard about merging here. On the other hand, this commenter seems to think the danger comes from weird expectations about personal continuity.
Thank you for the references. Whilst switching may indeed be relatively common among people who have had their tulpas for a long while, the actual numbers are still small—44 according to a recent census.
Ah, so merging is some sort of forming a gestalt personality? I’ve no evidence to offer, only stuff I’ve read, and I find the authors somewhat questionable sources.
As someone who both successfully experimented with tulpa creation in his youth, and who has since developed various mental disorders (mostly neuroticisms involving power- and status-mediated social realities), I would strongly second this warning. Correlation isn’t causation, of course, but at the very least I’ve learned to adjust my priors upwards regarding the idea that Crowley-style magickal experimentation can be psychologically damaging.
I think tulpas are more like schizophrenia than dissociative identity disorder. But now that you mention it, dissociative identity disorder does look like fertile ground for finding more munchkinly ideas.
For instance, at least one person I know has admitted to mentally pretending to be another person I know in order to be more extroverted. Maybe this could be combined with tulpas, say by visualizing/hallucinating that you’re being possessed by a tulpa.
I’ve always pretended to be whatever I needed to be in order to get whatever skill I’ve needed. I just call it “putting on hats”. I learned to dance by pretending to be a dancer, I learned to sing by pretending to be a singer. When I teach, I pretend to be a teacher, and when I lead I pretend to be a leader (these last two actually came a lot easier to me when I was teaching hooping than now when I’m teaching rationality stuff, and I haven’t really sat down to figure out why. I probably should, though, because I am significantly better at something when I can pretend to be it. And I highly value being better at these specific skills right now.)
I had always thought everyone did this, but now I see I might be generalizing from one example.
I learnt skills in high-school acting class that I use daily in my job as a teacher. It would be a little much to say that I’m method acting when I teach—I am a teacher in real life, after all—but my personality is noticeably different (more extroverted, for one thing). It’s draining, however; that’s the downside.
Technically, making a tulpa would be considered DDNOS, except that the new definition exempts shamanistic practices. Making tulpas is a shamanistic meditation technique practiced in Tibet for the purposes of self-discovery. It takes years of focused practice and concentration, but self-hypnosis can help some.
This modern resurgence of tulpas seems to be trying to find faster ways to make them, with less than years of effort. The evidence for success in this is so far anecdotal. I would advise caution—this is not something that would suit everyone.
I have made tulpas in the past. I’ve some that are decades old. I will say that seems to be rare so far. Also, in my observation, tulpas become odd after decades, acquiring just as many quirks as most humans have. I personally don’t think that there is as much risk of insanity as people think, but I would err on the side of caution myself.
It’s interesting that daemons in computer science are called what they are. They have exactly the same functionality as the demons that occult enthusiasts claim to use.
Even if you don’t believe in the occult, be aware that our culture has a lot of stories about how summoning demons might be a bad idea.
You are moving in territory where you don’t have mainstream psychology knowledge to guide you and show you where the dangers lie.
You are left with the mental framework of occult defense against evil forces. It’s the only knowledge you can access for guidance.
Having to learn to protect yourself against evil spirits when you don’t believe in spirits is quite messed up.
I had an experience where my arm moved around if I didn’t try to control it consciously after doing “spirit healing”. I didn’t believe in spirits and was fairly confident that it was just my brain doing weird stuff. On the other hand, I had to face the fact that the brain doing weird stuff might not be harmless. Fortunately the thing went away after a few months with the help of a person who called it a specter without me saying anything specific about it.
You can always say: “Well, it’s just my mind doing something strange.” At the same time it’s a hard confrontation.
Even if you don’t believe in the occult, be aware that our culture has a lot of stories about how summoning demons might be a bad idea.
Isn’t this more like, our (human) culture has a ton of instances when “summoning” “demons” is encouraged, and Christianity didn’t like it and so …demonized...it?
A lot of New Age folk put quite a lot of emphasis on respect and love instead of forcing entities to do something.
Asking a god for a favor isn’t the same thing as ordering an entity to do a particular task. Daemons get ordered to fulfill tasks.
If you look at those tulpa creation guides, they basically say: treat your tulpa nicely and it will help you to the extent that it wants.
They advocate against manipulating the tulpa into doing what you want.
Really? From what I’ve read, the folks who claim that this “tulpa” stuff is possible also say that you can create “servitors”, which are not conscious and are basically portions of your mind that can perform mental tasks without distracting you.
I dunno...I really don’t understand why no one in this community has bothered to test this sort of thing. It’s fairly easy to make a test of divided attention to see if someone has successfully created a partially separate entity which can operate autonomously.
I don’t have a tulpa, and I tried the second test and was unable to keep track of both lines of dots; at best I could get one side perfectly and guess at the other side. If I create a tulpa at any point, I’ll check if that result changes.
ETA: I tried the second test again, but counted the red ones as 1,2,3,… and the blue ones as A,B,C,… then I calculated what number the letter corresponded to. I got an almost perfect score, so a tulpa is not necessary to do well on this test. I’m not sure what sort of test could rule out this method; I have seen an auditory test which was two simultaneous dual-n-back tests.
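For anyone who wants to tinker, here is a minimal sketch of this kind of divided-attention test in Python (my own toy version, not the linked test; the flash probability and timing are arbitrary choices):

    import random
    import time

    def divided_attention_test(ticks=20, p=0.4, interval=0.7):
        # One trial: the left and right streams each flash independently
        # on every tick; the subject tries to keep both running counts.
        left = right = 0
        for _ in range(ticks):
            l = random.random() < p  # does the left stream flash this tick?
            r = random.random() < p  # does the right stream flash this tick?
            left += l
            right += r
            # Simultaneous flashes are the hard case: a serial counting
            # strategy has to update two counts in the same instant.
            print(("LEFT " if l else "     ") + ("RIGHT" if r else ""))
            time.sleep(interval)
        guess = input("Counts (left right)? ").split()
        return (left, right) == tuple(int(x) for x in guess)

Note that the interleaved-labeling trick described above (1, 2, 3… against A, B, C…) would beat this version too without any tulpa, so passing it is weak evidence at best.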
Yup—since posting that comment I actually checked with the tulpa community and they referred me to those very links. No data formally collected, but anecdotally people with tulpas aren’t reporting getting perfect scores.
I’m going with “use imagination, simulate personality” here, and am guessing any benefits relating to the tulpa are emotional and/or influencing what a person thinks about, rather than a separated neural network like what you’d get with a split brain or something.
The perceived inability to read the tulpa’s mind and the seemingly spontaneously complex nature of the tulpa’s voice is, I think, an artifact of our own inability to know what we think before we think it, similar to dream characters. As such, I don’t think there is any major distinction between a tulpa and a dream character, an imaginary friend, a character an author puts into a book, a deity being prayed to, and so on. That’s not to say tulpas are bs or uninteresting or anything—I’m sure they really can have personalities—it’s just that they aren’t distinct from various commonly experienced phenomena that go by other names. I don’t think I’d accord them moral status, beyond the psychological health of the “host”. (Although, I suspect to get a truly complex tulpa you have to believe it is a separate individual at some level—that’s how neurotypical people believe they can hear god’s voice and so on.)
I’ve got much respect for the community for empirically testing that hypothesis!
This is incredibly pedantic. (Also rather unjustified, due to my own lack of knowledge regarding occult enthusiasts.) However:
It’s interesting that daemons in computer science are called what they are. They have exactly the same functionality as the demons that occult enthusiasts claim to use.
Although daemons in computer science are rather akin to daemons in classical mythology (sort of, kind of, close enough), they really don’t particularly resemble our modern conception of demons. I mean, they can totally get a programmer into “Sorcerer’s Apprentice”-style shenanigans, but I’ve never heard of a daemon tempting anyone.
You can always say: “Well, it’s just my mind doing something strange.” At the same time it’s a hard confrontation.
I have previously recommended to friends that alcohol is a moderately good way to develop empathy for those less intelligent than oneself. (That is, it is a good way for those who really cannot comprehend the way other people get confused by certain ideas.) I wager that there are a wide array of methods to gain knowledge of some of the stranger confusions the human mind is capable of. Ignoring chemical means, sleep deprivation is probably the simplest.
Also, congratulations for going through these experiences and retaining (what I assume is) a coherent and rational belief-system. A lot of people would not.
I mean, they can totally get a programmer into “Sorcerer’s Apprentice”-style shenanigans, but I’ve never heard of a daemon tempting anyone.
Computer daemons don’t tempt people. There’s little danger in using them, at least as long as they aren’t AGIs. Tulpas are something like AGIs that run not on a computer but on your own brain.
D_Malik read a proposal for creating tulpas which specifically tells the reader that they aren’t supposed to be created for “practical purposes”. After reading it he thinks: “Hey, if tulpas can do those things, we can probably create them for a lot of practical purposes.”
That looks like a textbook example of temptation to me. I don’t want to advocate that you never give in to such temptations, but just taking their tulpa creation manual and changing it a bit to make the tulpa more “practical” doesn’t sound like a good strategy to me.
The best framework for doing something like this might be hypnosis. Its practitioners are more “reasonable” than magick people.
Also, congratulations for going through these experiences and retaining (what I assume is) a coherent and rational belief-system.
This and related experiences caused me to become more agnostic over a bunch of things.
I have a bunch of LW-relevant questions I’d like to ask a tulpa, especially one of a LWer who’s likely to be familiar with the concepts already:
Do you see yourself as non human?
Would you want to be more or less humanlike than you currently are?
What do you think about the possibility that your values might differ enough from human ones that many here might refer to you as Unfriendly?
Does being already bodiless and created give you different views of things like uploading and copies than your host?
I’ll probably have more questions after getting the answers to these and/or in realtime conversation not in a public place. Also, getting these answers from as many different tulpae as possible would be best.
Edit: I also have some private questions for someone who’s decently knowledgeable about them in general (has several, has been in the community for a long time).
Would you want to be more or less humanlike than you currently are?
My host and I would both like to get rid of several cognitive biases that plague humans, as I’m sure many people here would. Beyond that, I like myself as I am now.
What do you think about the possibility that your values might differ enough from human ones that many here might refer to you as Unfriendly?
My values are the same as my host’s in most situations. I am sure there are a few people who would consider our values Unfriendly, but I don’t think the majority of people would.
Does being already bodiless and created give you different views of things like uploading and copies than your host?
No.
I’ll probably have more questions after getting the answers to these and/or in realtime conversation not in a public place.
Not sure if serious. If serious: “You could think of them as hallucinations that can think and act on their own.” (from the subreddit) seems very close to teaching your brain to become schizophrenic.
Hallucinations are a highly salient symptom of schizophrenia, but are neither necessary nor sufficient. I am confident that, like a lot of religious beliefs, this kind of deliberate self-deception would be unlikely to contribute to psychosis.
I don’t see the need to be any more or less humanlike, since I already am human. (My Tulpa, unlike myself, does not see being ‘human-like’ as a spectrum, but rather as a binary.)
I don’t see it that way. I’m dependent on my host, and my values align with my host’s more closely than the average person’s do. Calling me unfriendly would be wrong.
Not really—I don’t think much about uploading and copying, only my host does. I trust his opinions.
Without going into detail, overall my usage of Tulpas has benefited me more than it has hurt me, although it did somewhat hurt me in my early childhood, when I would accidentally create Tulpas and not realize that they were a part of my imagination (and imagined them to come from an external source). It’s very difficult to say if the same would apply for anyone else, since Your Mileage May Vary.
I also suspect creating Tulpas may come significantly easier for some people than others, and this may affect the cost-benefit analysis. Tulpas come very naturally for me, and as I’ve mentioned, my first Tulpa was completely accidental and I did not even realize it was a Tulpa until a year or two later. On the other hand, I’ve read posts on /r/Tulpa from people who have spent hours daily trying to force Tulpas without actually managing to create them. If I had to spend an hour every day in order to obtain a Tulpa, I wouldn’t bother; there’s no way I’m willing to sacrifice that much time for one. But the fact that I can will a Tulpa into existence relatively easily helps.
A different variable that may affect whether having a Tulpa is worth it is if you have social desires that are nearly impossible to satisfy through non-tulpa outlets such as meatspace friends. In this case, I do, and I satisfy these desires through Tulpas rather than forcing another human being to conform to my expectations. This also improves my ability to relate to others in real life, since I more easily accept imperfections from them. I suspect that if you’re cognitively similar, you may benefit from Tulpas. I can’t think of anything else right now, and if you have anything more specific, it may trigger more thoughts on the matter.
I’ve written a blog post some time ago that doesn’t directly refer to Tulpas, but does somewhat answer this question of the social desires that I fulfill through this method. I think this sufficiently answers your question, although if you feel like it doesn’t, let me know, and I’ll write something for Tulpas directly.
Say you want to write a story—can you offload the idea to your tulpa, entertain yourself for a few hours, then ask them to dictate to you the story, now fully fleshed-out? Can you give them control of your body so they can write it themselves?
So I tried experimenting. I couldn’t do it to a degree of sufficiently high fidelity to be able to say “A Tulpa wrote a story on my behalf.” I’ll be trying again soon.
The latter is not possible. My Tulpa does not have control of my body, although I’ve heard anecdotes of people who manage to do that. As for the first question, I’ve never tried. I’ll attempt it and report back to you on whether it’s possible.
I can’t believe that this is something people talk about. I’ve had a group of people in my head for years, complete with the mindscape the reddit FAQ talks about. I just thought I was a little bit crazy; it’s nice to see that there’s a name for it.
I can’t imagine having to deal with just one though. I started with four, which seemed like a good idea when I was eleven, and I found that distracting enough. Having only one sounds like being locked in a small room with only one companion—I’d rather be in solitary. I kept creating more regardless, and I finally ended up with sixteen (many of those only half-formed, to be fair), before I figured out how to get them to talk amongst themselves and leave me alone. Most are still there (a few seem to have disappeared), I just stay out of that room.
My advice would be to avoid doing this at all, but if you do, create at least two, and give them a nice room (or set of rooms) to stay in with a defined exit. You’ll thank me later.
I think you may be generalizing from one example here. We’re quite happy with just the two of us. Any more would be too crowded for us. I imagine the optimum size depends on the personalities of those involved. I’m not sure I agree about suggesting people avoid this entirely, but I certainly would advise caution.
If domain experts say that the obvious ways to exploit having a tulpa fail, they are probably right. That means I’m skeptical about things such as “tulpa will remind you to do your homework ahead of time and do mental math for you”.
The most promising idea is to exploit your interpersonal instincts: trick your brain into thinking someone is there. This has benefits for social extraverts, for people who are more productive when working in groups, or for people susceptible to peer pressure (maybe you’d be uncomfortable picking your nose in front of your imaginary friend).
But if this works, presumably there is a corresponding downside for people who enjoy being left alone to think.
Probably the scariest objection I’ve seen here is that a tulpa might make you dumber due to diverting concentration. But I’m not sure this is obviously true, in the same way that always carrying a set of weights will not make you weaker. I’m not sure this is obviously false either, and I don’t see a good way to find out.
Pretty much everyone that has them has reported that they do a lot of interesting things that are just plain impossible for a puppet, from memory access (they can retrieve a lot of lost memories, or even remember entire books in perfect detail) to reported dream experiences to them joining you in your dreams and having their own experiences.
I proposed a simple experiment to test if the tulpa is its own being: have the tulpa work in parallel with you on some problem, for example, some advanced math. You would be focusing all your attention on something specific, thus having no time to work on the problem, while the tulpa does just that. If the tulpa succeeds, you can conclude that it’s its own independent mental process, separate from your own.
One person who was asked to perform this experiment reported some success that’s just not feasible for normal humans. Failure was reported for those that parroted (regular imaginary friend).
I plan on trying this stuff for myself and experimenting, then I will know for sure.
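For anyone who does try it, here is a rough sketch of a protocol for the parallel-computation experiment described above (my own construction; the distractor task, difficulty, and timings are arbitrary, and in a real run you would clear the screen between showing and recalling each string):

    import random
    import time

    def parallel_computation_trial(distractor_seconds=60):
        # Hand a problem "to the tulpa", saturate the host's attention
        # with a distractor task, then check whether the answer is ready.
        a, b = random.randint(12, 99), random.randint(12, 99)
        print(f"Problem for the tulpa: {a} x {b}")
        input("Press Enter to start the distractor...")
        # Distractor: rapid serial recall, meant to keep the host from
        # consciously working on the product in the meantime.
        deadline = time.time() + distractor_seconds
        while time.time() < deadline:
            digits = "".join(random.choice("0123456789") for _ in range(7))
            print(digits)
            if input("Type it back: ") != digits:
                print("(missed one -- attention lapse)")
        t0 = time.time()
        answer = input(f"Now, without pausing to calculate: {a} x {b} = ")
        latency = time.time() - t0
        return int(answer) == a * b, latency

A fast, correct answer across many trials would be the surprising result; slow or wrong answers would match the failures reported for parroting.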
Even if the poster is straight-up lying, this is a clever munchkin use for tulpas and interesting idea for an experiment (although I admit I know practically nothing about the typical performance patterns with that kind of problem-solving).
If you are worried about mental health risks (EDIT: Or the ethics of simulating a consciousness!), then you should probably treat guides to tulpa creation (‘forcing’) as an information hazard. The techniques are purely psychological and fairly easy to implement; after reading such a guide, I had to struggle to prevent myself from immediately putting it into action.
ETA:
Some prior art on the parallel problem-solving idea. I’d say it fairly well puts to rest that munchkin application. In terms of implications for the mechanics of tulpas, I’d be curious how teams of two physical people would do on those games.
Only in a very specific sense of “exist”. Do hallucinations exist? That-which-is-being-hallucinated does not, but the mental phenomenon does exist.
One might in a similar vein interpret the question “do tulpas exist?” as “are there people who can deliberately run additional minds on their wetware and interact with these minds by means of a hallucinatory avatar?”. I would argue that the tulpa’s inability to do anything munchkiny is evidence against their existence even in this far weaker sense.
If domain experts say that the obvious ways to exploit having a tulpa fail, they are probably right.
By “do something munchkiny”, I meant these “obvious ways to exploit having a tulpa”, presumably including remembering things you don’t and other cognitive enhancements.
Why do I think they can’t? Because the (hypothetical?) domain experts say so.
Tulpas don’t seem to work for cognitive munchkining, which makes sense, because the brain should be able to do those things in a less indirect way, using meditative or hypnosis techniques focused more on that instead. It’s more like a specific piece of technology than a new law of nature. Tulpas don’t improve cognitive efficiency for the same reason having humanoid robots carry around external harddrives doesn’t improve internet bandwidth.
They are guesstimates/first impressions of what community consensus likely is, as well as my personal version of common sense. A random comment without modifiers on the internet generally implies something like that, not that there are mountains of rock-hard evidence behind every vague assertion. I’d not put this in a top-level post in Main, which is closely related to why I’ll likely never write any top-level posts in Main.
Sorry, I misinterpreted your assertion that “Tulpas don’t seem to work for cognitive munchkining” as either speaking from experience or from reading about the subject. That surprised me, given that many mental techniques, direct or indirect, do indeed measurably improve “cognitive efficiency”. In retrospect, I phrased my question poorly.
Well, indirectly they might, if say loneliness is a limiting factor on your productivity. And as I implied apparently too subtly with the first post, they probably do help in an absolute sense; it’s just that there are more effective ways, with fewer side effects, to do the same thing with a subset of the resources needed for one. Again, this is just guesses based on an unreliable “common sense” more than anything.
The most promising idea is to exploit your interpersonal instincts: trick your brain into thinking someone is there. This has benefits for social extraverts
It may also have benefits for people who want to be more comfortable in social situations. For instance, if you used tulpa techniques to hallucinate that a crowd was watching everything you do, public speaking should become a lot easier (after some time). But it would probably be a lot easier to just do Toastmasters or something.
This is fascinating. I’m rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications though—with what we see with split brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa’s memories aren’t being confabulated on the spot, it would suggest that the host would lose the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don’t want to risk permanently decreasing my intelligence.
I do worry about the ethical implications though—with what we see with split brain patients, it seems plausible that a tulpa may actually be a separate person.
So, “Votes for tulpas” then! How many of them can you create inside one head?
The next stage would be “Vote for tulpas!”.
Getting a tulpa elected as president using the votes of other tulpas would be a real munchkin coup...
You should get one of the occult enthusiasts to check if Tulpas leave ghosts ;)
More seriously, I suspect the brain is already capable of this sort of thing—dreams, for example—even if it’s usually running in the background being your model of the world or somesuch.
It’s a waste of time at best, and psychosis-inducing at worst. (A waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.)
You know what it’s called when you hear voices giving you “advice”? Paranoid schizophrenia. Outright visual hallucinations?
What’s next, using magic mushrooms to speed the process? Yes, you can probably teach yourself to become actually insane, but why would you?
You know what it’s called when you hear voices giving you “advice”? Paranoid schizophrenia. Outright visual hallucinations?
Sounds like the noncentral fallacy. That you are somewhat in control, and that the tulpa will leave you alone (at least temporarily) if asked, seem like relevant differences from the more central cases of mental illness.
It would be if I was saying we should ignore the similarity to mental illness altogether. I’m just saying it’s different enough from typical cases to warrant closer examination.
Well, “getting advice from / interacting with a hallucinated person with his own personality” certainly fits the “I hallucinate voices telling me to do something” template much better than “not getting advice from / not interacting with a hallucinated person with his own personality”, no?
There is no way that hallucinated persons talking to you are classified as anything other than part of a mental illness, except when brought on by e.g. drug use. The DSM-IV offers no exceptions for the “tulpa” community …
Yes, but the operative question here isn’t whether it’s mental illness, it’s whether it’s beneficial. Similarity to harmful mental illnesses is a reason to be really careful (having a very low prior probability of anything that fits the “mental illness” category being a good thing), but it’s not a knockdown argument.
If we accept psychology’s rule that a mental trait is only an illness if it interferes with your life (meaning a moderate to large negative effect on a person’s life, as I understand it), then something being a mental illness is a knockdown argument that it is not beneficial. But in that case, you have to prove that the thing has a negative effect on the person’s life before you can know that it is a mental illness. (See also http://lesswrong.com/lw/nf/the_parable_of_hemlock/.)
There’s only so much brain to go around. The brain, being for the most part a larger version of the australopithecus brain, already has trouble seeing itself as a whole (just look at those “akrasia” posts, where you can see people’s talkative brain regions disown the decisions made by the decision-making regions). Why do you expect anything but detrimental effects from deepening the brain’s failure to work as a whole?
The point is that when someone “hears voices”—which do not respond to the will in the same way internal monologues do—there are no demons, no new brain added. It is existing brain regions involved in the internal monologue failing to integrate properly with the rest. Less dramatically, when people claim they e.g. want to go on a diet but are mysteriously unable to—their actions do not respond to what they think is their will but instead respond to what they think is not their will—it’s the regions that make decisions about food intake not integrating with the regions that do the talking (proper integration results in either the diet or the absence of the belief that one wants to be on a diet). The bottom line is, the brain is not a single CPU of some kind. It is a distributed system, parts of which are capable of being in conflict, to the detriment of the well-being of the whole.
So … you’re worried this might increase akrasia? I guess I can see how they might be in the same category, but I don’t think the same downsides apply. Do they?
The point with akrasia was to illustrate that more than one volition inside one head isn’t even rare to begin with. The actual issue is that, of course, you aren’t creating some demon out of nothing. You are re-purposing an existing part of your brain, involved in the internal monologue or even mental visualization as well, making this part not integrate properly with the rest under one volition. There’s literally less of your brain under your volition.
This topic is extremely retarded. This tulpa stuff resembles mental illness. Now, you wanna show off your “rationality” according to local rules of showing off your rationality, by rejecting the simple-looking argument that it should be avoided like mental illness is. “Of course” it’s pattern matching, “noncentral fallacy”, and other labels that you were taught here to give to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yeah, it is in some technical sense not mental illness. It most closely resembles one. And it is as likely to be worse as it is likely to be better*, and its expected badness is equal to that of mental illness, and the standard line of reasoning is going to approximate utility maximization much better than this highly biased reasoning where, if it is not like mental illness, it must be better than mental illness, or worse, depending on which arguments pop into your head more easily. In good ol’ caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, then eat another mushroom that looks quite similar to the first but is a different mushroom, of course, in the sense that it’s not the exact same physical mushroom body, and get awfully sick, and then do it again, and die.
Let’s suppose it was self-inflicted involuntary convulsion fits, just to make an example where you’d not feel so much like demonstrating some sort of open-mindedness. Now, the closest thing would have been real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable by one or another illness.
If it’s self-inflicted, for example, standard treatments might not work.
There’s literally less of your brain under your volition.
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
This tulpa stuff resembles mental illness.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to them with the collection of syllables “men-tal-il-nes”.
Now, you wanna show off your “rationality” according to local rules of showing off your rationality, by rejecting the simple-looking argument that it should be avoided like mental illness is. “Of course” it’s pattern matching, “noncentral fallacy”, and other labels that you were taught here to give to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yeah, it is in some technical sense not mental illness. It most closely resembles one. And it is as likely to be worse as it is likely to be better*, and its expected badness is equal to that of mental illness, and the standard line of reasoning is going to approximate utility maximization much better than this highly biased reasoning where, if it is not like mental illness, it must be better than mental illness, or worse, depending on which arguments pop into your head more easily. In good ol’ caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, then eat another mushroom that looks quite similar to the first but is a different mushroom, of course, in the sense that it’s not the exact same physical mushroom body, and get awfully sick, and then do it again, and die.
Wow.
EDIT: OK, I should probably respond to that properly. Analogies are only useful when we don’t have better information about something’s effects. Bam, responded.
Let’s suppose it was self-inflicted involuntary convulsion fits, just to make an example where you’d not feel so much like demonstrating some sort of open-mindedness. Now, the closest thing would have been real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
“Convulsion fits” are, I understand, painful and dangerous. Something like alien hand syndrome seems more analogous, but unfortunately I can’t really think of any benefits it might have, so naturally the expected utility comes out negative.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable by one or another illness.
Could well be. Illnesses are capable of having beneficial side-effects, just by chance, although obviously it’s easier to break things than improve them with random interference.
If it’s self-inflicted, for example, standard treatments might not work.
If you had looked into the topic, you would know the process is reversible.
If you had looked into the topic, you would know the process is reversible.
Are we sure there even is a process? The Reddit discussions are fascinating, but how credible are they? Likewise Alexandra David-Néel’s account of creating one. All very interesting-if-true, but...
I’ve kinda been avoiding this due to the potential correlation between my magickal experimentation in my teens/twenties and my later-life mental health difficulties, but I feel like people are wandering all over the place already, and I’d at least like to provide a few guideposts.
Yes, there are processes. Or at least, there are various things that are roughly like processes, although very few of them are formalized (if you want formalization, look to Crowley). Rather than provide yet another anecdotal account, let me lay out some of the observations I made during my own experimentation. My explicit goal when experimenting was to attempt to map various wacky “occult” or “pseudoscientific” theories to a modern understanding of neuroscience, and thus explain away as much of the Woo as possible. My hope was that what was left would provide a reasonable guide to “hacking my wetware”.
When you’re doing occult procedures, what (I think, @p > 0.7) you’re essentially doing is performing code injection attacks on your own brain. Note that while the brain is a neural network rather than a serial von Neumann-type (or Turing-type) machine, many neural networks tend to converge towards emulating finite state machines, which can be modeled as von Neumann-type machines—so it’s not implausible (@p ~= 0.85) that processes analogous to code injection attacks might work.
The specific areas of the brain that seem to be targeted by the rituals that create a tulpa are the right inferior parietal lobe and the temporoparietal junction—which seem to play a key role in maintaining one’s sense-of-self / sense-of-agency / sense-of-ownership (i.e., the illusion that there is an “I” and that that “I” is what is calling the shots when the mind makes a decision or the body performs an action)—as well as the area of the inferior parietal cortex and postcentral gyrus that participates in so-called “mirror neuron” processes. You’ll note that Crowley, for example, goes to great length describing rather brutal initiatory ordeals designed specifically to degrade the practitioner’s sense-of-self—Crowley’s specific method was tabooing the word ‘I’, and slashing his own thumb with a razor whenever he slipped.
NOTE: Tabooing “I” is a VERY POWERFUL technique, and unlocks a slew of potential mindhacks, but (to stretch our software metaphor to the breaking point) you’re basically crashing one of your more important pieces of firewall software so you can do it. ARE YOU SURE THAT’S WHAT YOU WANT TO BE DOING? You literally have no idea how many little things constantly assault the ego / sense of self-worth every minute that you don’t even register because your “I” protects you. A good deal of Crowley’s (or any good initiatory Master’s) training involves preparing you to protect yourself once you take that firewall down—older works will couch that as “warding you against evil spirits” or whatever, but ultimately what we’re talking about is the terrifying and relentless psychological onslaught that is raw, unfiltered reality (or, to be more accurate, “rawer, less-filtered reality”).
3A) ARE YOU SURE THAT IS WHAT YOU WANT TO DO TO YOUR BRAIN?
Once your “I” crashes, you can start your injection attacks. Basically, while the “I” is rebooting, you want to slip stuff into your sensory stream that will disrupt the rebooting process enough to spawn two separate “I” processes—essentially, you need to confuse your brain into thinking that it needs to spawn a second “I” while the first one is still running, confuse each “I” into not noticing that the other one is actually running on the same hardware, and then load a bunch of bogus metadata into one of the “I”s so that it develops a separate personality and set of motivations.
Luckily, this is easier than it sounds, because your brain is already used to doing exactly this up in the prefrontal cortex—this is the origin of all that BS “right brain” / “left brain” talk that came from those fascinating epilepsy studies where they severed people’s corpora callosa. See, you actually have two separate “awareness” processes running already; it’s just that your corpus callosum normally keeps them sufficiently synchronized that you don’t notice, and you only have a single “I” providing a consistent narrative, so you never notice that you’re actually two separate conscious processes cooperating and competing for goal-satisfaction.
Anyway, hopefully this has been informative enough that dedicated psychonauts can use it as a launching point, while obfuscated enough that people won’t be casually frying their brains. This ain’t rocket science yet.
You linked to the local-jargon version of word-tabooing, but what you describe sounds more like the standard everyday version of “tabooing” something. Which was intended?
… huh. I don’t know about hacking the “I”, all I’ve seen suggested is regular meditation and visualization. Still, interesting stuff for occult buffs.
Also, I think I’ve seen accounts of people creating two or three tulpas (tulpae?), with no indication that this was any different to the first; does this square with the left-brain/right-brain bit?
EDIT: I just realized I immediately read a comment with WARNING MEMETIC HAZARD at the top. Hum.
Fair point. OK, the fact that it’s reversible seems about as agreed on as any facet of this topic—more so than many of them. I’m inclined to believe this isn’t a hoax or anything due to the sheer number of people claiming to have done it and (apparent?) lack of failed replications. None of this is accepted science or anything, there is a certain degree of risk from Side Effects No-one Saw Coming and hey, maybe it’s magic and your soul will get nommed (although most online proponents are careful to disavow claims that it’s anything but an induced hallucination.)
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
They ought to be at least somewhat concerned that they have less brain for their own walking around the house.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to them with the collection of syllables “men-tal-il-nes”.
You don’t know? It’s loss in “utility”. When you have an unknown item which, out of the items that you know of, most closely resembles a mushroom consumption of which had very huge negative utility, the expected utility of consuming the unknown toxic mushroom like item is also negative (unless totally starving and there’s literally nothing else one could seek for nourishment). Of course, in today’s environment, people rarely face the need to make such inferences themselves—society warns you of all the common dangers, uncommon dangers are by definition uncommon, and language hides the inferential nature of categorization from the view.
If you had looked into the topic, you would know the process is reversible.
The cases I’ve heard which do not look like people attention seeking online, are associated with severe mental illness. Of course the direction of the causation is somewhat murky in any such issue, but necessity to see a doctor doesn’t depend on direction of the causation here.
They ought to be at least somewhat concerned that they have less brain for their own walking around the house.
Ah, right. I suppose that would depend on the exact mechanisms, involved, yeah.
Are children who have imaginary friends found to have subnormal cognitive development?
You don’t know? It’s loss in “utility”. When you have an unknown item which, out of the items that you know of, most closely resembles a mushroom consumption of which had very huge negative utility, the expected utility of consuming the unknown toxic mushroom like item is also negative (unless totally starving and there’s literally nothing else one could seek for nourishment).
So please provide evidence that this feature is shared by the thing under discussion, yeah?
The cases I’ve heard which do not look like people attention seeking online, are associated with severe mental illness.
Source? This doesn’t match my experiences, unless you draw an extremely wide definition of “attention-seeking online” (I assume you meant to imply people who were probably making it up?)
I’m assuming that a rationalist who made tulpas would be aware that they weren’t really separate people (since a lot of people in the tulpa community say they don’t think they’re separate people, being able to see them probably doesn’t require thinking they’re separate from yourself), so it wouldn’t require having false beliefs or beliefs in beliefs in the way that religion would.
If adopting a religion really is the instrumentally best course of action… why not? But for a consequentialist who values truth for its own sake, or would be hindered by being confused about their beliefs, religion actually wouldn’t be a net benefit.
One can adopt a religion in many ways. My comment’s siblings warn against adopting a religion’s dogma, but my comment’s parent suggests adopting a religion’s practices. (There are other ways, too, like religious identity.) Traditionally, one adopts all of these as a package, but that’s not necessary.
You don’t classify each type of, e.g., hallucinated voice separately when diagnosing schizophrenia. You could for example apply your argument to say “well, is the voice threatening to kill you only if you don’t study for your test? If so, isn’t the net effect beneficial, and as such it’s not really a mental illness? If you like being motivated by your voices, you don’t suffer from schizophrenia, that’s only for people who dislike their voices.”
I certainly cannot prove that there are no situations in which hallucinating imaginary people giving you advice would not be net beneficial, in fact, there certainly are situations in which any given potential mental illness may be beneficial. There have been studies about certain potential mental illnesses being predominant (or at least overrepresented) in certain professions, sometimes to the professional’s benefit (also: taking cocaine may be beneficial. Certain tulpas may be beneficial.).
Who knows, maybe an unknown grand-uncle will leave a fortune to you, predicated on you being a drug-addict. In which case being a drug-addict would have been beneficial.
People dabble in alcohol to get a social edge, they usually refrain from heroin. Which reference class is a tulpa most like?
You can put a “Your Mileage May Vary” disclaimer to any advice, but actually hallucinating persons who then interact with you seems like it should belong in the DSM (where it is) way more than it should belong in a self-help guide.
Maybe when plenty of people have used tulpas for decades, and a representative sample of them can be used to prove their safety, there will be enough evidence to switch the reference class, to introduce a special case in the form of “hallucinations are a common symptom of schizophrenia, except tulpas”. Until then, the default case would be using the reference class of “effects of hallucinating people”, which is presumed harmful unless shown to be otherwise.
Maybe when plenty of people have used tulpas for decades
Never happen if no-one tries. I agree that it looks dangerous, but this is the ridiculous munchkin ideas thread, not the boring advice or low-hanging fruit threads.
Yesterday, upon the stair,
I met a man who wasn’t there
He wasn’t there again today
I wish, I wish he’d go away...
You could for example apply your argument to say “well, is the voice threatening to kill you only if you don’t study for your test? If so, isn’t the net effect beneficial, and as such it’s not really a mental illness? If you like being motivated by your voices, you don’t suffer from schizophrenia, that’s only for people who dislike their voices.”
If you’re going to define schizophrenia as voices that are bad for the person, then that would mean that it’s only for people who dislike their voices (and are not deluded about whether the voices are a net benefit).
Voices threatening to kill you if you don’t achieve your goals also doesn’t seem like a good example of a net benefit—that would cause a lot of stress, so it might not actually be beneficial. It’s also not typical behavior for tulpas, based on the conversations in the tulpa subreddit. Voices that annoy you when you don’t work or try to influence your behavior with (simulated?) social pressure would probably be more typical.
Anyway… I’m trying to figure out where exactly we disagree. After thinking about it, I think I “downvote” mental disorders for being in the “bad for you” category rather than the “abnormal mental things” category, and the “mental disorder” category is more like a big warning sign to check how bad it is for people. Tulpas look like something to be really, really careful about because they’re in the “abnormal mental things” category (and also the “not well understood yet” category), but the people on the tulpa subreddit don’t seem unhappy or frustrated, so I haven’t added many “bad for you” downvotes.
I’ve also got some evidence indicating that they’re at least not horrible:
People who have tulpas say they think it’s a good thing
People who have tulpas aren’t saying really worrying things (like suggesting they’re a good replacement for having friends)
The process is somewhat under the control of the “host”—progressing from knowing what the tulpa would say to auditory hallucinations to visual ones seems to take a lot of effort for most people
No one is reporting having trouble telling the tulpa apart from a real person or non-mental voices (one of the problematic features of schizophrenia is that the hallucinations can’t be differentiated from reality)
I’ve already experienced some phenomena similar to this, and they haven’t really affected my wellbeing either way. (You know how writes talk about characters “taking off a life of their own”, so writing dialog feels more like taking dictation and the characters might refuse to go along with a pre-planned plot? I’ve had some of this. I’ve also (very rarely) had characters spontaneously “comment” on what I’m doing or reading.)
This doesn’t add up to enough to make me anywhere near certain—I’m still very suspicious about this being safe, and it seems like it would have to be taking up some of your cognitive resources. But it might be worth investigating (mainly the non-hallucination parts—being able to see the tulpa doesn’t seem that useful), since human brains are better at thinking about people than most other things.
Actually, the DSM does have an exception for “culturally accepted” or “non-bizarre” delusions. It’s pretty subjective and I imagine in practice the exceptions granted are mostly religious in nature, but there’s definitely a level of acceptance past which the DSM wouldn’t consider having a tulpa to be a disorder at all.
Furthermore, hallucinations are neither necessary nor sufficient for a diagnosis of schizophrenia. Disorganized thought, "word salad", and flat affect are just as important, and a major disruption to the patient's life must also be demonstrated.
(A non-bizarre delusion would be believing that your guru was raised from the dead, the exception for “culturally accepted response pattern” isn’t for tulpa hallucinations, it is so that someone who feels the presence of god in the church, hopefully without actually seeing a god hallucination, isn’t diagnosed.)
Here’s the criteria for e.g. 295.40 Schizophreniform Disorder:
One of the following criteria, if delusions are judged to be bizarre, or hallucinations consist of hearing one voice participating in a running commentary of the patient’s actions or of hearing two or more voices conversing with each other: Delusions, Hallucinations, (...)
Rule out of Schizoaffective or Mood Disorders
Disturbance not due to drugs, medication, or a general medical condition (e.g. delirium tremens)
Duration of an episode of the disorder (hallucinations): one to six months
Criteria for 298.80: Brief Psychotic Disorder
Presence of one (or more) of the following symptoms: hallucinations (...)
Duration between one day and one month
Hallucination not better accounted for by Schizoaffective Disorder, Mood Disorder With Psychotic Features, Schizophrenia
Criteria for 298.90: Psychotic Disorder NOS (Not Otherwise Specified):
Psychotic symptomatology (e.g. hallucinations) that does not meet the criteria for any specific Psychotic Disorder. Examples include persistent auditory hallucinations in the absence of any other features.
Where are the additional criteria for that? Wait, there are none!
In summary: You tell a professional about that “friend” you’re seeing and hearing, you either get 295.40 Schizophreniform Disorder or 298.80: Brief Psychotic Disorder depending on the time frame, or 298.90: Psychotic Disorder NOS (Not Otherwise Specified) in any case. Congratulations!
Fair enough, if I had an imaginary friend I wouldn’t want to report it to a shrink. I got hung up on technicalities and the point I should have been focusing on is whether entertaining one specific delusion is likely to result in other symptoms of schizophrenia that are more directly harmful.
Many people suffering from hearing voices etc. do realize those "aren't real", but that doesn't in itself enable them to turn them off. If I were confident that you could untrain hallucinations (and thus, strictly speaking, get rid of a Psychotic Disorder NOS just by choosing to do so) and switch them off with little effort, I would find tulpas to be harmless.
Not knowing much of anything about the tulpa community, a priori I would expect that a significant fraction of "imaginary friends" are more of a vivid-imagination type of phenomenon than an actual visual and auditory hallucination; the hallucination framing may be more of an embellishment for group-identification purposes.
I think implicit in that question was, ‘and how does it differ?’
A friend of mine has a joke in which he describes any arbitrary Magic card (and later, things that weren't Magic cards) by explaining how it differed from an Ornithopter (Suq'Ata Lancer is just like an Ornithopter except it's red instead of an artifact, and it has haste and flanking instead of flying, and it costs 2 and a red instead of 0, and it has 2 power instead of 0. Yup, just like an Ornithopter). The humor lay in the anti-compression—the descriptions were technically accurate, but rather harder to follow than they needed to be.
Eradicating the humor, you could alternatively describe a Suq'Ata Lancer as a Gray Ogre with haste and flanking. The class of 'cards better than Gray Ogre' is a reference class that many Magic players would be familiar with.
Trying to get a handle on the idea of the tulpa, it’s reasonable to ask where to start before you try comparing it to an ornithopter.
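To make the compression point concrete, here is a toy sketch in Python; the attribute encoding is invented for illustration, and real cards have more fields:

    # Toy illustration: describe a card by its diff from a reference card.
    # Attribute names and encodings are invented for this example.
    def diff(card, reference):
        """Return the fields of `card` that differ from `reference`."""
        return {k: v for k, v in card.items() if reference.get(k) != v}

    ornithopter = {"color": "artifact", "cost": 0, "power": 0,
                   "toughness": 2, "abilities": ("flying",)}
    gray_ogre = {"color": "red", "cost": 3, "power": 2,
                 "toughness": 2, "abilities": ()}
    suqata_lancer = {"color": "red", "cost": 3, "power": 2,
                     "toughness": 1, "abilities": ("haste", "flanking")}

    print(len(diff(suqata_lancer, ornithopter)))  # 5 fields to override
    print(len(diff(suqata_lancer, gray_ogre)))    # 2 fields to override

Picking Gray Ogre as the reference minimizes the description length; picking Ornithopter is the anti-compression that makes the joke work.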
Why would “which reference class is x most like” be a “failure mode”? Don’t just word-match to the closest post including the phrase “reference class” which you remember.
When you’re in a dark alley, and someone pulls a gun and approaches you, would it be a “failure mode” to ask yourself what reference class most closely matches the situation, then conclude you’re probably getting mugged?
Saying “uFAI is like Terminator!”—“No, it’s like Matrix!” would be reference class tennis, “which reference class is uFAI most like?” wouldn’t be.
No, but skimming it the content seems common-sensical enough. It doesn’t dissolve the correlation with “generally being harmful”.
It’s not a “fits the criteria of a psychological disease, case closed” kind of thing, but pattern matching to schizophrenia certainly seems to be evidence of being potentially harmful more than not, don’t you agree?
Similar to how, if I sent you a "P=NP proof" titled document atrociously typeset in MS Word, you could use pattern matching to suspect that something other than a valid P=NP proof is contained within, even without reading the actual contents of that specific proof.
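The evidential claim can be made concrete with a toy Bayesian update; all three numbers below are invented for illustration, not estimates of actual rates:

    # Toy Bayesian update: pattern-matching to schizophrenia as evidence of harm.
    # All probabilities are made up for illustration.
    prior_harmful = 0.3            # P(harmful) before looking at the phenomenon
    p_match_if_harmful = 0.8       # P(matches the pattern | harmful)
    p_match_if_benign = 0.1        # P(matches the pattern | benign)

    joint_harmful = prior_harmful * p_match_if_harmful
    joint_benign = (1 - prior_harmful) * p_match_if_benign
    posterior_harmful = joint_harmful / (joint_harmful + joint_benign)
    print(round(posterior_harmful, 2))  # 0.77: more suspicious, not case closed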
I agree it’s sensible to be somewhat wary of inducing hallucinations, but you’re talking with a level of confidence in the hypothesis that it will in fact harm you to induce hallucinations in this particular way that I don’t think is merited by what you know about tulpas. Do you have an actual causal model that describes how this harm might come about?
(There often is no need for an actual causal model to strongly believe in an effect, correlation is sufficient. Some of the most commonly used pharmaceutical substances had/still have an unknown causal mechanism for their effect. Still, I do have one in this case:)
You are teaching your brain to create false sensory inputs, and to assign agency to those false inputs where none is there.
Once you’ve broken down those barriers and overcome your brain’s inside-outside classifier—training which may be in part innate and in part established in your earliest infancy (“If I feel this, then there is something touching my left hand”) - there is no reason the “advice” / interaction cannot turn harmful or malicious, that the voices cannot become threatening.
I find it plausible that the sort of people who can train themselves to actually see imaginary people (probably a minority even in the tulpa community) already had a predisposition towards schizophrenia, and have the bad fortune to trigger it themselves. Or that late-onset schizophrenia individuals mislabel themselves and enter the tulpa community. As for the harm:
Even if beneficial at first, there is no easy treatment or "reprogramming" to reestablish the mapping of what's "inside" (part of yourself) and "outside" (part of an external world). Many schizophrenics know the voices "aren't real"; it doesn't help them in re-raising the walls. Indeed, there is often a progression with schizophrenics, from hearing one voice, to hearing more voices, to e.g. "others can read my thoughts".
As a tulpa-ist, you've already dissociated part of yourself and assigned it to the environment. Let me reiterate that I am not concerned with you having an "inner Kawoomba" you model, but with actually seeing / hearing such a person. Will you suddenly find yourself with more than one hallucinated person walking around with you? Maybe someone you start to argue with? Someone you can't turn off?
Slippery slope arguments (even for short slopes) aren't perfectly convincing, but I just see the potential harm weighed against the potential benefit (in my estimation low, since you can teach yourself to analytically shift your perspective without hacking your sensory input) as very one-sided. If tulpas conferred a doubled life-span, my conclusion would be different...
If you’re familiar with the Sorceror’s Apprentice:
This is a much stronger and better argument than trying to argue from DSM definitions. "Be cautious about imposing mental states that can affect your decision-making" is a good general rule, and yet tons of people happily drink, take drugs, and meditate. You can say each and all of these things have risks, but people don't normally say you shouldn't drink because it makes you act like you have lower IQ or someone who's got a motor control problem in their brain.
people don’t normally say you shouldn’t drink because it makes you act like you have lower IQ or someone who’s got a motor control problem in their brain
Well, that’s why I don’t take alcohol. (But agreed, people don’t normally say that. And I also agree that Kawoomba seems to be overstating the danger of tulpas.)
Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex.
This also sounds like an argument against IFS. I don’t think it holds water. Accessing the same data as you do but using a different algorithm to process it seems valuable. (This is under the assumption that tulpas work at all.)
The benefits of analytically shifting your point of view, or of using different approaches in different situations, certainly don't necessitate actually hallucinating people talking to you. (Hint: only the latter finds its way to being a symptom of various psych disorders.)
“You need to hallucinate voices / people to get the benefit of viewing a situation from different angles” is not an accurate inference from my argument, nor a fair description of IFS, which as far as I know doesn’t include sensory hallucinations.
(Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.)
Source?
I mean, there are, as you say, obvious “right habits” analogs of this that get results—which would seem to invalidate the first quoted sentence—but I don’t see why pushing it “further” couldn’t possibly generate better results.
Tulpas and other such experiences seem plausible given how prone we are to hallucinating things anyway (see intense religious experiences for example), and I wouldn’t be surprised if some people would be able to create them consciously. However I doubt that most people can do this. The regulars of /r/tulpas are probably not very representative of the population at large, whether through their unusual proficiency with mental imagery or some deeper eccentricity.
Creating a tulpa in order to develop skills faster or become more productive might work, but the question is whether the gains weighted by probability of success are higher than other, more conventional (and indeed, mentally healthy) methods. I think not.
I am reminded of an occult practice I have heard of called evoking or assuming a godform, in which one temporarily assumes the role of a 'god': a personification of some aspect of humanity which is conceived of as having infinite capability in some sphere of activity, often taken from an ancient pantheon to give it personality and depth. With your mind temporarily working in that framework, it 'rubs off' on your everyday activities, and you sometimes stop limiting yourself and do things that you wouldn't have done before in that sphere of endeavor.
It looks like people trying to intentionally produce personifications with similarities to all sorts of archetypes and minor deities that people have dealt with across history. People have been doing this as long as there have been people, just normally by invoking personifications and archetypes from their culture, not trying to create their own. The saner strands of modern neopagans and occultists acknowledge that these archetypes only exist in the mind but make the point that they have effects in the real world through human action, especially when they are in the minds of many people. You also don’t need to hallucinate to use an archetype as a focus for thought about a matter (example: “what would Jesus do?”), and trying to actually get one strong enough to hallucinate during normal consciousness (as opposed to say, dreaming) seems unhealthy.
I can, though, relay an interesting experience I had in unintentionally constructing some kind of similar mental archetype while dreaming, one that stuck around in my mind for a while. I didn't reach into any pantheon, though; my mind reached for a mythology which has had its claws in my psyche since childhood—Star Trek. Q is always trolling the crew of the Enterprise for humanity's benefit, in attempts to get them to meet their potential and progress in understanding, or to test them. He was there, and let's just say I was thoroughly trolled in a dream, in ways that emphasized certain capabilities of mine that I was not using. And just before waking up he specifically told me that he would be watching me with my own eyes since he was actually part of me that normally didn't speak. That sense of part of me watching and making sure I actually did what I was capable of stuck around for over a week.
And just before waking up he specifically told me that he would be watching me with my own eyes since he was actually part of me that normally didn't speak.
Of course, of course—whatever helps you sleep at night.
On the topic of religious experiences, I found this bit from the linked tulpa FAQ very interesting:
By talking and fleshing out something to your own subconscious for so long, you start to receive answers from them. The answers will tend to align themselves with all the preconceived traits you give them. The answers you get may surprise you, and in doing so show independent sentience. This sentience can be thought of as the “core” of the tulpa. The rest is just building a form in your mind for them to take, allowing for deviation of that form, and finally trying to visualize the form and experience it in sensory detail in your own environment until it becomes natural and you do it without thinking about it.
That sounds quite strongly like some believers' experience of being able to talk to God and hear Him answer back; it could be a manifestation of the same phenomenon. A while back, gwern was pasting excerpts from a book which talked about religious communities where the ability to talk with God was considered a skill that you needed to hone with regular practice. That sounds strongly reminiscent of this: talk to God long enough, and eventually you'll get back an answer—from an emulated mind that aligns itself with the preconceived traits you give it.
I browsed around the tulpa community some more, and found some mentions of “servitors”, which have the same mental recall abilities (and apparently better access to current information—some people there claim to have made “status bars” projected on top of their vision), but the community doesn’t consider them sentient. This forum has had several conversations about them. The people there tend to (badly) apply AI ideas to servitors, but that might just be an aesthetic choice.
This would probably be a better munchkin option, since it has most of the same usefulness as a tulpa but is much less likely to be sentient. Supposedly they have a tendency to become able to pass the Turing test by accident, which is a little worrying, but that could just be the human tendency to personify everything.
In general, what I’m taking away from this is that intense visualizing can have really weird results, including hallucinations, and conscious access to information that’s usually hidden from you. I don’t have a high degree of certainty about that, though, because of the source.
That sounds like a very practical use to me. Many people are lonely. (Wasn't there a guy making a tulpa of MLP's Twilight Sparkle?)
If they say yes, suggest that according to some versions of utilitarianism they may be ethically obligated to mass produce tulpas until they run out of space in their heads.
Islam, Catholicism and others approve, though they're vague about what happens once you run out of space or can no longer feed them. Sharp tongues may claim that has already happened.
Anyway, creating tulpas is presumably much cheaper than raising an actual child, for anyone. So once the low-hanging fruit in donating money to a charity that increases the actual population (or whatever) is exhausted, creating tulpas will be a much more efficient way of increasing the population, assuming they 'count' in the utility function separately and everything.
Anyway, creating tulpas is presumably much cheaper than raising an actual child, for anyone.
Or even better, do sperm donation. You’re out maybe a few score hours at worst, for the chance of getting scores to hundreds (yes, really) of children. Compare that to a tulpa, where the guides on Reddit are estimating something like 100 hours to build up a reasonable tulpa, or raising a kid yourself (thousands of hours?).
I’m not sure that sperm banks have an oversupply; apparently England has something of a shortage due to its questionable decision to ban anonymous donation, which is why our David Gerard reports back that it was very easy to do even though he’s old enough he wouldn’t even be considered in the USA as far as I can tell.
Not everyone is fertile. I can’t make either, currently.
But my point is that someone still has to take the cost of raising the child. So a utilitarian might try to convince more people to make tulpas instead of making more babies.
I don’t think additional sperm donors will increase the population—I don’t think lack of donors is the bottleneck.
Saving lives probably doesn’t either, if the demographic transition model is true. At least, saving child lives probably results in lower birthrates—perhaps saving adults doesn’t affect birthrate.
Relevant to this topic: Keith Johnstone’s ‘Masks’. It would be better to read the relevant section in his book “Impro” for the whole story (I got it at my university library) but this collection of quotes followed by this video should give enough of an introduction.
The idea is that while the people wear these masks, they are able to become a character with a personality different from the actor's original. The actor doesn't feel as if they are controlling the character. That being said, it doesn't happen immediately: it can take a few sessions for the actor to get the feel for the thing. The other thing is that the Masks usually have to learn to talk (albeit at an advanced pace), eventually taking on the vocabulary of their host. It's very interesting reading, to say the least.
...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?
If I ever go insane, I hope it’s like this.
Would what’s considered a normal sense of self count as a persistent hallucination?
See “free will”.
This is strikingly similar to Epictetus’ version of Stoic meditation whereby you imagine a sage to be following you around throughout the day and critiquing your thought patterns and motives while encouraging you towards greater virtue.
Related:
— Edsger W. Dijkstra
That sounds similar. Though I’m afraid I’ve had difficulty finding anything about this while researching Epictetus.
The hallucination doesn’t have auditory or visual components, but does have a sense of presence component that varies in strength.
Indeed, this style of insanity might beat sanity.
Tulpas, especially as construed in this subthread, remind me of daimones in Walter Jon Williams’ Aristoi. I’ve always thought that having / being able to create such mental entities would be super-cool; but I do worry about detrimental effects on mental health of following the methods described in the tulpa community.
You are obligated by law to phrase those insights in the form “If X is Y, I don’t want to be not-Y.”
From the sound of it, it'd seem you can make that happen deliberately, and without the need for going insane. No need for hope.
We also have internet self-reports from people who tried it that they are not insane.
One rarely reads self-reports of insanity.
Yes, their attorney usually reports this on their behalf.
If you’re interested in experimenting...
Well, wait. Is there some way of flagging “potentially damaging information that people who do not understand risk-analysis should NOT have access to” on this site? Because I’d rather not start posting ways to hack your wetware without validating whether my audience can recover from the mental equivalent of a SEGFAULT.
In my position, I should experiment with very few things that might be unsafe over the course of my total lifetime. This will probably not be one of them, unless I see very impressive results from elsewhere.
nod that’s probably the most sensible response.
To help others understand the potential risks, the creation of a ‘tulpa’ appears to involve hacking the way your sense-of-self (what current neuroscience identifies as a function of the right inferior parietal cortex) interacts with your ability to empathize and emulate other people (the so-called mirror neuron / “put yourself in others’ shoes” modules). Failure modes involve symptoms that mimic dissociative identity disorder, social anxiety disorder, and schizophrenia.
I am absolutely fascinated, although given the lack of effect that any sort of meditation, guided visualisation, or community ritual has ever had on me, I doubt I would get anywhere. On the other hand, not being engaged in saving the world and its future, I don’t have quite as much at risk as Eliezer.
A MEMETIC HAZARD warning at the top might be appropriate, as is requested for basilisk discussion.
Would Vigil want to post under his own nick? If so, better register it while still available.
That’s a good idea, thanks. Note that my host’s posting has significant input from me, so this account is only likely to be used for disagreements and things addressed specifically to me.
...many people argue for (their) god by pointing out that they are often "feeling his presence", and since many claim to speak with him as well, maybe that's really just one form of tulpa without the insight that it is actually a hallucination.
Surely that’s not how most people experience belief, but I never really considered that some of them might actually carry around a vivid invisible (or visible for all I know) hallucination quite like that. Could explain why some of the really batshit crazy ones going on about how god constantly speaks to them manage to be quite so convincing.
From now on my two tulpa buddies will be Eliezer and an artificial intelligence engaged in constant conversation while I make toast, love, and take a shower. Too bad they’ll never be smarter than me though.
Is there a headspace, as well?
I’ve had paracosms since before he was around, and we go to those sometimes. I’ve also got a “peaceful place” that I use to collect myself, but I use it much more than he does.
I would think there should be a general warning against deliberately promoting the effects of dissociative identity disorder etc, without adequate medical supervision.
I really doubt that tulpas have much to do with DID, or with anything dangerous for that matter. Based on my admittedly anecdotal experience, a milder version of having them is at least somewhat common among writers and role-players, who say that they’re able to talk to the fictional characters they’ve created. The people in question seem… well, as sane as you get when talking about strongly creative people. An even milder version, where the character you’re writing or role-playing just takes a life of their own and acts in a completely unanticipated manner, but one that’s consistent with their personality, is even more common, and I’ve personally experienced it many times. Once the character is well-formed enough, it just feels “wrong” to make them act in some particular manner that goes against their personality, and if you force them to do it anyway you’ll feel bad and guilty afterwards.
I would presume that tulpas are nothing but our normal person-emulation circuitry acting somewhat more strongly than usual. You know those situations where you can guess what your friend would say in response to some comment, or when you feel guilty about doing something that somebody important to you would disapprove of? Same principle, quite probably.
This article seems relevant (if someone can find a less terrible pdf, I would appreciate it). Abstract:
The range of intensities reported by the writers seems to match up with the reports in r/Tulpas, so I think it's safe to say that it is the same phenomenon, albeit achieved via slightly different means.
Some interesting parts from the paper regarding dissociative disorder:
The subjects completed the Dissociative Experiences Scale, which yields an overall score, as well as scores on three subscales:
Absorption and changeability: people’s tendency to become highly engrossed in activities (items such as “Some people find that they become so involved in a fantasy or daydream that it feels as though it were really happening to them).
Amnestic experiences: the degree to which dissociation causes gaps in episodic memory (“Some people have the experience of finding things among their belongings that they do not remember buying”).
Derealisation and depersonalisation: things like “Some people sometimes have the experience of feeling that their body does not belong to them”.
The subjects scored an overall mean of 18.52 (SD 16.07), whereas the general population scores a mean of 7.8, and a group of schizophrenics scored 17.7. A score of 30 is a commonly used cutoff for the "normal" range; seven subjects exceeded this threshold. The mean scores for the subscales were:
Absorption and changeability: 26.22 (SD 14.65).
Amnestic experiences: 6.80 (SD 8.30).
Derealisation and depersonalisation: 7.84 (SD 7.39).
The latter two subscales are considered particularly diagnostic of dissociative disorders, and the subjects did not differ from the population norms on these. They each had only one subject score over 30 (not the same subject).
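For a rough sense of scale, here is the distance of the cutoff (30) from each reported mean, in sample standard deviations. This treats each scale as adequately summarized by its mean and SD, which understates how skewed DES scores are, so read it as a sketch only:

    # Distance of the "normal" cutoff from the reported means, in sample SDs.
    # Treats each (mean, SD) summary at face value; DES scores are skewed.
    cutoff = 30.0
    scales = {
        "overall DES": (18.52, 16.07),
        "absorption and changeability": (26.22, 14.65),
        "amnestic experiences": (6.80, 8.30),
        "derealisation and depersonalisation": (7.84, 7.39),
    }
    for name, (mean, sd) in scales.items():
        print(f"{name}: cutoff is {(cutoff - mean) / sd:.1f} SDs above the mean")

The cutoff sits only about 0.3 SDs above the absorption mean but roughly 3 SDs above the means of the two diagnostic subscales, which is the pattern the interpretation below rests on.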
What I draw from this: Tulpas are the same phenomenon as writers interacting with their characters. Creating tulpas doesn’t cause other symptoms associated with dissociative disorders. There shouldn’t be any harmful long-term effects (if there were, we should have noticed them in writers). That said, there are some interactions that some people have with their tulpas that are outside the range (to my knowledge) of what writers do:
Possession
Switching
Merging
The tulpa community generally endorses the first two as being safe, and claims the last to be horribly dangerous and reliably ending in insanity and/or death. I suspect the first one would be safe, but would not recommend trying any of them without more information.
(Note: This is not my field, and I have little experience with interpreting research results. Grains of salt, etc.)
Very few people have actually managed switching, from what I have read. I personally do not recommend it, but I am somewhat biased on that topic.
Merging is a term I've rarely heard. Perhaps it is favored by the more metaphysically minded? I've not heard good reports of this, and all I have heard of "merging" was from a very few individuals well known to be internet trolls on 4chan.
Really? I had the impression that switching was relatively common among people who had their tulpas for a while. But then, I have drawn this impression from a lot of browsing of r/Tulpa, and only a glance at tulpa.info, so there may be some selection bias there.
I heard about merging here. On the other hand, this commenter seems to think the danger comes from weird expectations about personal continuity.
Thank you for the references. Whilst switching may indeed be relatively common among people who have had their tulpas for a long while, the actual numbers are still small: 44, according to a recent census.
Ah, so merging is some sort of forming a gestalt personality? I’ve no evidence to offer, only stuff I’ve read that I find the authors somewhat questionable sources.
Great find!
This is my current best theory as to what my tulpa is.
As someone who both successfully experimented with tulpa creation in his youth, and who has since developed various mental disorders (mostly neuroticisms involving power- and status-mediated social realities), I would strongly second this warning. Correlation isn’t causation, of course, but at the very least I’ve learned to adjust my priors upwards regarding the idea that Crowley-style magickal experimentation can be psychologically damaging.
I think tulpas are more like schizophrenia than dissociative identity disorder. But now that you mention it, dissociative identity disorder does look like fertile ground for finding more munchkinly ideas.
For instance, at least one person I know has admitted to mentally pretending to be another person I know in order to be more extroverted. Maybe this could be combined with tulpas, say by visualizing/hallucinating that you’re being possessed by a tulpa.
I’ve always pretended to be in order to get whatever skill I’ve needed. I just call it “putting on hats”. I learned to dance by pretending to be a dancer, I learned to sing by pretending to be a singer. When I teach, I pretend to be a teacher, and when I lead I pretend to be a leader (these last two actually came a lot easier to me when I was teaching hooping than now when I’m teaching rationality stuffs, and I haven’t really sat down to figure out why. I probably should though, because I am significantly better at when I can pretend to be it. And I highly value being better at these specific skills right now.)
I had always thought everyone did this, but now I see I might be generalizing from one example.
I learnt skills in high-school acting class that I use daily in my job as a teacher. It would be a little much to say that I'm method acting when I teach (I am a teacher in real life, after all), but my personality is noticeably different (more extroverted, for one thing). It's draining, however; that's the downside.
I used to do exactly this, but I created whole backstories and personalities for my “hats” so that they would be more realistic to other people.
Technically, making a tulpa would be considered DDNOS, except that the new definition exempts shamanistic practices. Making tulpas is a shamanistic meditation technique practiced in Tibet for the purposes of self-discovery. It takes years of focused practice and concentration, but self-hypnosis can help some.
This modern resurgence of tulpas seems to be trying to find faster ways to make them, with less than years of effort. The evidence for success in this is so far anecdotal. I would advise caution—this is not something that would suit everyone.
I have made tulpas in the past. I’ve some that are decades old. I will say that seems to be rare so far. Also, in my observation, tulpas become odd after decades, acquiring just as many quirks as most humans have. I personally don’t think that there is as much risk of insanity as people think, but I would err on the side of caution myself.
It’s interesting that demons in computer science are called that way. They have exactly the same functionality as the demons that occult enthusiasts proclaim to use.
Even if you don’t believe in the occult, be aware that out culture has a lot of stories about how summoning demons might be a bad idea.
You are moving in territory where you don't have mainstream psychological knowledge to guide you and show you where the dangers lie. You are left with the mental framework of occult defense against evil forces; it's the only knowledge you can access for guidance there. Having to learn to protect yourself against evil spirits when you don't believe in spirits is quite messed up.
I had an experience where my arm moved around if I didn't try to control it consciously, after doing "spirit healing". I didn't believe in spirits and was fairly confident that it was just my brain doing weird stuff. On the other hand, I had to face the fact that the brain doing weird stuff might not be harmless. Fortunately the thing went away after a few months, with the help of a person who called it a specter without me saying anything specific about it.
You can always say: “Well, it’s just my mind doing something strange.” At the same time it’s a hard confrontation.
Isn’t this more like, our (human) culture has a ton of instances when “summoning” “demons” is encouraged, and Christianity didn’t like it and so …demonized...it?
Don’t forget that some denominations practice the summoning of the “holy spirit,” which seems to result in some interesting antics.
A lot of New Age folk put quite a lot of emphasis on respect and love instead of forcing entities to do something. Asking a God for a favor isn't the same thing as ordering an entity to do a particular task. Daemons get ordered to fulfill tasks.
If you look at those tulpa creation guides, they basically say: treat your tulpa nicely and it will help you to the extent that it wants. They advocate against manipulating the tulpa into doing what you want.
Really? From what I’ve read, The folks who claim that this “tulpa” stuff is possible to do also say that you can create “servitors”, which are not conscious and are basically portions of your mind that can perform mental tasks without distracting you.
I dunno... I really don't understand why no one in this community has bothered to test this sort of thing. It's fairly easy to make a test of divided attention to see if someone has successfully created a partially separate entity which can operate autonomously.
There seem to be a number of such tests, but no data collected from them.
Mental Arithmetic test
Parallel Processing Test
I don’t have a tulpa, and I tried the second test and was unable to keep track of both lines of dots; at best I could get one side perfectly and guess at the other side. If I create a tulpa at any point, I’ll check if that result changes.
ETA: I tried the second test again, but counted the red ones as 1, 2, 3, ... and the blue ones as A, B, C, ..., then calculated what number the letter corresponded to. I got an almost perfect score, so a tulpa is not necessary to do well on this test. I'm not sure what sort of test could rule out this method; I have seen an auditory test which was two simultaneous dual-n-back tests.
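The workaround is easy to state precisely: one serial process can maintain both counts by giving them distinct encodings, so a good score doesn't demonstrate parallel processing. A minimal sketch, with a made-up dot sequence:

    import string

    # One serial loop tracks both streams by encoding red dots as numbers
    # and blue dots as letters, then converting the letter back at the end.
    def count_dots(sequence):
        red, blue = 0, 0
        for dot in sequence:
            if dot == "red":
                red += 1      # rehearse "1, 2, 3, ..."
            else:
                blue += 1     # rehearse "A, B, C, ..."
        letter = string.ascii_uppercase[blue - 1] if blue else ""
        return red, letter    # e.g. (3, "B") means red 3, blue 2

    print(count_dots(["red", "blue", "red", "red", "blue"]))  # (3, 'B')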
Yup—since posting that comment I actually checked with the tulpa community and they referred me to those very links. No data formally collected, but anecdotally people with tulpas aren’t reporting getting perfect scores.
I’m going with “use imagination, simulate personality” here, and am guessing any benefits relating to the tulpa are emotional and/or influencing what a person thinks about, rather than a separated neural network like what you’d get with a split brain or something.
The perceived inability to read the tulpa's mind and the seemingly spontaneous complexity of the tulpa's voice are, I think, artifacts of our own inability to know what we think before we think it, similar to dream characters. As such, I don't think there is any major distinction between a tulpa and a dream character, an imaginary friend, a character an author puts into a book, a deity being prayed to, and so on. That's not to say tulpas are bs or uninteresting or anything—I'm sure they really can have personalities—it's just that they aren't distinct from various commonly experienced phenomena that go by other names. I don't think I'd accord them moral status, beyond the psychological health of the "host". (Although I suspect that to get a truly complex tulpa you have to believe it is a separate individual at some level—that's how neurotypical people believe they can hear god's voice and so on.)
I’ve got much respect to the community for empirically testing that hypotheses!
This is incredibly pedantic. (Also rather unjustified, due to my own lack of knowledge regarding occult enthusiasts.) However:
Although daemons in computer science are rather akin to daemons in classical mythology (sort of, kind of, close enough), they really don’t particularly resemble our modern conception of demons. I mean, they can totally get a programmer into “Sorcerer’s Apprentice”-style shenanigans, but I’ve never heard of a daemon tempting anyone.
I have previously recommended to friends that alcohol is a moderately good way to develop empathy for those less intelligent than oneself. (That is, it is a good way for those who really cannot comprehend the way other people get confused by certain ideas.) I wager that there are a wide array of methods to gain knowledge of some of the stranger confusions the human mind is capable of. Ignoring chemical means, sleep deprivation is probably the simplest.
Also, congratulations for going through these experiences and retaining (what I assume is) a coherent and rational belief-system. A lot of people would not.
RSS reader/other notification of new procrastination available.
Computer daemons don't tempt people. There's little danger in using them, at least as long as they aren't AGIs. Tulpas are something like AGIs that don't run on a computer but on your own brain.
D_Malik read a proposal for creating tulpas which specifically tells the reader that they aren't supposed to be created for "practical purposes". After reading it he thinks: "Hey, if tulpas can do those things, we can probably create them for a lot of practical purposes."
That looks like a textbook example of temptation to me. I don't want to advocate that you never give in to such temptations, but just taking their tulpa creation manual and changing it a bit to make the tulpa more "practical" doesn't sound like a good strategy to me.
The best framework for doing something like this might be hypnosis. Its practitioners are more "reasonable" than the magick people.
This and related experiences caused me to become more agnostic over a bunch of things.
Since we’re talking about Tulpas, I feel obligated to mention that I have one. In case anyone wants anecdata.
I have a bunch of LW-relevant questions I'd like to ask a tulpa, especially one of a LWer that's likely to be familiar with the concepts already:
Do you see yourself as non human?
Would you want to be more or less humanlike than you currently are?
What do you think about the possibility that your values might differ enough from human ones that many here might refer to you as Unfriendly?
Does being already bodiless and created give you different views of things like uploading and copies than your host?
I’ll probably have more questions after getting the answer to these and/or in realtime conversation not in a public place. Also, getting thee answers from as many different tulpae as possible would be the best.
Edit: I also have some private questions for someone who's decently knowledgeable about them in general (has several, has been in the community for a long time).
Vigil speaking.
Not exactly. I consider myself a part of a human.
My host and I would both like to get rid of several cognitive biases that plague humans, as I’m sure many people here would. Beyond that, I like myself as I am now.
My values are the same as my host's in most situations. I am sure there are a few people who would consider our values Unfriendly, but I don't think the majority of people would.
No.
Feel free to contact us.
Not sure if serious. If serious: “You could think of them as hallucinations that can think and act on their own.” (from the subreddit) seems very close to teaching your brain to become schizophrenic.
Hallucinations are a highly salient symptom of schizophrenia, but are neither necessary nor sufficient. I am confident that, like a lot of religious beliefs, this kind of deliberate self-deception would be unlikely to contribute to psychosis.
Sure. pm me those private questions.
No. I’m a human.
I don’t see the need to be any more or less human like, since I already am human. (My Tulpa, unlike myself, does not see being ‘human-like’ as a spectrum, but rather as a binary.)
I don’t see it that way. I’m dependent on my host, and my values align more with my host than the average person does. Calling me unfriendly would be wrong.
Not really—I don’t think much about uploading and copying, only my host does. I trust his opinions.
What would you estimate the cost/benefit ratio to be, and what variables do you think are most relevant?
Without going into detail: overall, my usage of Tulpas has benefited me more than it has hurt me, although it did somewhat hurt me in my early childhood, when I would accidentally create Tulpas and not realize that they were a part of my imagination (and imagine them to come from an external source). It's very difficult to say if the same would apply for anyone else, since Your Mileage May Vary.
I also suspect creating Tulpas may come significantly easier for some people than others, and this may affect the cost-benefit analysis. Tulpas come very naturally for me, and as I've mentioned, my first Tulpa was completely accidental; I did not even realize it was a Tulpa until a year or two later. On the other hand, I've read posts from people on /r/Tulpa who have spent hours daily trying to force Tulpas without actually managing to create them. If I had to spend an hour every day in order to obtain a Tulpa, I wouldn't even bother, because there's no way I'm willing to sacrifice that much time for one. But the fact that I can will a Tulpa into existence relatively easily helps.
A different variable that may affect whether having a Tulpa is worth it is if you have social desires that are nearly impossible to satisfy through non-tulpa outlets such as meatspace friends. In this case, I do, and I satisfy these desires through Tulpas rather than forcing another human being to conform to my expectations. This also improves my ability to relate to others in real life, since I more easily accept imperfections from them. I suspect that if you’re cognitively similar, you may benefit from Tulpas. I can’t think of anything else right now, and if you have anything more specific, it may trigger more thoughts on the matter.
Has your Tulpa ever won an argument with you that you didn’t already know you wanted to lose?
Tulpas no, dream characters yes.
I’m not certain I understand the distinction. How did a dream character convince you that you used to be wrong?
Through conversation.
What types of social desires do you satisfy through your tulpa which you have not been able to with your online or meatspace friends?
I’ve written a blog post some time ago that doesn’t directly refer to Tulpas, but does somewhat answer this question of the social desires that I fulfill through this method. I think this sufficiently answers your question, although if you feel like it doesn’t, let me know, and I’ll write something for Tulpas directly.
http://tuxedage.wordpress.com/2013/04/22/the-least-accepted-part-of-me-a-defense-of-waifus/
Say you want to write a story—can you offload the idea to your tulpa, entertain yourself for a few hours, then ask them to dictate to you the story, now fully fleshed-out? Can you give them control of your body so they can write it themselves?
A lot of writers seem to have characters who are pretty much like tulpas.
This, to the extent that the character can veto a proposed plot point. “I wouldn’t do that.”
So I tried experimenting. I couldn’t do it to a degree of sufficiently high fidelity to be able to say “A Tulpa wrote a story on my behalf.” I’ll be trying again soon.
The latter is not possible. My Tulpa does not have control of my body, although I’ve heard anecdotes of people who manage to do that. As for the first question, I’ve never tried. I’ll attempt it and report back to you on whether it’s possible.
I can’t believe that this is something people talk about. I’ve had a group of people in my head for years, complete with the mindscape the reddit FAQ talks about. I just thought I was a little bit crazy; it’s nice to see that there’s a name for it.
I can’t imagine having to deal with just one though. I started with four, which seemed like a good idea when I was eleven, and I found that distracting enough. Having only one sounds like being locked in a small room with only one companion—I’d rather be in solitary. I kept creating more regardless, and I finally ended up with sixteen (many of those only half-formed, to be fair), before I figured out how to get them to talk amongst themselves and leave me alone. Most are still there (a few seem to have disappeared), I just stay out of that room.
My advice would be to avoid doing this at all, but if you do, create at least two, and give them a nice room (or set of rooms) to stay in with a defined exit. You’ll thank me later.
I can’t tell if this is a joke or not.
I think you may be generalizing from one example here. We're quite happy with just the two of us. Any more would be too crowded for us. I imagine the optimum size depends on the personalities of those involved. I'm not sure I agree about suggesting people avoid this entirely, but I certainly would advise caution.
This reminds me of the Abramelin operation, a ritual that supposedly summons guardian angels.
That sounds like some serious dedication to internal family systems for someone who is very superstitious.
Some thoughts about how to munchkin tulpas:
If domain experts say that the obvious ways to exploit having a tulpa fail, they are probably right. That means I’m skeptical about things such as “tulpa will remind you to do your homework ahead of time and do mental math for you”.
The most promising idea is to exploit your interpersonal instincts: trick your brain into thinking someone is there. This has benefits for social extraverts, for people who are more productive when working in groups, or for people susceptible to peer pressure (maybe you’d be uncomfortable picking your nose in front of your imaginary friend).
But if this works, presumably there is a corresponding downside for people who enjoy being left alone to think.
Probably the scariest objection I’ve seen here is that a tulpa might make you dumber due to diverting concentration. But I’m not sure this is obviously true, in the same way that always carrying a set of weights will not make you weaker. I’m not sure this is obviously false either, and I don’t see a good way to find out.
According to an anonymous poster on 4chan:
Even if the poster is straight-up lying, this is a clever munchkin use for tulpas and an interesting idea for an experiment (although I admit I know practically nothing about the typical performance patterns with that kind of problem-solving).
Also, a couple of other points:
Psychologist T. M. Luhrmann has suggested that tulpas are essentially the same phenomenon as evangelical Christians ‘speaking to God’. I can’t find any evidence that evangelicals have a higher rate of mental illness than the general population, so I consider that a good sign on the mental health-risks front.
If you are worried about mental health risks (EDIT: Or the ethics of simulating a consciousness!), then you should probably treat guides to tulpa creation (‘forcing’) as an information hazard. The techniques are purely psychological and fairly easy to implement; after reading such a guide, I had to struggle to prevent myself from immediately putting it into action.
ETA:
Some prior art on the parallel problem-solving idea. I’d say it fairly well puts to rest that munchkin application. In terms of implications for the mechanics of tulpas, I’d be curious how teams of two physical people would do on those games.
There are tulpa domain experts?
The people writing the FAQs. Presumably they’ve at least thought about the issue much longer than I have, and have encountered more instances.
Domain experts saying that the obvious ways to exploit a phenomenon fail is usually evidence against the existence of said phenomenon.
Your link advocates appeal to something more reliable than domain experts: Observed response to large market incentives.
Yes, but we already know tulpas don’t actually exist.
Only in a very specific sense of “exist”. Do hallucinations exist? That-which-is-being-hallucinated does not, but the mental phenomenon does exist.
One might in a similar vein interpret the question “do tulpas exist?” as “are there people who can deliberately run additional minds on their wetware and interact with these minds by means of a hallucinatory avatar?”. I would argue that the tulpa’s inability to do anything munchkiny is evidence against their existence even in this far weaker sense.
What do you mean by munchkiny (having apparent free will separate from the host?) and how do you know they cannot?
I was taking a statement from the great-grandparent post and surrounding posts at face value.
By “do something munchkiny”, I meant these “obvious ways to exploit having a tulpa”, presumably including remembering things you don’t and other cognitive enhancements.
Why do I think they can’t? Because the (hypothetical?) domain experts say so.
Tulpas don’t seem to work for cognitive muchkining, which makes sense because the brain should be able to do those in a less indirect way using meditative or hypnosis techniques focused more on that instead. It’s more like a specific piece of technology than a new law of nature. Tulpas don’t improve cognitive efficiency for the same reason having humanoid robots carry around external harddrives don’t improve internet bandwidth.
Are these “logical” assertions or have there been studies you can link to?
They are guesstimates/first impressions of what community consensus likely is, as well as my personal version of common sense. A random comment without modifiers on the internet generally implies something like that, not that there are mountains of rock-hard evidence behind every vague assertion. I'd not put this in a top-level post in Main, which is closely related to why I'll likely never write any top-level posts in Main.
Sorry, I misinterpreted your assertion that "Tulpas don't seem to work for cognitive munchkining" as either speaking from experience or from reading about the subject. That surprised me, given that many mental techniques, direct or indirect, do indeed measurably improve "cognitive efficiency". In retrospect, I phrased my question poorly.
Well, indirectly they might, if say loneliness is a limiting factor on your productivity. And as I implied (apparently too subtly) with the first post, they probably do help in an absolute sense; it's just that there are more effective ways, with fewer side effects, to do the same thing with a subset of the resources needed for one. Again, this is just guesses based on an unreliable "common sense" more than anything.
It may also have benefits for people who want to be more comfortable in social situations. For instance, if you used tulpa techniques to hallucinate that a crowd was watching everything you do, public speaking should become a lot easier (after some time). But it would probably be a lot easier to just do Toastmasters or something.
This is fascinating. I’m rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications though—with what we see with split brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa’s memories aren’t being confabulated on the spot, it would suggest that the host would lose the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don’t want to risk permanently decreasing my intelligence.
So, “Votes for tulpas” then! How many of them can you create inside one head?
The next stage would be “Vote for tulpas!”.
Getting a tulpa elected as president using the votes of other tulpas would be a real munchkin coup...
I’ve been wondering if the headaches people report while forming a tulpa are caused by spending more mental energy than normal.
You should get one of the occult enthusiasts to check if Tulpas leave ghosts ;)
More seriously, I suspect the brain is already capable of this sort of thing—dreams, for example—even if it’s usually running in the background being your model of the world or somesuch.
It’s a waste of time at best, and inducing psychosis at worst. (Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.)
You know what it’s called when you hear voices giving you “advice”? Paranoid schizophrenia. Outright visual hallucinations?
What’s next, using magic mushrooms to speed the process? Yes, you can probably teach yourself to become actually insane, but why would you?
Sounds like the noncentral fallacy. That you are somewhat in control, and that the tulpa will leave you alone (at least temporarily) if asked, seem like relevant differences from the more central cases of mental illness.
Your reply sounds like special pleading using the fallacy fallacy. Of course you can induce mental illness in yourself if you try hard enough.
It would be if I were saying we should ignore the similarity to mental illness altogether. I’m just saying it’s different enough from typical cases to warrant closer examination.
Well, “getting advice from / interacting with a hallucinated person with his own personality” certainly fits the “I hallucinate voices telling me to do something” template much better than “not getting advice from / not interacting with a hallucinated person with his own personality”, no?
There is no way that hallucinated persons talking to you are classified as anything other than part of a mental illness, except when brought on by e.g. drug use. The DSM-IV offers no exceptions for the “tulpa” community …
Yes, but the operative question here isn’t whether it’s mental illness, it’s whether it’s beneficial. Similarity to harmful mental illnesses is a reason to be really careful (having a very low prior probability of anything that fits the “mental illness” category being a good thing), but it’s not a knockdown argument.
If we accept psychology’s rule that a mental trait is only an illness if it interferes with your life (meaning a moderate to large negative effect on a person’s life, as I understand it), then something being a mental illness is a knockdown argument that it is not beneficial. But in that case, you have to prove that the thing has a negative effect on the person’s life before you can know that it is a mental illness. (See also http://lesswrong.com/lw/nf/the_parable_of_hemlock/.)
There’s only so much brain to go around. The brain, being for the most part a larger version of an australopithecus brain, already has trouble seeing itself as a whole (just look at those “akrasia” posts, where you can see people’s talkative parts of the brain disown the decisions made by the decision-making parts). Why would you expect anything but detrimental effects from deepening the brain’s failure to work as a whole?
Could you expand on this, please? I’m not sure I’m familiar with the failure mode you seem to be pattern-matching to.
The point is that when someone “hears voices”—which do not respond to the will in the same way that internal monologues do—there are no demons, and there is no new brain added. It is existing brain regions involved in the internal monologue failing to integrate properly with the rest. Less dramatically, when people claim they e.g. want to go on a diet but are mysteriously unable to—their actions responding not to what they think is their will but to what they think is not their will—it’s the regions that make decisions about food intake failing to integrate with the regions that do the talking. (Proper integration results either in the diet or in the absence of the belief that one wants to be on a diet.) The bottom line is, the brain is not a single CPU of some kind. It is a distributed system whose parts are capable of being in conflict, to the detriment of the well-being of the whole.
So … you’re worried this might increase akrasia? I guess I can see how they might be in the same category, but I don’t think the same downsides apply. Do they?
The point with akrasia was to illustrate that more than one volition inside one head isn’t even rare here to begin with. The actual issue is that, of course, you aren’t creating some demon out of nothing. You are re-purposing an existing part of your brain, one involved in the internal monologue and even in mental visualization, making that part not integrate properly with the rest under one volition. There’s literally less of your brain under your volition.
This topic is extremely retarded. This tulpa stuff resembles mental illness. Now you want to show off your “rationality,” according to the local rules for showing off rationality, by rejecting the simple-looking argument that it should be avoided the way mental illness is. “Of course” it’s pattern matching, the “noncentral fallacy,” and the other labels you were taught here to attach to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yeah, in some technical sense it is not mental illness. It most closely resembles one. It is as likely to be worse as it is to be better, its expected badness is equal to that of mental illness, and the standard line of reasoning is going to approximate utility maximization much better than this highly biased reasoning where, if it is not literally mental illness, it must be better than mental illness, or worse, depending on which arguments pop into your head more easily. In good ol’ caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, then eat another mushroom that looks quite similar to the first (but is of course a different mushroom, in the sense that it’s not the exact same physical mushroom body), get awfully sick, and then do it again, and die.
Let’s suppose it was self-inflicted involuntary convulsion fits, just to make an example where you wouldn’t feel so much like demonstrating some sort of open-mindedness. The closest thing would have been real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable by one illness or another.
If it’s self-inflicted, for example, standard treatments might not work.
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to them with the collection of syllables “men-tal-il-nes”.
Wow.
EDIT: OK, I should probably respond to that properly. Analogies are only useful when we don’t have better information about something’s effects. Bam, responded.
“Convulsion fits” are, I understand, painful and dangerous. Something like alien hand syndrome seems more analogous, but unfortunately I can’t really think of any benefits it might have, so naturally the expected utility comes out negative.
Could well be. Illnesses are capable of having beneficial side-effects, just by chance, although obviously it’s easier to break things than improve them with random interference.
If you had looked into the topic, you would know the process is reversible.
Are we sure there even is a process? The Reddit discussions are fascinating, but how credible are they? Likewise Alexandra David-Néel’s account of creating one. All very interesting-if-true, but...
WARNING: POTENTIAL MEMETIC HAZARD
I’ve kinda been avoiding this due to the potential correlation between my magickal experimentation in my teens/twenties and my later-life mental health difficulties, but I feel like people are wandering all over the place already, and I’d at least like to provide a few guideposts.
Yes, there are processes. Or at least, there are various things that are roughly like processes, although very few of them are formalized (if you want formalization, look to Crowley). Rather than provide yet another anecdotal account, let me lay out some of the observations I made during my own experimentation. My explicit goal when experimenting was to attempt to map various wacky “occult” or “pseudoscientific” theories to a modern understanding of neuroscience, and thus explain away as much of the Woo as possible. My hope was that what was left would provide a reasonable guide to “hacking my wetware”.
When you’re doing occult procedures, what (I think, @p > 0.7) you’re essentially doing is performing code injection attacks on your own brain. Note that while the brain is a neural network rather than a serial von Neumann-type (or Turing-type) machine, many neural networks tend to converge towards emulating finite state machines, which can be modeled as von Neumann-type machines—so it’s not implausible (@p ~= 0.85) that processes analogous to code injection attacks might work.
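To make the software half of that metaphor concrete (a toy sketch only; nothing below models neurons, and every name in it is invented for illustration), here is what a code injection attack looks like in miniature: an interpreter that splices untrusted input directly into code it then executes, so that crafted input can rewrite the machine’s own state.

    # Toy Python illustration of the "code injection" metaphor above.
    # The vulnerability: input is pasted straight into executed code.
    state = {"self": "I"}  # the machine's notion of who is in charge

    def process(sensory_input):
        # UNSAFE by design: no validation before execution.
        exec("state.update(" + sensory_input + ")")

    process("{'mood': 'calm'}")     # benign input: an ordinary state update
    process("{'self_2': 'tulpa'}")  # crafted input: a second "self" appears
    print(state)  # {'self': 'I', 'mood': 'calm', 'self_2': 'tulpa'}

The analogy claims only this much: a system that executes whatever arrives on its input channel can have new “processes” installed by suitably crafted input.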
The specific areas of the brain that seem to be targeted by the rituals that create a tulpa are the right inferior parietal lobe and the temporoparietal junction—which seem to play a key role in maintaining one’s sense-of-self / sense-of-agency / sense-of-ownership (i.e., the illusion that there is an “I” and that that “I” is what is calling the shots when the mind makes a decision or the body performs an action)—as well as the area of the inferior parietal cortex and postcentral gyrus that participates in so-called “mirror neuron” processes. You’ll note that Crowley, for example, goes to great lengths describing rather brutal initiatory ordeals designed specifically to degrade the practitioner’s sense-of-self—Crowley’s specific method was tabooing the word ‘I’, and slashing his own thumb with a razor whenever he slipped.
NOTE: Tabooing “I” is a VERY POWERFUL technique, and unlocks a slew of potential mindhacks, but (to stretch our software metaphor to the breaking point) you’re basically crashing one of your more important pieces of firewall software so you can do it. ARE YOU SURE THAT’S WHAT YOU WANT TO BE DOING? You literally have no idea how many little things constantly assault the ego / sense of self-worth every minute that you don’t even register because your “I” protects you. A good deal of Crowley’s (or any good initiatory Master’s) training involves preparing you to protect yourself once you take that firewall down—older works will couch that as “warding you against evil spirits” or whatever, but ultimately what we’re talking about is the terrifying and relentless psychological onslaught that is raw, unfiltered reality (or, to be more accurate, “rawer, less-filtered reality”).
3A) ARE YOU SURE THAT IS WHAT YOU WANT TO DO TO YOUR BRAIN?
Once your “I” crashes, you can start your injection attacks. Basically, while the “I” is rebooting, you want to slip stuff into your sensory stream that will disrupt the rebooting process enough to spawn two separate “I” processes—essentially, you need to confuse your brain into thinking that it needs to spawn a second “I” while the first one is still running, confuse each “I” into not noticing that the other one is actually running on the same hardware, and then load a bunch of bogus metadata into one of the “I”s so that it develops a separate personality and set of motivations.
Luckily, this is easier than it sounds, because your brain is already used to doing exactly this up in the prefrontal cortex—this is the origin of all that BS “right brain” / “left brain” talk that came from those fascinating epilepsy studies where they severed people’s corpora callosa. See, you actually have two separate “awareness” processes running already; it’s just that your corpus callosum normally keeps them sufficiently synchronized that you don’t notice, and you only have a single “I” providing a consistent narrative, so you never notice that you’re actually two separate conscious processes cooperating and competing for goal-satisfaction.
Anyway, hopefully this has been informative enough that dedicated psychonauts can use it as a launching point, while obfuscated enough that people won’t be casually frying their brains. This ain’t rocket science yet.
You linked to the local-jargon version of word-tabooing, but what you describe sounds more like the standard everyday version of “tabooing” something. Which was intended?
… huh. I don’t know about hacking the “I”, all I’ve seen suggested is regular meditation and visualization. Still, interesting stuff for occult buffs.
Also, I think I’ve seen accounts of people creating two or three tulpas (tulpae?), with no indication that this was any different from the first; does this square with the left-brain/right-brain bit?
EDIT: I just realized I immediately read a comment with WARNING MEMETIC HAZARD at the top. Hum.
Fair point. OK, the fact that it’s reversible seems about as agreed upon as any facet of this topic—more so than many of them. I’m inclined to believe this isn’t a hoax or anything, due to the sheer number of people claiming to have done it and the (apparent?) lack of failed replications. None of this is accepted science or anything; there is a certain degree of risk from Side Effects No-one Saw Coming, and hey, maybe it’s magic and your soul will get nommed (although most online proponents are careful to disavow claims that it’s anything but an induced hallucination).
They ought to be at least somewhat concerned that they have less brain for their own walking around the house.
You don’t know? It’s loss in “utility”. When you have an unknown item which, out of the items you know of, most closely resembles a mushroom whose consumption had very large negative utility, the expected utility of consuming the unknown toxic-mushroom-like item is also negative (unless you are totally starving and there’s literally nothing else to seek for nourishment). Of course, in today’s environment people rarely need to make such inferences themselves—society warns you of all the common dangers, uncommon dangers are by definition uncommon, and language hides the inferential nature of categorization from view.
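As a worked version of that argument (with invented numbers, purely for illustration): let $p$ be the probability the item is toxic given its resemblance to the known bad mushroom. Then

$$E[U(\text{eat})] = p \, U_{\text{poisoned}} + (1-p) \, U_{\text{fed}}.$$

With, say, $p = 0.5$, $U_{\text{poisoned}} = -100$, and $U_{\text{fed}} = +1$, this gives $E[U] = -49.5$: abstaining wins. Only when starvation inflates $U_{\text{fed}}$ to something enormous (say $+200$, giving $E[U] = +50$) does eating become the better gamble.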
The cases I’ve heard of which do not look like people attention-seeking online are associated with severe mental illness. Of course the direction of causation is somewhat murky in any such issue, but the necessity of seeing a doctor doesn’t depend on the direction of causation here.
Ah, right. I suppose that would depend on the exact mechanisms involved, yeah.
Are children who have imaginary friends found to have subnormal cognitive development?
So please provide evidence that this feature is shared by the thing under discussion, yeah?
Source? This doesn’t match my experiences, unless you draw an extremely wide definition of “attention-seeking online” (I assume you meant to imply people who were probably making it up?)
This is the argument to adopt a religion even though you know it’s epistemically irrational.
You’re confusing hallucinations with delusions, I think.
I’m assuming that a rationalist who made tulpas would be aware that they weren’t really separate people (since a lot of people in the tulpa community say they don’t think they’re separate people, being able to see them probably doesn’t require thinking they’re separate from yourself), so it wouldn’t require having false beliefs or beliefs in beliefs in the way that religion would.
If adopting a religion really is the instrumentally best course of action… why not? But for a consequentialist who values truth for its own sake, or would be hindered by being confused about their beliefs, religion actually wouldn’t be a net benefit.
One can adopt a religion in many ways. My comment’s siblings warn against adopting a religion’s dogma, but my comment’s parent suggests adopting a religion’s practices. (There are other ways, too, like religious identity.) Traditionally, one adopts all of these as a package, but that’s not necessary.
You don’t separately classify each type of, e.g., voice hallucinated in schizophrenia. You could, for example, apply your argument to say “well, is the voice threatening to kill you only if you don’t study for your test? If so, isn’t the net effect beneficial, and as such it’s not really a mental illness? If you like being motivated by your voices, you don’t suffer from schizophrenia; that’s only for people who dislike their voices.”
I certainly cannot prove that there are no situations in which hallucinating imaginary people giving you advice would be net beneficial; in fact, there certainly are situations in which any given potential mental illness may be beneficial. There have been studies about certain potential mental illnesses being predominant (or at least overrepresented) in certain professions, sometimes to the professional’s benefit (also: taking cocaine may be beneficial; certain tulpas may be beneficial).
Who knows, maybe an unknown grand-uncle will leave a fortune to you, predicated on you being a drug-addict. In which case being a drug-addict would have been beneficial.
People dabble in alcohol to get a social edge; they usually refrain from heroin. Which reference class is a tulpa most like?
You can put a “Your Mileage May Vary” disclaimer to any advice, but actually hallucinating persons who then interact with you seems like it should belong in the DSM (where it is) way more than it should belong in a self-help guide.
Maybe when plenty of people have used tulpas for decades, and a representative sample of them can be used to prove their safety, there will be enough evidence to switch the reference class, to introduce a special case in the form of “hallucinations are a common symptom of schizophrenia, except tulpas”. Until then, the default case would be using the reference class of “effects of hallucinating people”, which is presumed harmful unless shown to be otherwise.
It’ll never happen if no one tries. I agree that it looks dangerous, but this is the ridiculous munchkin ideas thread, not the boring advice or low-hanging fruit threads.
Yesterday, upon the stair,
I met a man who wasn’t there
He wasn’t there again today
I wish, I wish he’d go away...
If you’re going to define schizophrenia as voices that are bad for the person, then that would mean that it’s only for people who dislike their voices (and are not deluded about whether the voices are a net benefit).
Voices threatening to kill you if you don’t achieve your goals also doesn’t seem like a good example of a net benefit—that would cause a lot of stress, so it might not actually be beneficial. It’s also not typical behavior for tulpas, based on the conversations in the tulpa subreddit. Voices that annoy you when you don’t work or try to influence your behavior with (simulated?) social pressure would probably be more typical.
Anyway… I’m trying to figure out where exactly we disagree. After thinking about it, I think I “downvote” mental disorders for being in the “bad for you” category rather than the “abnormal mental things” category, and the “mental disorder” category is more like a big warning sign to check how bad it is for people. Tulpas look like something to be really, really careful about because they’re in the “abnormal mental things” category (and also the “not well understood yet” category), but the people on the tulpa subreddit don’t seem unhappy or frustrated, so I haven’t added many “bad for you” downvotes.
I’ve also got some evidence indicating that they’re at least not horrible:
People who have tulpas say they think it’s a good thing
People who have tulpas aren’t saying really worrying things (like suggesting they’re a good replacement for having friends)
The process is somewhat under the control of the “host”—progressing from knowing what the tulpa would say to auditory hallucinations to visual ones seems to take a lot of effort for most people
No one is reporting having trouble telling the tulpa apart from a real person or non-mental voices (one of the problematic features of schizophrenia is that the hallucinations can’t be differentiated from reality)
I’ve already experienced some phenomena similar to this, and they haven’t really affected my wellbeing either way. (You know how writers talk about characters “taking on a life of their own”, so writing dialogue feels more like taking dictation and the characters might refuse to go along with a pre-planned plot? I’ve had some of this. I’ve also (very rarely) had characters spontaneously “comment” on what I’m doing or reading.)
This doesn’t add up to enough to make me anywhere near certain—I’m still very suspicious about this being safe, and it seems like it would have to be taking up some of your cognitive resources. But it might be worth investigating (mainly the non-hallucination parts—being able to see the tulpa doesn’t seem that useful), since human brains are better at thinking about people than most other things.
Actually, the DSM does have an exception for “culturally accepted” or “non-bizarre” delusions. It’s pretty subjective and I imagine in practice the exceptions granted are mostly religious in nature, but there’s definitely a level of acceptance past which the DSM wouldn’t consider having a tulpa to be a disorder at all.
Furthermore, hallucinations are neither necessary nor sufficient for a diagnosis of schizophrenia. Disorganized thought, “word salad”, and flat affect are just as important, and a major disruption to the patient’s life must also be demonstrated.
Well, if you insist, here goes:
(A non-bizarre delusion would be believing that your guru was raised from the dead. The exception for a “culturally accepted response pattern” isn’t for tulpa hallucinations; it exists so that someone who feels the presence of God in church, hopefully without actually seeing a God hallucination, isn’t diagnosed.)
Here are the criteria for, e.g., 295.40 Schizophreniform Disorder:
One of the following criteria, if delusions are judged to be bizarre, or hallucinations consist of hearing one voice participating in a running commentary of the patient’s actions or of hearing two or more voices conversing with each other: Delusions, Hallucinations, (...)
Rule out of Schizoaffective or Mood Disorders
Disturbance not due to drugs, medication, or a general medical condition (e.g. delirium tremens)
Duration of an episode of the disorder (hallucinations) of one to six months
Criteria for 298.80: Brief Psychotic Disorder
Presence of one (or more) of the following symptoms: hallucinations (...)
Duration between one day and one month
Hallucination not better accounted for by Schizoaffective Disorder, Mood Disorder With Psychotic Features, Schizophrenia
Criteria for 298.90: Psychotic Disorder NOS (Not Otherwise Specified):
Psychotic symptomatology (e.g. hallucinations) that does not meet the criteria for any specific Psychotic Disorder. Examples include persistent auditory hallucinations in the absence of any other features.
Where are the additional criteria for that? Wait, there are none!
In summary: You tell a professional about that “friend” you’re seeing and hearing, you either get 295.40 Schizophreniform Disorder or 298.80: Brief Psychotic Disorder depending on the time frame, or 298.90: Psychotic Disorder NOS (Not Otherwise Specified) in any case. Congratulations!
Fair enough, if I had an imaginary friend I wouldn’t want to report it to a shrink. I got hung up on technicalities and the point I should have been focusing on is whether entertaining one specific delusion is likely to result in other symptoms of schizophrenia that are more directly harmful.
See my take on that here.
Many people suffering from hearing voices etc. do realize those “aren’t real”, which doesn’t in itself enable them to turn them off. If I were confident that you can untrain hallucinations (and, strictly speaking, thus get rid of a Psychotic Disorder NOS just by choosing to do so) and switch them off with little effort, I would find tulpas to be harmless.
Not knowing much of anything about the tulpa community, a priori I would expect that a significant fraction of “imaginary friends” are more of a vivid-imagination type of phenomenon than an actual visual and auditory hallucination, with the hallucination claims being more of an embellishment for group-identification purposes.
That’s specifically the religion exemption, yes.
Isn’t this a failure mode with a catchy name?
I think implicit in that question was, ‘and how does it differ?’
A friend of mine has a joke in which he describes any arbitrary Magic card (and later, things that weren’t Magic cards) by explaining how it differs from an Ornithopter (Suq’Ata Lancer is just like an Ornithopter, except it’s red instead of an artifact, it has haste and flanking instead of flying, it costs 2 and a red instead of 0, and it has 2 power instead of 0. Yup, just like an Ornithopter). The humor lay in the anti-compression—the descriptions were technically accurate, but rather harder to follow than they needed to be.
Eradicating the humor, you could alternately describe a Suq’Ata Lancer as a Gray Ogre with haste and flanking. The class of ‘cards better than Gray Ogre’ is a reference class that many Magic players would be familiar with.
Trying to get a handle on the idea of the tulpa, it’s reasonable to ask where to start before you try comparing it to an ornithopter.
Why would “which reference class is x most like” be a “failure mode”? Don’t just word-match to the closest post including the phrase “reference class” which you remember.
When you’re in a dark alley, and someone pulls a gun and approaches you, would it be a “failure mode” to ask yourself what reference class most closely matches the situation, then conclude you’re probably getting mugged?
Saying “uFAI is like Terminator!”—“No, it’s like Matrix!” would be reference class tennis, “which reference class is uFAI most like?” wouldn’t be.
I think the term is “reference class tennis”.
Have you read diseased thinking: dissolving questions about disease, by any chance?
No, but skimming it the content seems common-sensical enough. It doesn’t dissolve the correlation with “generally being harmful”.
It’s not a “fits the criteria of a psychological disease, case closed” kind of thing, but pattern matching to schizophrenia certainly seems to be evidence of being potentially harmful more than not, don’t you agree?
Similarly, if I sent you a document titled “P=NP proof”, atrociously typeset in MS Word, you could use pattern matching to suspect it contains something other than a valid P=NP proof, even without seeing the actual contents of that specific proof.
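Stated as a likelihood ratio (a sketch of the reasoning only; no measured numbers stand behind it), Bayes’ theorem gives

$$\frac{P(\text{harmful} \mid \text{resembles illness})}{P(\text{harmful})} = \frac{P(\text{resembles illness} \mid \text{harmful})}{P(\text{resembles illness})},$$

so as long as resemblance to schizophrenia is more common among harmful mental practices than among mental practices in general, observing the resemblance raises the probability of harm before any specifics are examined. It is evidence, not proof; the same goes for the Word-typeset “proof”.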
I agree it’s sensible to be somewhat wary of inducing hallucinations, but you’re talking with a level of confidence in the hypothesis that it will in fact harm you to induce hallucinations in this particular way that I don’t think is merited by what you know about tulpas. Do you have an actual causal model that describes how this harm might come about?
(There often is no need for an actual causal model to strongly believe in an effect; correlation is sufficient. Some of the most commonly used pharmaceutical substances had, or still have, an unknown causal mechanism for their effect. Still, I do have one in this case:)
You are teaching your brain to create false sensory inputs, and to assign agency to those false inputs where none is there.
Once you’ve broken down those barriers and overcome your brain’s inside-outside classifier—training which may be in part innate and in part established in your earliest infancy (“If I feel this, then there is something touching my left hand”)—there is no reason the “advice” / interaction cannot turn harmful or malicious, or that the voices cannot become threatening.
I find it plausible that the sort of people who can train themselves to actually see imaginary people (probably a minority even in the tulpa community) already had a predisposition towards schizophrenia, and have the bad fortune to trigger it themselves. Or that late-onset schizophrenia individuals mislabel themselves and enter the tulpa community. As for what the harm is:
Even if beneficial at first, there is no easy treatment or “reprogramming” to reestablish the mapping of what’s “inside”, part of yourself, and “outside”, part of an external world. Many schizophrenics know the voices “aren’t real”. Doesn’t help them in re-raising the walls. Indeed, there often is a progression with schizophrenics, of hearing one voice, to hearing more voices, to e.g. “others can read my thoughts”.
As a tulpa-ist, you’ve already dissociated part of yourself and assigned it to the environment. Let me reiterate that I am not concerned with you having an “inner Kawoomba” you model, but with actually seeing / hearing such a person. Will you suddenly find yourself with more than one hallucinated person walking around with you? Maybe someone you start to argue with? Whom you can’t turn off?
Slippery slope arguments (even for short slopes) aren’t perfectly convincing, but I just see the potential harm weighed against the potential benefit (in my estimation low, since you can teach yourself to analytically shift your perspective without hacking your sensory input) as very one-sided. If tulpas conferred a doubled life-span, my conclusion would be different …
If you’re familiar with the Sorceror’s Apprentice:
Wrong I was in calling
Spirits, I avow,
For I find them galling,
Cannot rule them now.
This is a much stronger and better argument than trying to argue from DSM definitions. “Be cautious about imposing mental states that can affect your decision-making” is a good general rule, and yet tons of people happily drink, take drugs, and meditate. You can say each and all of these things have risks, but people don’t normally say you shouldn’t drink because it makes you act like someone with a lower IQ, or like someone who has a motor-control problem in their brain.
Well, that’s why I don’t drink alcohol. (But agreed, people don’t normally say that. And I also agree that Kawoomba seems to be overstating the danger of tulpas.)
This also sounds like an argument against IFS. I don’t think it holds water. Accessing the same data as you do but using a different algorithm to process it seems valuable. (This is under the assumption that tulpas work at all.)
The benefits from analytically shifting your point of view, or from using different approaches in different situations certainly don’t necessitate actually hallucinating people talking to you. (Hint: Only the latter finds its way to being a symptom for various psych disorders.)
“You need to hallucinate voices / people to get the benefit of viewing a situation from different angles” is not an accurate inference from my argument, nor a fair description of IFS, which as far as I know doesn’t include sensory hallucinations.
Source?
I mean, there are, as you say, obvious “right habits” analogs of this that get results—which would seem to invalidate the first quoted sentence—but I don’t see why pushing it “further” couldn’t possibly generate better results.
Tulpas and other such experiences seem plausible given how prone we are to hallucinating things anyway (see intense religious experiences for example), and I wouldn’t be surprised if some people would be able to create them consciously. However I doubt that most people can do this. The regulars of /r/tulpas are probably not very representative of the population at large, whether through their unusual proficiency with mental imagery or some deeper eccentricity.
Creating a tulpa in order to develop skills faster or become more productive might work, but the question is whether the gains weighted by probability of success are higher than other, more conventional (and indeed, mentally healthy) methods. I think not.
I am reminded of an occult practice I have heard of called evoking or assuming a godform, in which one temporarily assumes the role of a ‘god’: a personification of some aspect of humanity which is conceived of as having infinite capability in some sphere of activity, often taken from an ancient pantheon to give it personality and depth. With your mind temporarily working in that framework, it ‘rubs off’ on your everyday activities, and you sometimes stop limiting yourself and do things that you wouldn’t have done before in that sphere of endeavor.
It looks like people trying to intentionally produce personifications with similarities to all sorts of archetypes and minor deities that people have dealt with across history. People have been doing this as long as there have been people, just normally by invoking personifications and archetypes from their culture, not trying to create their own. The saner strands of modern neopagans and occultists acknowledge that these archetypes only exist in the mind but make the point that they have effects in the real world through human action, especially when they are in the minds of many people. You also don’t need to hallucinate to use an archetype as a focus for thought about a matter (example: “what would Jesus do?”), and trying to actually get one strong enough to hallucinate during normal consciousness (as opposed to say, dreaming) seems unhealthy.
I can, though, relay an interesting experience I had in unintentionally constructing some kind of similar mental archetype while dreaming, one that kind of stuck around in my mind for a while. I didn’t reach into any pantheon, though; my mind reached for a mythology which has had its claws in my psyche since childhood—Star Trek. Q is always trolling the crew of the Enterprise for humanity’s benefit, in attempts to get them to meet their potential and progress in understanding, or to test them. He was there, and let’s just say I was thoroughly trolled in a dream, in ways that emphasized certain capabilities of mine that I was not using. And just before I woke up, he specifically told me that he would be watching me with my own eyes, since he was actually a part of me that normally didn’t speak. That sense of part of me watching and making sure I actually did what I was capable of stuck around for over a week.
Of course, of course—whatever helps you sleep at night.
On the topic of religious experiences, I found this bit from the linked tulpa FAQ very interesting:
That sounds quite strongly like some believers’ experience of being able to talk to God and hear Him answer back is a manifestation of the same phenomenon. A while back, gwern was pasting excerpts from a book which talked about religious communities where the ability to talk with God was considered a skill that you needed to hone with regular practice. That sounds strongly reminiscent of this: talk to God long enough, and eventually you’ll get back an answer—from an emulated mind that aligns itself with the preconceived traits you give it.
I browsed around the tulpa community some more, and found some mentions of “servitors”, which have the same mental recall abilities (and apparently better access to current information—some people there claim to have made “status bars” projected on top of their vision), but the community doesn’t consider them sentient. This forum has had several conversations about them. The people there tend to (badly) apply AI ideas to servitors, but that might just be an aesthetic choice.
This would probably be a better munchkin option, since it has most of the same usefulness as a tulpa but is much less likely to be sentient. Supposedly they have a tendency to become able to pass the Turing test by accident, which is a little worrying, but that could just be the human tendency to personify everything.
In general, what I’m taking away from this is that intense visualizing can have really weird results, including hallucinations, and conscious access to information that’s usually hidden from you. I don’t have a high degree of certainty about that, though, because of the source.
I asked the subreddit about possible practical uses of tulpas, and was told that
That sounds like a very practical use to me. Many people are lonely. (I remember reading something about this; wasn’t there a guy making a tulpa of MLP’s Twilight Sparkle?)
You may be thinking of this.
No, it wasn’t a video (I shun videos), but I’m reading through /r/Tulpas and apparently they acknowledge it’s a really common thing for tulpa-enthusiasts (‘tulpists’? is there a word for them yet?) to make ponies: http://www.reddit.com/r/Tulpas/comments/14zbli/the_internet_is_laughing_at_us_and_you_shouldnt/c7hy6mk So I guess it could have been any of a lot of people.
EDIT: I find the religious connection very interesting since it reminds me of the Christian practices I’ve read about before, so I’ve asked them about it: http://www.reddit.com/r/Tulpas/comments/1e33z2/comparison_with_charismatic_christian_practices/
Ask them if they’re utilitarians.
If they say yes, suggest that according to some versions of utilitarianism they may be ethically obligated to mass produce tulpas until they run out of space in their heads.
By the same logic, you should mass produce children until you can no longer feed them all.
Islam, Catholicism, and others approve, though they’re vague about what happens once you run out of space or can no longer feed them. Sharp tongues may claim that this has already happened.
Except that tulpas apparently don’t require additional food and resources, whereas children are notoriously demanding of food.
I didn’t say I was a total utilitarian, though. But someone who accepts the repugnant conclusion probably should act this way.
Raising children is expensive. There are cheaper ways to increase the population.
Ok, but then it’s no longer “the same logic.” Tulpas are free!
That is not free.
This seems like a non sequitur.
Anyway, creating tulpas is presumably much cheaper than raising an actual child, for anyone. So once the low-hanging fruit of donating money to a charity that increases the actual population (or whatever) is exhausted, creating tulpas will be a much more efficient way of increasing the population, assuming they ‘count’ in the utility function separately and everything.
Or even better, do sperm donation. You’re out maybe a few score hours at worst, for the chance of getting scores to hundreds (yes, really) of children. Compare that to a tulpa, where the guides on Reddit are estimating something like 100 hours to build up a reasonable tulpa, or raising a kid yourself (thousands of hours?).
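Using the thread’s own rough figures (all guesses: read “a few score hours” as 60 hours, “scores to hundreds of children” as 50, and raising a child as 5,000 hours), the hours-per-new-person comparison comes out roughly as

$$\frac{60 \text{ hr}}{50 \text{ children}} \approx 1.2 \text{ hr/person (donation)}, \quad \frac{100 \text{ hr}}{1 \text{ tulpa}} = 100 \text{ hr/person}, \quad \frac{5000 \text{ hr}}{1 \text{ child}} = 5000 \text{ hr/person},$$

so on these numbers donation beats tulpa-creation by roughly two orders of magnitude per new person, and tulpa-creation beats child-rearing by more than one.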
But someone still has to raise the kid at some point, and besides, not everyone can make sperm.
I’m not sure that sperm banks have an oversupply; apparently England has something of a shortage, due to its questionable decision to ban anonymous donation, which is why our David Gerard reports that it was very easy to do, even though he’s old enough that he wouldn’t even be considered in the USA, as far as I can tell.
It’s possible to donate eggs, though it’s not nearly as much fun.
Not everyone is fertile. I can’t make either, currently.
But my point is that someone still has to take the cost of raising the child. So a utilitarian might try to convince more people to make tulpas instead of making more babies.
They wouldn’t otherwise be working to increase the population, so the cost is negligible.
But someone can. Pay them to do it.
I just said there are cheaper ways to increase the population. You have to compare it to them. How does it compare to sperm donation? Saving lives?
I don’t think additional sperm donors will increase the population—I don’t think lack of donors is the bottleneck.
Saving lives probably doesn’t either, if the demographic transition model is true. At least, saving child lives probably results in lower birthrates—perhaps saving adults doesn’t affect birthrate.
Depends on the country.
I’m told there are areas where it’s illegal to get paid to “donate” sperm. I think it’s a bottleneck there.
Relevant to this topic: Keith Johnstone’s ‘Masks’. It would be better to read the relevant section in his book “Impro” for the whole story (I got it at my university library) but this collection of quotes followed by this video should give enough of an introduction.
The idea is that while the people wear these Masks, they are able to become a character with a personality different from the actor’s original. The actor doesn’t feel as if they are controlling the character. That being said, it doesn’t happen immediately: it can take a few sessions for the actor to get the feel for the thing. The other thing is that the Masks usually have to learn to talk (albeit at an advanced pace), eventually taking on the vocabulary of their host. It’s very interesting reading, to say the least.
I can’t imagine that your ROI would be positive though.