This is fascinating. I’m rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications though—with what we see with split brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa’s memories aren’t being confabulated on the spot, it would suggest that the host would lose the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don’t want to risk permanently decreasing my intelligence.
I do worry about the ethical implications though—with what we see with split brain patients, it seems plausible that a tulpa may actually be a separate person.
So, “Votes for tulpas” then! How many of them can you create inside one head?
The next stage would be “Vote for tulpas!”.
Getting a tulpa elected as president using the votes of other tulpas would be a real munchkin coup...
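I’ve been wondering if the headaches people report while forming a tulpa are caused by spending more mental energy than normal.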
You should get one of the occult enthusiasts to check if Tulpas leave ghosts ;)
More seriously, I suspect the brain is already capable of this sort of thing—dreams, for example—even if it’s usually running in the background being your model of the world or somesuch.
It’s a waste of time at best, and inducing psychosis at worst. (Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.)
You know what it’s called when you hear voices giving you “advice”? Paranoid schizophrenia. Outright visual hallucinations?
What’s next, using magic mushrooms to speed the process? Yes, you can probably teach yourself to become actually insane, but why would you?
You know what it’s called when you hear voices giving you “advice”? Paranoid schizophrenia. Outright visual hallucinations?
Sounds like the noncentral fallacy. That you are somewhat in control, and that the tulpa will leave you alone (at least temporarily) if asked, seem like relevant differences from the more central cases of mental illness.
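Your reply sounds like special pleading using the fallacy fallacy. Of course you can induce mental illness in yourself if you try hard enough.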
It would be if I was saying we should ignore the similarity to mental illness altogether. I’m just saying it’s different enough from typical cases to warrant closer examination.
Well, “getting advice from / interacting with a hallucinated person with his own personality” certainly fits the “I hallucinate voices telling me to do something” template much better than “not getting advice from / not interacting with a hallucinated person with his own personality”, no?
There is no way that hallucinated persons talking to you are classified as anything other than part of a mental illness, except when brought on by e.g. drug use. The DSM-IV offers no exceptions for the “tulpa” community …
Yes, but the operative question here isn’t whether it’s mental illness, it’s whether it’s beneficial. Similarity to harmful mental illnesses is a reason to be really careful (having a very low prior probability of anything that fits the “mental illness” category being a good thing), but it’s not a knockdown argument.
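To spell out the kind of update being gestured at here, a minimal sketch in Python. Every number below is an illustrative assumption of mine, not anyone’s actual estimate; the point is only that resemblance to mental illness can push the probability of “beneficial” very low without making it zero.

```python
# Sketch of the point above: resemblance to mental illness is strong
# evidence against a practice being beneficial, but not a knockdown
# argument. All probabilities here are made-up illustrations.

prior_beneficial = 0.30        # assumed prior: an arbitrary mental practice is beneficial
p_resemble_given_good = 0.05   # assumed: P(resembles mental illness | beneficial)
p_resemble_given_bad = 0.60    # assumed: P(resembles mental illness | harmful)

# Bayes' rule for P(beneficial | resembles mental illness)
posterior = (p_resemble_given_good * prior_beneficial) / (
    p_resemble_given_good * prior_beneficial
    + p_resemble_given_bad * (1 - prior_beneficial)
)

print(f"{posterior:.3f}")  # 0.034 -- pushed very low, but not to zero
```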
If we accept psychology’s rule that a mental trait is only an illness if it interferes with your life (meaning a moderate to large negative effect on a person’s life, as I understand it), then something being a mental illness is a knockdown argument that it is not beneficial. But in that case, you have to prove that the thing has a negative effect on the person’s life before you can know that it is a mental illness. (See also http://lesswrong.com/lw/nf/the_parable_of_hemlock/.)
There’s only so much brain to go around. The brain, being for the most part a larger version of an australopithecus brain, already has trouble seeing itself as a whole (just look at those “akrasia” posts, where you can see people’s talkative parts of the brain disown the decisions made by the decision-making parts). Why do you expect anything but detrimental effects from deepening the failure of the brain to work as a whole?
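Could you expand on this, please? I’m not sure I’m familiar with the failure mode you seem to be pattern-matching to.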
The point is that when someone “hears voices”—which do not respond to the will in the same way internal monologues do—there are no demons and there is no new brain added. It is existing brain regions involved in the internal monologue failing to integrate properly with the rest. Less dramatically, when people claim they e.g. want to go on a diet but are mysteriously unable to—their actions responding not to what they think is their will but to what they think is not their will—it’s the regions which make decisions about food intake not integrating with the regions that do the talking (proper integration results either in the diet or in the absence of the belief that one wants to be on a diet). The bottom line is, the brain is not a single CPU of some kind. It is a distributed system whose parts are capable of being in conflict, to the detriment of the well-being of the whole.
So … you’re worried this might increase akrasia? I guess I can see how they might be in the same category, but I don’t think the same downsides apply. Do they?
The point with akrasia was to illustrate that more than one volition inside one head isn’t even rare here to begin with. The actual issue is that, of course, you aren’t creating some demon out of nothing. You are re-purposing an existing part of your brain, one involved in the internal monologue or even mental visualization as well, making this part not integrate properly with the rest under one volition. There’s literally less of your brain under your volition.
This topic is extremely retarded. This tulpa stuff resembles mental illness. Now, you wanna show off your “rationality” according to local rules of showing off your rationality, by rejecting the simple-looking argument that it should be avoided like mental illness is. “Of course” it’s pattern matching, “noncentral fallacy” and other labels that you were taught here to give to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yeah, it is in some technical sense not mental illness. It most closely resembles one. And it is as likely to be worse as it is to be better*, its expected badness is equal to that of mental illness, and the standard line of reasoning is going to approximate utility maximization much better than this highly biased reasoning where, if it is not like mental illness, it must be better than mental illness, or worse, depending on which arguments pop into your head easier. In good ol’ caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, and then eat another mushroom that looks quite similar to the first, but is of course a different mushroom, in the sense that it’s not the exact same physical mushroom body, and get awfully sick, and then do it again, and die.
Let’s suppose it was self-inflicted involuntary convulsion fits, just to pick an example where you’d not feel so much like demonstrating some sort of open-mindedness. Now the closest thing would have been real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable by one illness or another.
If it’s self-inflicted, for example, standard treatments might not work.
There’s literally less of your brain under your volition.
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
This tulpa stuff resembles mental illness.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to them with the collection of syllables “men-tal-il-nes”.
Now, you wanna show off your “rationality” according to local rules of showing off your rationality, by rejecting the simple-looking argument that it should be avoided like mental illness is. “Of course” it’s pattern matching, “noncentral fallacy” and other labels that you were taught here to give to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yeah, it is in some technical sense not mental illness. It most closely resembles one. And it is as likely to be worse as it is to be better*, its expected badness is equal to that of mental illness, and the standard line of reasoning is going to approximate utility maximization much better than this highly biased reasoning where, if it is not like mental illness, it must be better than mental illness, or worse, depending on which arguments pop into your head easier. In good ol’ caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, and then eat another mushroom that looks quite similar to the first, but is of course a different mushroom, in the sense that it’s not the exact same physical mushroom body, and get awfully sick, and then do it again, and die.
Wow.
EDIT: OK, I should probably respond to that properly. Analogies are only useful when we don’t have better information about something’s effects. Bam, responded.
Let’s suppose it was self-inflicted involuntary convulsion fits, just to pick an example where you’d not feel so much like demonstrating some sort of open-mindedness. Now the closest thing would have been real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
“Convulsion fits” are, I understand, painful and dangerous. Something like alien hand syndrome seems more analogous, but unfortunately I can’t really think of any benefits it might have, so naturally the expected utility comes out negative.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable by one illness or another.
Could well be. Illnesses are capable of having beneficial side-effects, just by chance, although obviously it’s easier to break things than improve them with random interference.
If it’s self-inflicted, for example, standard treatments might not work.
If you had looked into the topic, you would know the process is reversible.
If you had looked into the topic, you would know the process is reversible.
Are we sure there even is a process? The Reddit discussions are fascinating, but how credible are they? Likewise Alexandra David-Néel’s account of creating one. All very interesting-if-true, but...
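WARNING: POTENTIAL MEMETIC HAZARD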
I’ve kinda been avoiding this due to the potential correlation between my magickal experimentation in my teens/twenties and my later-life mental health difficulties, but I feel like people are wandering all over the place already, and I’d at least like to provide a few guideposts.
Yes, there are processes. Or at least, there are various things that are roughly like processes, although very few of them are formalized (if you want formalization, look to Crowley). Rather than provide yet another anecdotal account, let me lay out some of the observations I made during my own experimentation. My explicit goal when experimenting was to attempt to map various wacky “occult” or “pseudoscientific” theories to a modern understanding of neuroscience, and thus explain away as much of the Woo as possible. My hope was that what was left would provide a reasonable guide to “hacking my wetware”.
When you’re doing occult procedures, what (I think, @p > 0.7) you’re essentially doing is performing code injection attacks on your own brain. Note that while the brain is a neural network rather than a serial von Neumann-type (or Turing-type) machine, many neural networks tend to converge towards emulating finite state machines, which can be modeled as von Neumann-type machines—so it’s not implausible (@p ~= 0.85) that processes analogous to code injection attacks might work.
The specific areas of the brain that seem to be targeted by the rituals that create a tulpa are the right inferior parietal lobe and the temporoparietal junction—which seem to play a key role in maintaining one’s sense-of-self / sense-of-agency / sense-of-ownership (i.e., the illusion that there is an “I” and that that “I” is what is calling the shots when the mind makes a decision or the body performs an action)—as well as the area of the inferior parietal cortex and postcentral gyrus that participates in so-called “mirror neuron” processes. You’ll note that Crowley, for example, goes to great lengths describing rather brutal initiatory ordeals designed specifically to degrade the practitioner’s sense-of-self—Crowley’s specific method was tabooing the word ‘I’, and slashing his own thumb with a razor whenever he slipped.
NOTE: Tabooing “I” is a VERY POWERFUL technique, and unlocks a slew of potential mindhacks, but (to stretch our software metaphor to the breaking point) you’re basically crashing one of your more important pieces of firewall software so you can do it. ARE YOU SURE THAT’S WHAT YOU WANT TO BE DOING? You literally have no idea how many little things constantly assault the ego / sense of self-worth every minute that you don’t even register because your “I” protects you. A good deal of Crowley’s (or any good initiatory Master’s) training involves preparing you to protect yourself once you take that firewall down—older works will couch that as “warding you against evil spirits” or whatever, but ultimately what we’re talking about is the terrifying and relentless psychological onslaught that is raw, unfiltered reality (or, to be more accurate, “rawer, less-filtered reality”).
3A) ARE YOU SURE THAT IS WHAT YOU WANT TO DO TO YOUR BRAIN?
Once your “I” crashes, you can start your injection attacks. Basically, while the “I” is rebooting, you want to slip stuff into your sensory stream that will disrupt the rebooting process enough to spawn two separate “I” processes—essentially, you need to confuse your brain into thinking that it needs to spawn a second “I” while the first one is still running, confuse each “I” into not noticing that the other one is actually running on the same hardware, and then load a bunch of bogus metadata into one of the “I”s so that it develops a separate personality and set of motivations.
Luckily, this is easier than it sounds, because your brain is already used to doing exactly this up in the prefrontal cortex—this is the origin of all that BS “right brain” / “left brain” talk that came from those fascinating epilepsy studies where they severed people’s corpora callosa. See, you actually have two separate “awareness” processes running already; it’s just that your corpus callosum normally keeps them sufficiently synchronized that you don’t notice, and you only have a single “I” providing a consistent narrative, so you never notice that you’re actually two separate conscious processes cooperating and competing for goal-satisfaction.
Anyway, hopefully this has been informative enough that dedicated psychonauts can use it as a launching point, while obfuscated enough that people won’t be casually frying their brains. This ain’t rocket science yet.
You linked to the local-jargon version of word-tabooing, but what you describe sounds more like the standard everyday version of “tabooing” something. Which was intended?
… huh. I don’t know about hacking the “I”, all I’ve seen suggested is regular meditation and visualization. Still, interesting stuff for occult buffs.
Also, I think I’ve seen accounts of people creating two or three tulpas (tulpae?), with no indication that this was any different to the first; does this square with the left-brain/right-brain bit?
EDIT: I just realized I immediately read a comment with WARNING MEMETIC HAZARD at the top. Hum.
Fair point. OK, the fact that it’s reversible seems about as agreed on as any facet of this topic—more so than many of them. I’m inclined to believe this isn’t a hoax or anything due to the sheer number of people claiming to have done it and (apparent?) lack of failed replications. None of this is accepted science or anything, there is a certain degree of risk from Side Effects No-one Saw Coming and hey, maybe it’s magic and your soul will get nommed (although most online proponents are careful to disavow claims that it’s anything but an induced hallucination.)
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
They ought to be at least somewhat concerned that they have less brain for their own walking around the house.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to them with the collection of syllables “men-tal-il-nes”.
You don’t know? It’s loss in “utility”. When you have an unknown item which, out of the items that you know of, most closely resembles a mushroom whose consumption had very large negative utility, the expected utility of consuming the unknown toxic-mushroom-like item is also negative (unless totally starving and there’s literally nothing else one could seek for nourishment). Of course, in today’s environment, people rarely face the need to make such inferences themselves—society warns you of all the common dangers, uncommon dangers are by definition uncommon, and language hides the inferential nature of categorization from view.
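The reference-class computation being described can be made concrete. A minimal sketch in Python, where the probabilities and utilities are my own illustrative assumptions rather than anything from this thread:

```python
# Sketch of the mushroom argument above: if the unknown item's nearest
# reference class has large negative utility, eating it only wins when
# the alternative (starvation) is even worse. Numbers are illustrative.

def expected_utility(p_toxic: float, u_toxic: float, u_food: float) -> float:
    """Expected utility of consuming a mushroom-like unknown item."""
    return p_toxic * u_toxic + (1 - p_toxic) * u_food

eu_eat = expected_utility(p_toxic=0.8, u_toxic=-100.0, u_food=5.0)  # -79.0

eu_skip = 0.0       # leave it alone, find something else
eu_starve = -200.0  # assumed utility of having literally no other food

print(eu_eat > eu_skip)    # False: with any alternative, don't eat it
print(eu_eat > eu_starve)  # True: "unless totally starving"
```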
If you had looked into the topic, you would know the process is reversible.
The cases I’ve heard of which do not look like people attention-seeking online are associated with severe mental illness. Of course the direction of the causation is somewhat murky in any such issue, but the necessity of seeing a doctor doesn’t depend on the direction of the causation here.
They ought to be at least somewhat concerned that they have less brain for their own walking around the house.
Ah, right. I suppose that would depend on the exact mechanisms involved, yeah.
Are children who have imaginary friends found to have subnormal cognitive development?
You don’t know? It’s loss in “utility”. When you have an unknown item which, out of the items that you know of, most closely resembles a mushroom whose consumption had very large negative utility, the expected utility of consuming the unknown toxic-mushroom-like item is also negative (unless totally starving and there’s literally nothing else one could seek for nourishment).
So please provide evidence that this feature is shared by the thing under discussion, yeah?
The cases I’ve heard of which do not look like people attention-seeking online are associated with severe mental illness.
Source? This doesn’t match my experiences, unless you draw an extremely wide definition of “attention-seeking online” (I assume you meant to imply people who were probably making it up?)
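This is the argument to adopt a religion even though you know it’s epistemically irrational.
You’re confusing hallucinations with delusions, I think.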
I’m assuming that a rationalist who made tulpas would be aware that they weren’t really separate people (since a lot of people in the tulpa community say they don’t think they’re separate people, being able to see them probably doesn’t require thinking they’re separate from yourself), so it wouldn’t require having false beliefs or beliefs in beliefs in the way that religion would.
If adopting a religion really is the instrumentally best course of action… why not? But for a consequentialist who values truth for its own sake, or would be hindered by being confused about their beliefs, religion actually wouldn’t be a net benefit.
One can adopt a religion in many ways. My comment’s siblings warn against adopting a religion’s dogma, but my comment’s parent suggests adopting a religion’s practices. (There are other ways, too, like religious identity.) Traditionally, one adopts all of these as a package, but that’s not necessary.
You don’t separately classify each type of voice hallucinated with, e.g., schizophrenia. You could for example apply your argument to say “well, is the voice threatening to kill you only if you don’t study for your test? If so, isn’t the net effect beneficial, and as such it’s not really a mental illness? If you like being motivated by your voices, you don’t suffer from schizophrenia, that’s only for people who dislike their voices.”
I certainly cannot prove that there are no situations in which hallucinating imaginary people giving you advice would be net beneficial; in fact, there certainly are situations in which any given potential mental illness may be beneficial. There have been studies about certain potential mental illnesses being predominant (or at least overrepresented) in certain professions, sometimes to the professional’s benefit (also: taking cocaine may be beneficial. Certain tulpas may be beneficial.).
Who knows, maybe an unknown grand-uncle will leave a fortune to you, predicated on you being a drug-addict. In which case being a drug-addict would have been beneficial.
People dabble in alcohol to get a social edge, they usually refrain from heroin. Which reference class is a tulpa most like?
You can put a “Your Mileage May Vary” disclaimer on any advice, but actually hallucinating persons who then interact with you seems like it should belong in the DSM (where it is) way more than it should belong in a self-help guide.
Maybe when plenty of people have used tulpas for decades, and a representative sample of them can be used to prove their safety, there will be enough evidence to switch the reference class, to introduce a special case in the form of “hallucinations are a common symptom of schizophrenia, except tulpas”. Until then, the default case would be using the reference class of “effects of hallucinating people”, which is presumed harmful unless shown to be otherwise.
Maybe when plenty of people have used tulpas for decades
Never happen if no-one tries. I agree that it looks dangerous, but this is the ridiculous munchkin ideas thread, not the boring advice or low-hanging fruit threads.
Yesterday, upon the stair,
I met a man who wasn’t there
He wasn’t there again today
I wish, I wish he’d go away...
You could for example apply your argument to say “well, is the voice threatening to kill you only if you don’t study for your test? If so, isn’t the net effect beneficial, and as such it’s not really a mental illness? If you like being motivated by your voices, you don’t suffer from schizophrenia, that’s only for people who dislike their voices.”
If you’re going to define schizophrenia as voices that are bad for the person, then that would mean that it’s only for people who dislike their voices (and are not deluded about whether the voices are a net benefit).
Voices threatening to kill you if you don’t achieve your goals also doesn’t seem like a good example of a net benefit—that would cause a lot of stress, so it might not actually be beneficial. It’s also not typical behavior for tulpas, based on the conversations in the tulpa subreddit. Voices that annoy you when you don’t work or try to influence your behavior with (simulated?) social pressure would probably be more typical.
Anyway… I’m trying to figure out where exactly we disagree. After thinking about it, I think I “downvote” mental disorders for being in the “bad for you” category rather than the “abnormal mental things” category, and the “mental disorder” category is more like a big warning sign to check how bad it is for people. Tulpas look like something to be really, really careful about because they’re in the “abnormal mental things” category (and also the “not well understood yet” category), but the people on the tulpa subreddit don’t seem unhappy or frustrated, so I haven’t added many “bad for you” downvotes.
I’ve also got some evidence indicating that they’re at least not horrible:
People who have tulpas say they think it’s a good thing
People who have tulpas aren’t saying really worrying things (like suggesting they’re a good replacement for having friends)
The process is somewhat under the control of the “host”—progressing from knowing what the tulpa would say to auditory hallucinations to visual ones seems to take a lot of effort for most people
No one is reporting having trouble telling the tulpa apart from a real person or non-mental voices (one of the problematic features of schizophrenia is that the hallucinations can’t be differentiated from reality)
I’ve already experienced some phenomena similar to this, and they haven’t really affected my wellbeing either way. (You know how writers talk about characters “taking on a life of their own”, so writing dialog feels more like taking dictation and the characters might refuse to go along with a pre-planned plot? I’ve had some of this. I’ve also (very rarely) had characters spontaneously “comment” on what I’m doing or reading.)
This doesn’t add up to enough to make me anywhere near certain—I’m still very suspicious about this being safe, and it seems like it would have to be taking up some of your cognitive resources. But it might be worth investigating (mainly the non-hallucination parts—being able to see the tulpa doesn’t seem that useful), since human brains are better at thinking about people than most other things.
Actually, the DSM does have an exception for “culturally accepted” or “non-bizarre” delusions. It’s pretty subjective and I imagine in practice the exceptions granted are mostly religious in nature, but there’s definitely a level of acceptance past which the DSM wouldn’t consider having a tulpa to be a disorder at all.
Furthermore, hallucinations are neither necessary nor sufficient for a diagnosis of schizophrenia. Disorganized thought, “word salad”, and flat affect are just as important, and a major disruption to the patient’s life must also be demonstrated.
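Well, if you insist, here goes: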
(A non-bizarre delusion would be believing that your guru was raised from the dead, the exception for “culturally accepted response pattern” isn’t for tulpa hallucinations, it is so that someone who feels the presence of god in the church, hopefully without actually seeing a god hallucination, isn’t diagnosed.)
Here are the criteria for e.g. 295.40 Schizophreniform Disorder:
One of the following criteria, if delusions are judged to be bizarre, or hallucinations consist of hearing one voice participating in a running commentary of the patient’s actions or of hearing two or more voices conversing with each other: Delusions, Hallucinations, (...)
Rule out of Schizoaffective or Mood Disorders
Disturbance not due to drugs, medication, or a general medical condition (e.g. delirium tremens)
Duration of an episode of the disorder (hallucinations) one to six months
Criteria for 298.80: Brief Psychotic Disorder
Presence of one (or more) of the following symptoms: hallucinations (...)
Duration between one day and one month
Hallucination not better accounted for by Schizoaffective Disorder, Mood Disorder With Psychotic Features, Schizophrenia
Criteria for 298.90: Psychotic Disorder NOS (Not Otherwise Specified):
Psychotic symptomatology (e.g. hallucinations) that does not meet the criteria for any specific Psychotic Disorder. Examples include persistent auditory hallucinations in the absence of any other features.
Where are the additional criteria for that? Wait, there are none!
In summary: You tell a professional about that “friend” you’re seeing and hearing, you either get 295.40 Schizophreniform Disorder or 298.80: Brief Psychotic Disorder depending on the time frame, or 298.90: Psychotic Disorder NOS (Not Otherwise Specified) in any case. Congratulations!
Fair enough, if I had an imaginary friend I wouldn’t want to report it to a shrink. I got hung up on technicalities and the point I should have been focusing on is whether entertaining one specific delusion is likely to result in other symptoms of schizophrenia that are more directly harmful.
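See my take on that here.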
Many people suffering from hearing voices etc. do realize those “aren’t real”, which doesn’t in itself enable them to turn them off. If I were confident that you can untrain hallucinations (and, strictly speaking, thus get rid of a Psychotic Disorder NOS just by choosing to do so) and switch them off with little effort, I would find tulpas to be harmless.
Not knowing much of anything about the tulpa community, a priori I would expect that a significant fraction of “imaginary friends” are more of a vivid imagination type of phenomenon, and not an actual visual and auditory hallucination, which may be more of an embellishment for group-identification purposes.
I think implicit in that question was, ‘and how does it differ?’
A friend of mine has a joke in which he describes any arbitrary Magic card (and later, things that weren’t Magic cards) by explaining how it differed from an Ornithopter (Suq’Ata Lancer is just like an Ornithopter except it’s red instead of an artifact, and it has haste and flanking instead of flying, and it costs 2 and a red instead of 0, and it has 2 power instead of 0. Yup, just like an Ornithopter). The humor lay in the anti-compression—the descriptions were technically accurate, but rather harder to follow than they needed to be.
Eradicating the humor, you could alternately describe a Suq’Ata Lancer as a Gray Ogre with haste and flanking. The class of ‘cards better than Gray Ogre’ is a reference class that many Magic players would be familiar with.
Trying to get a handle on the idea of the tulpa, it’s reasonable to ask where to start before you try comparing it to an ornithopter.
Why would “which reference class is x most like” be a “failure mode”? Don’t just word-match to the closest post including the phrase “reference class” which you remember.
When you’re in a dark alley, and someone pulls a gun and approaches you, would it be a “failure mode” to ask yourself what reference class most closely matches the situation, then conclude you’re probably getting mugged?
Saying “uFAI is like Terminator!”—“No, it’s like Matrix!” would be reference class tennis, “which reference class is uFAI most like?” wouldn’t be.
No, but skimming it the content seems common-sensical enough. It doesn’t dissolve the correlation with “generally being harmful”.
It’s not a “fits the criteria of a psychological disease, case closed” kind of thing, but pattern matching to schizophrenia certainly seems to be evidence of being potentially harmful more than not, don’t you agree?
Similar to if I sent you a “P=NP proof” titled document atrociously typeset in MS Word, you could use pattern matching to suspect there’s something other than a valid P=NP proof contained even without seeing the actual contents of that specific proof.
I agree it’s sensible to be somewhat wary of inducing hallucinations, but you’re talking with a level of confidence in the hypothesis that it will in fact harm you to induce hallucinations in this particular way that I don’t think is merited by what you know about tulpas. Do you have an actual causal model that describes how this harm might come about?
(There often is no need for an actual causal model to strongly believe in an effect; correlation is sufficient. Some of the most commonly used pharmaceutical substances had/still have an unknown causal mechanism for their effect. Still, I do have one in this case:)
You are teaching your brain to create false sensory inputs, and to assign agency to those false inputs where none is there.
Once you’ve broken down those barriers and overcome your brain’s inside-outside classifier—training which may be in part innate and in part established in your earliest infancy (“If I feel this, then there is something touching my left hand”)—there is no reason the “advice” / interaction cannot turn harmful or malicious, no reason the voices cannot become threatening.
I find it plausible that the sort of people who can train themselves to actually see imaginary people (probably a minority even in the tulpa community) already had a predisposition towards schizophrenia, and have the bad fortune to trigger it themselves. Or that late-onset schizophrenia individuals mislabel themselves and enter the tulpa community. What’s the harm:
Even if beneficial at first, there is no easy treatment or “reprogramming” to reestablish the mapping of what’s “inside”, part of yourself, and “outside”, part of an external world. Many schizophrenics know the voices “aren’t real”. Doesn’t help them in re-raising the walls. Indeed, there often is a progression with schizophrenics, of hearing one voice, to hearing more voices, to e.g. “others can read my thoughts”.
As a tulpa-ist, you’ve already dissociated part of yourself and assigned it to the environment. Let me reiterate that I am not concerned with you having an “inner Kawoomba” that you model, but with actually seeing / hearing such a person. Will you suddenly find yourself with more than one hallucinated person walking around with you? Maybe someone you start to argue with? Someone you can’t turn off?
Slippery slope arguments (even for short slopes) aren’t perfectly convincing, but I just see the potential harm weighed against the potential benefit (in my estimation low, you can teach yourself to analytically shift your perspective without hacking your sensory input) as very one sided. If tulpas conferred a doubled life-span, my conclusion would be different …
If you’re familiar with the Sorcerer’s Apprentice:
This is a much stronger and better argument than trying to argue from DSM definitions. “Be cautious about imposing mental states that can affect your decision-making” is a good general rule, and yet tons of people happily drink, take drugs, and meditate. You can say each and all of these things have risks, but people don’t normally say you shouldn’t drink because it makes you act like you have lower IQ or someone who’s got a motor control problem in their brain.
people don’t normally say you shouldn’t drink because it makes you act like you have lower IQ or someone who’s got a motor control problem in their brain
Well, that’s why I don’t take alcohol. (But agreed, people don’t normally say that. And I also agree that Kawoomba seems to be overstating the danger of tulpas.)
Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex.
This also sounds like an argument against IFS. I don’t think it holds water. Accessing the same data as you do but using a different algorithm to process it seems valuable. (This is under the assumption that tulpas work at all.)
The benefits from analytically shifting your point of view, or from using different approaches in different situations, certainly don’t necessitate actually hallucinating people talking to you. (Hint: only the latter finds its way into being a symptom of various psych disorders.)
“You need to hallucinate voices / people to get the benefit of viewing a situation from different angles” is not an accurate inference from my argument, nor a fair description of IFS, which as far as I know doesn’t include sensory hallucinations.
(Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.)
Source?
I mean, there are, as you say, obvious “right habits” analogs of this that get results—which would seem to invalidate the first quoted sentence—but I don’t see why pushing it “further” couldn’t possibly generate better results.
This is fascinating. I’m rather surprised that people seem to be able to actually see their tulpa after a while. I do worry about the ethical implications though—with what we see with split brain patients, it seems plausible that a tulpa may actually be a separate person. Indeed, if this is true, and the tulpa’s memories aren’t being confabulated on the spot, it would suggest that the host would lose the use of the part of their brain that is running the tulpa, decreasing their intelligence. Which is a pity, because I really want to try this, but I don’t want to risk permanently decreasing my intelligence.
So, “Votes for tulpas” then! How many of them can you create inside one head?
The next stage would be “Vote for tulpas!”.
Getting a tulpa elected as president using the votes of other tulpas would be a real munchkin coup...
I’ve been wondering if the headaches people report while forming a tulpa are caused by spending more mental energy than normal.
You should get one of the occult enthusiasts to check if Tulpas leave ghosts ;)
More seriously, I suspect the brain is already capable of this sort of thing—dreams, for example—even if it’s usually running in the background being your model of the world or somesuch.
It’s a waste of time at best, and inducing psychosis at worst. (Waste of time because the “tulpa”—your hallucination—has access to the same data repository you use, and doesn’t run on a different frontal cortex. You can teach yourself the right habits without also teaching yourself to become mentally ill.)
You know what it’s called when you hear voices giving you “advice”? Paranoid schizophrenia. Outright visual hallucinations?
What’s next, using magic mushrooms to speed the process? Yes, you can probably teach yourself to become actually insane, but why would you?
Sounds like the noncentral fallacy. That you are somewhat in control, and that the tulpa will leave you alone (at least temporarily) if asked, seem like relevant differences from the more central cases of mental illness.
Your reply sounds like special pleading using the fallacy fallacy. Of course you can induce mental illness in yourself if you try hard enough.
It would be if I was saying we should ignore the similarity to mental illness altogether. I’m just saying it’s different enough from typical cases to warrant closer examination.
Well, “getting advice from / interacting with a hallucinated person with his own personality” certainly fits the “I hallucinate voices telling me to do something” template much better than “not getting advice from / not interacting with a hallucinated person with his own personality”, no?
There is no way that hallucinated persons talking to you are classified other than as part of a mental illness, other than if brought on by e.g. drug use. The DSM IV offers no exceptions for the “tulpa” community …
Yes, but the operative question here isn’t whether it’s mental illness, it’s whether it’s beneficial. Similarity to harmful mental illnesses is a reason to be really careful (having a very low prior probability of anything that fits the “mental illness” category being a good thing), but it’s not a knockdown argument.
If we accept psychology’s rule that a mental trait is only an illness if it interferes with your life (meaning moderate to large negative effect on a person’s life, as I understand it), then something being a mental illness is a knockdown argument that it is not beneficial. But in that case, you have to prove that the thing has a negative affect on the person’s life before you can know that is a mental illness. (See also http://lesswrong.com/lw/nf/the_parable_of_hemlock/.)
There’s only that much brain to go around with, the brain, being for the most part a larger version of australopithecus brain, as it is can have trouble seeing itself as a whole (just look at that “akrasia” posts where you can see people’s talkative parts of the brain disown the decisions made by the decision-making parts). Why do you expect anything but detrimental effects from deepening the failure of the brain to work as a whole?
Could you expand on this, please? I’m not sure I’m familiar with the failure mode you seem to be pattern-matching to.
The point is that when someone “hears voices”—which do not respond to the will in the same way in which internal monologues do, there’s no demons, there’s no new brain added. It is existing brain regions involved in the internal monologue failing to integrate properly with the rest. Less dramatically, when people claim they e.g. want to get on a diet but are mysteriously unable to—their actions do not respond to what they think is their will but instead respond to what they think is not their will—it’s the regions which make decisions about food intake not integrating with the regions that do the talking (Proper integration either results in the diet or absence of the belief that one wants to be on a diet). The bottom line is, brain is not a single CPU of some kind. It is a distributed system parts of which are capable of being in conflict, to the detriment of the well being of the whole.
So … you’re worried this might increase akrasia? I guess I can see how they might be in the same category, but I don’t think the same downsides apply. Do they?
The point with akrasia was to illustrate that more than 1 volition inside 1 head isn’t even rare here to begin with. The actual issue is that, of course, you aren’t creating some demon out of nothing. You are re-purposing existing part of your brain, involved in the internal monologue or even mental visualization as well, making this part not integrate properly with the rest under one volition. There’s literally less of your brain under your volition.
This topic is extremely retarded. This tulpa stuff resembles mental illness. Now, you wanna show off your “rationality” according to local rules of showing off your rationality, by rejecting the simple looking argument that it should be avoided like mental illness is. “Of course” it’s pattern matching, “non central fallacy” and other labels that you were taught here to give to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: Yeah, it is in some technical sense not mental illness. It most closely resembles one. And it is as likely to be worse as it is likely to be better*, and it’s expected badness is equal to that of mental illness, and the standard line of reasoning is going to approximate utility maximization much better than this highly biased reasoning where if it is not like mental illness it must be better than mental illness, or worse, depending to which arguments pop into your head easier. In good ol caveman days, people with this reasoning fallacy, they would eat a mushroom, get awfully sick, and then eat another mushroom that looks quite similar to the first, but is a different mushroom of course, in the sense that it’s not the exact same physical mushroom body, and get awfully sick, and then do it again, and die.
Let’s suppose it was self inflicted involuntary convulsion fits, just to make an example where you’d not feel so much like demonstrating some sort of open mindness. Now the closest thing would have been real convulsion fits, and absent other reliable evidence either way expected badness of self inflicted convulsion fits would clearly be equal.
Also, by the way, what ever mental state you arrive at by creating a tulpa, is unlikely to be a mental state not achievable by one or the other illness.
if its self inflicted, for example standard treatments might not work.
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to them with the collection of syllables “men-tal-il-nes”.
Wow.
EDIT: OK, I should probably respond to that properly. Analogies are only useful when we don’t have better information about something’s effects. Bam, responded.
“Convulsion fits” are, I understand, painful and dangerous. Something like alien hand syndrome seems more analogous, but unfortunately I can’t really think of any benefits it might have, so naturally the expected utility comes out negative.
Could well be. Illnesses are capable of having beneficial side-effects, just by chance, although obviously it’s easier to break things than improve them with random interference.
If you had looked into the topic, you would know the process is reversible.
Are we sure there even is a process? The Reddit discussions are fascinating, but how credible are they? Likewise Alexandra David-Néel’s account of creating one. All very interesting-if-true, but...
WARNING: POTENTIAL MEMETIC HAZARD
I’ve kinda been avoiding this due to the potential correlation between my magickal experimentation in my teens/twenties and my later-life mental health difficulties, but I feel like people are wandering all over the place already, and I’d at least like to provide a few guideposts.
Yes, there are processes. Or at least, there are various things that are roughly like processes, although very few of them are formalized (if you want formalization, look to Crowley). Rather than provide yet another anecdotal account, let me lay out some of the observations I made during my own experimentation. My explicit goal when experimenting was to attempt to map various wacky “occult” or “pseudoscientific” theories to a modern understanding of neuroscience, and thus explain away as much of the Woo as possible. My hope was that what was left would provide a reasonable guide to “hacking my wetware”.
When you’re doing occult procedures, what (I think, @p > 0.7) you’re essentially doing is performing code injection attacks on your own brain. Note that while the brain is a neural network rather than a serial von Neumann-type (or Turing-type) machine, many neural networks tend to converge towards emulating finite state machines, which can be modeled as von Neumann-type machines—so it’s not implausible (@p ~= 0.85) that processes analagous to code injection attacks might work.
The specific area of the brain that seems to be targeted by the rituals that create a tulpa are the right inferior parietal lobe and the temporoparietal junction—which seem to play a key role in maintaining one’s sense-of-self / sense-of-agency / sense-of-ownership (i.e., the illusion that there is an “I” and that that “I” is what is calling the shots when the mind makes a decision or the body performs an action), as well as the area of the inferior parietal cortex and postcentral gyrus that participate in so-called “mirror neuron” processes. You’ll note that Crowley, for example, goes through at great length describing rather brutal initiatory ordeals designed specifically to degrade the practitioner’s sense-of-self—Crowley’s specific method was tabooing the word ‘I’, and slashing his own thumb with a razor whenever he slipped.
NOTE: Tabooing “I” is a VERY POWERFUL technique, and unlocks a slew of potential mindhacks, but (to stretch our software metaphor to the breaking point) you’re basically crashing one of your more important pieces of firewall software so you can do it. ARE YOU SURE THAT’S WHAT YOU WANT TO BE DOING? You literally have no idea how many little things constantly assault the ego / sense of self-worth every minute that you don’t even register because your “I” protects you. A good deal of Crowley’s (or any good initiatory Master’s) training involves preparing you to protect yourself once you take that firewall down—older works will couch that as “warding you against evil spirits” or whatever, but ultimately what we’re talking about is the terrifying and relentless psychological onslaught that is raw, unfiltered reality (or, to be more accurate, “rawer, less-filtered reality”).
3A) ARE YOU SURE THAT IS WHAT YOU WANT TO DO TO YOUR BRAIN?
Once your “I” crashes, you can start your injection attacks. Basically, while the “I” is rebooting, you want to slip stuff into your sensory stream that will disrupt the rebooting process enough to spawn two seperate “I” processes—essentially, you need to confuse your brain into thinking that it needs to spawn a second “I” while the first one is still running, confuse each “I” into not noticing that the other one is actually running on the same hardware, and then load a bunch of bogus metadata into one of the “I”s so that it develops a separate personality and set of motivations.
Luckily, this is easier than it sounds, because your brain is already used to doing exactly this up in the prefrontal cortex—this is the origin of all that BS “right brain” / “left brain” talk that came from those fascinating epilepsy studies where they severed people’s corpus colossa. See, you actually have two separate “awareness” processes running already; it’s just that your corpus colossum normally keeps them sufficiently synchronized that you don’t notice, and you only have a single “I” providing a consistent narrative, so you never notice that you’re actually two separate conscious processes cooperating and competing for goal-satisfaction.
Anyway, hopefully this has been informative enough that dedicated psychonauts can use it as a launching point, while obfuscated enough that people won’t be casually frying their brains. This ain’t rocket science yet.
You linked to the local-jargon version of word-tabooing, but what you describe sounds more like the standard everyday version of “tabooing” something. Which was intended?
… huh. I don’t know about hacking the “I”, all I’ve seen suggested is regular meditation and visualization. Still, interesting stuff for occult buffs.
Also, I think I’ve seen accounts of people creating two or three tulpas (tulpae?), with no indication that this was any different to the fist; does this square with the left-brain/right-brain bit?
EDIT: I just realized I immediately read a comment with WARNING MEMETIC HAZARD at the top. Hum.
Fair point. OK, the fact that it’s reversible seems about as agreed on as any facet of this topic—more so than many of them. I’m inclined to believe this isn’t a hoax or anything due to the sheer number of people claiming to have done it and (apparent?) lack of failed replications. None of this is accepted science or anything, there is a certain degree of risk from Side Effects No-one Saw Coming and hey, maybe it’s magic and your soul will get nommed (although most online proponents are careful to disavow claims that it’s anything but an induced hallucination.)
They ought to be at least somewhat concerned that they have less brain for their own walking around the house.
You don’t know? It’s loss in “utility”. When you have an unknown item which, out of the items that you know of, most closely resembles a mushroom consumption of which had very huge negative utility, the expected utility of consuming the unknown toxic mushroom like item is also negative (unless totally starving and there’s literally nothing else one could seek for nourishment). Of course, in today’s environment, people rarely face the need to make such inferences themselves—society warns you of all the common dangers, uncommon dangers are by definition uncommon, and language hides the inferential nature of categorization from the view.
The cases I’ve heard which do not look like people attention seeking online, are associated with severe mental illness. Of course the direction of the causation is somewhat murky in any such issue, but necessity to see a doctor doesn’t depend on direction of the causation here.
Ah, right. I suppose that would depend on the exact mechanisms, involved, yeah.
Are children who have imaginary friends found to have subnormal cognitive development?
So please provide evidence that this feature is shared by the thing under discussion, yeah?
Source? This doesn’t match my experiences, unless you draw an extremely wide definition of “attention-seeking online” (I assume you meant to imply people who were probably making it up?)
This is the argument to adopt a religion even though you know it’s epistemically irrational.
You’re confusing hallucinations with delusions, I think.
I’m assuming that a rationalist who made tulpas would be aware that they weren’t really separate people (since a lot of people in the tulpa community say they don’t think they’re separate people, being able to see them probably doesn’t require thinking they’re separate from yourself), so it wouldn’t require having false beliefs or beliefs in beliefs in the way that religion would.
If adopting a religion really is the instrumentally best course of action… why not? But for a consequentialist who values truth for its own sake, or would be hindered by being confused about their beliefs, religion actually wouldn’t be a net benefit.
One can adopt a religion in many ways. My comment’s siblings warn against adopting a religion’s dogma, but my comment’s parent suggests adopting a religion’s practices. (There are other ways, too, like religious identity.) Traditionally, one adopts all of these as a package, but that’s not necessary.
You don’t classify each type of .e.g voice hallucinated with schizophrenia. You could for example apply your argument to say “well, is the voice threatening to kill you only if you don’t study for your test? If so, isn’t the net effect beneficial, and as such it’s not really a mental illness? If you like being motivated by your voices, you don’t suffer from schizophrenia, that’s only for people who dislike their voices.”
I certainly cannot prove that there are no situations in which hallucinating imaginary people giving you advice would not be net beneficial, in fact, there certainly are situations in which any given potential mental illness may be beneficial. There have been studies about certain potential mental illnesses being predominant (or at least overrepresented) in certain professions, sometimes to the professional’s benefit (also: taking cocaine may be beneficial. Certain tulpas may be beneficial.).
Who knows, maybe an unknown grand-uncle will leave a fortune to you, predicated on you being a drug-addict. In which case being a drug-addict would have been beneficial.
People dabble in alcohol to get a social edge, they usually refrain from heroin. Which reference class is a tulpa most like?
You can put a “Your Mileage May Vary” disclaimer to any advice, but actually hallucinating persons who then interact with you seems like it should belong in the DSM (where it is) way more than it should belong in a self-help guide.
Maybe when plenty of people have used tulpas for decades, and a representative sample of them can be used to prove their safety, there will be enough evidence to switch the reference class, to introduce a special case in the form of “hallucinations are a common symptom of schizophrenia, except tulpas”. Until then, the default case would be using the reference class of “effects of hallucinating people”, which is presumed harmful unless shown to be otherwise.
Never happen if no-one tries. I agree that it looks dangerous, but this is the ridiculous munchkin ideas thread, not the boring advice or low-hanging fruit threads.
Yesterday, upon the stair,
I met a man who wasn’t there
He wasn’t there again today
I wish, I wish he’d go away...
If you’re going to define schizophrenia as voices that are bad for the person, then that would mean that it’s only for people who dislike their voices (and are not deluded about whether the voices are a net benefit).
Voices threatening to kill you if you don’t achieve your goals also doesn’t seem like a good example of a net benefit—that would cause a lot of stress, so it might not actually be beneficial. It’s also not typical behavior for tulpas, based on the conversations in the tulpa subreddit. Voices that annoy you when you don’t work or try to influence your behavior with (simulated?) social pressure would probably be more typical.
Anyway… I’m trying to figure out where exactly we disagree. After thinking about it, I think I “downvote” mental disorders for being in the “bad for you” category rather than the “abnormal mental things” category, and the “mental disorder” category is more like a big warning sign to check how bad it is for people. Tulpas look like something to be really, really careful about because they’re in the “abnormal mental things” category (and also the “not well understood yet” category), but the people on the tulpa subreddit don’t seem unhappy or frustrated, so I haven’t added many “bad for you” downvotes.
I’ve also got some evidence indicating that they’re at least not horrible:
People who have tulpas say they think it’s a good thing
People who have tulpas aren’t saying really worrying things (like suggesting they’re a good replacement for having friends)
The process is somewhat under the control of the “host”—progressing from knowing what the tulpa would say to auditory hallucinations to visual ones seems to take a lot of effort for most people
No one is reporting having trouble telling the tulpa apart from a real person or non-mental voices (one of the problematic features of schizophrenia is that the hallucinations can’t be differentiated from reality)
I’ve already experienced some phenomena similar to this, and they haven’t really affected my wellbeing either way. (You know how writes talk about characters “taking off a life of their own”, so writing dialog feels more like taking dictation and the characters might refuse to go along with a pre-planned plot? I’ve had some of this. I’ve also (very rarely) had characters spontaneously “comment” on what I’m doing or reading.)
This doesn’t add up to enough to make me anywhere near certain—I’m still very suspicious about this being safe, and it seems like it would have to be taking up some of your cognitive resources. But it might be worth investigating (mainly the non-hallucination parts—being able to see the tulpa doesn’t seem that useful), since human brains are better at thinking about people than most other things.
Actually, the DSM does have an exception for “culturally accepted” or “non-bizarre” delusions. It’s pretty subjective and I imagine in practice the exceptions granted are mostly religious in nature, but there’s definitely a level of acceptance past which the DSM wouldn’t consider having a tulpa to be a disorder at all.
Furthermore, hallucinations are neither necessary nor sufficient for a diagnosis of schizophrenia. Disorganized thought, “word salad”, and flat affect are just as important, and a major disruption to the patient’s life must also be demonstrated.
Well, if you insist, here goes:
(A non-bizarre delusion would be believing that your guru was raised from the dead. The exception for a “culturally accepted response pattern” isn’t for tulpa hallucinations; it exists so that someone who feels the presence of God in church, hopefully without actually seeing a hallucination of God, isn’t diagnosed.)
Here are the criteria for e.g. 295.40 Schizophreniform Disorder:
Only one of the following criteria is required if delusions are judged to be bizarre, or if hallucinations consist of hearing one voice keeping up a running commentary on the patient’s actions, or of hearing two or more voices conversing with each other: delusions, hallucinations, (...)
Rule out Schizoaffective and Mood Disorders
Disturbance not due to drugs, medication, or a general medical condition (e.g. delirium tremens)
Duration of an episode of the disturbance (hallucinations) is one to six months
Criteria for 298.80: Brief Psychotic Disorder
Presence of one (or more) of the following symptoms: hallucinations (...)
Duration between one day and one month
Hallucinations not better accounted for by Schizoaffective Disorder, Mood Disorder With Psychotic Features, or Schizophrenia
Criteria for 298.90: Psychotic Disorder NOS (Not Otherwise Specified):
Psychotic symptomatology (e.g. hallucinations) that does not meet the criteria for any specific Psychotic Disorder. Examples include persistent auditory hallucinations in the absence of any other features.
Where are the additional criteria for that? Wait, there are none!
In summary: you tell a professional about that “friend” you’re seeing and hearing, and depending on the time frame you get either 295.40 Schizophreniform Disorder or 298.80 Brief Psychotic Disorder, or 298.90 Psychotic Disorder NOS (Not Otherwise Specified) in any case. Congratulations!
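The diagnostic flow summarized above can be read as a crude decision rule. Here is a minimal Python sketch of that reading; it is my own simplification for illustration, not clinical logic, and it omits the rule-outs listed in the criteria:

```python
# A deliberately crude sketch of the diagnostic flow summarized above.
# This illustrates the commenter's summary, not actual clinical practice:
# real DSM diagnosis also requires the rule-outs listed in the criteria.

def classify(duration_days: int, hallucinating: bool) -> str:
    if not hallucinating:
        return "no psychotic disorder (on these criteria)"
    if 1 <= duration_days <= 30:       # one day to one month
        return "298.80 Brief Psychotic Disorder"
    if 30 < duration_days <= 180:      # one to six months
        return "295.40 Schizophreniform Disorder"
    # Anything not fitting a specific category falls through to NOS;
    # per the criteria above, NOS has no additional requirements at all.
    return "298.90 Psychotic Disorder NOS"

print(classify(90, True))   # -> 295.40 Schizophreniform Disorder
print(classify(7, True))    # -> 298.80 Brief Psychotic Disorder
```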
Fair enough, if I had an imaginary friend I wouldn’t want to report it to a shrink. I got hung up on technicalities and the point I should have been focusing on is whether entertaining one specific delusion is likely to result in other symptoms of schizophrenia that are more directly harmful.
See my take on that here.
Many people suffering from hearing voices etc. do realize the voices “aren’t real”, but that doesn’t in itself enable them to turn the voices off. If I were confident that you can untrain hallucinations (and thus, strictly speaking, get rid of a Psychotic Disorder NOS just by choosing to) and switch them off with little effort, I would consider tulpas harmless.
Not knowing much of anything about the tulpa community, a priori I would expect a significant fraction of “imaginary friends” to be more of a vivid-imagination phenomenon than actual visual and auditory hallucinations; the hallucination claims may be more of an embellishment for group-identification purposes.
That’s specifically the religion exemption, yes.
Isn’t this a failure mode with a catchy name?
I think implicit in that question was, ‘and how does it differ?’
A friend of mine has a joke in which he describes any arbitrary Magic card (and later, things that weren’t Magic cards) by explaining how it differs from an Ornithopter (a Suq’Ata Lancer is just like an Ornithopter, except it’s red instead of an artifact, it has haste and flanking instead of flying, it costs 2 and a red instead of 0, and it has 2 power instead of 0. Yup, just like an Ornithopter). The humor lay in the anti-compression: the descriptions were technically accurate, but rather harder to follow than they needed to be.
Eradicating the humor, you could alternatively describe a Suq’Ata Lancer as a Gray Ogre with haste and flanking. The class of “cards better than Gray Ogre” is a reference class that many Magic players would be familiar with.
Trying to get a handle on the idea of the tulpa, it’s reasonable to ask where to start before you try comparing it to an Ornithopter.
Why would “which reference class is x most like” be a “failure mode”? Don’t just word-match to the closest post you remember that includes the phrase “reference class”.
When you’re in a dark alley, and someone pulls a gun and approaches you, would it be a “failure mode” to ask yourself what reference class most closely matches the situation, then conclude you’re probably getting mugged?
Saying “uFAI is like Terminator!” and answering “No, it’s like The Matrix!” would be reference class tennis; asking “which reference class is uFAI most like?” wouldn’t be.
I think the term is “reference class tennis”.
Have you read diseased thinking: dissolving questions about disease, by any chance?
No, but skimming it, the content seems commonsensical enough. It doesn’t dissolve the correlation with “generally being harmful”.
It’s not a “fits the criteria of a psychological disease, case closed” kind of thing, but pattern matching to schizophrenia certainly seems to be evidence for harm rather than against it, don’t you agree?
Similarly, if I sent you a document titled “P=NP proof”, atrociously typeset in MS Word, you could use pattern matching to suspect it contains something other than a valid P=NP proof, even without reading the actual contents.
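To put rough numbers on this kind of pattern matching, here is a toy Bayes-rule sketch in Python; every probability in it is invented purely for illustration:

```python
# Toy Bayesian update for the pattern-matching argument above.
# All probabilities are made up for illustration only.

def posterior(prior: float, p_cue_if_harmful: float, p_cue_if_benign: float) -> float:
    """P(harmful | cue) via Bayes' rule."""
    joint_harmful = p_cue_if_harmful * prior
    joint_benign = p_cue_if_benign * (1 - prior)
    return joint_harmful / (joint_harmful + joint_benign)

# Suppose a 50% prior that a practice is harmful, and that the cue
# "involves hallucinated people talking to you" is nine times more
# common among harmful conditions than benign ones (invented numbers).
print(posterior(0.5, 0.9, 0.1))  # -> 0.9
```

The point is only that a cue strongly associated with a harmful reference class can dominate the estimate until tulpa-specific evidence accumulates.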
I agree it’s sensible to be somewhat wary of inducing hallucinations, but you’re talking with a level of confidence in the hypothesis that it will in fact harm you to induce hallucinations in this particular way that I don’t think is merited by what you know about tulpas. Do you have an actual causal model that describes how this harm might come about?
(There often is no need for an actual causal model to strongly believe in an effect; correlation is sufficient. Some of the most commonly used pharmaceutical substances had, or still have, an unknown causal mechanism for their effect. Still, I do have one in this case:)
You are teaching your brain to create false sensory inputs, and to assign agency to those false inputs where none is there.
Once you’ve broken down those barriers and overcome your brain’s inside-outside classifier (training which may be partly innate and partly established in earliest infancy: “if I feel this, then there is something touching my left hand”), there is no reason the “advice” and interaction cannot turn harmful or malicious, no reason the voices cannot become threatening.
I find it plausible that the sort of people who can train themselves to actually see imaginary people (probably a minority even in the tulpa community) already had a predisposition towards schizophrenia, and had the bad fortune to trigger it themselves. Or that individuals with late-onset schizophrenia mislabel themselves and join the tulpa community. As for the harm:
Even if a tulpa is beneficial at first, there is no easy treatment or “reprogramming” to re-establish the mapping of what is “inside”, part of yourself, and what is “outside”, part of the external world. Many schizophrenics know the voices “aren’t real”; that doesn’t help them re-raise the walls. Indeed, there is often a progression with schizophrenics, from hearing one voice, to hearing more voices, to e.g. “others can read my thoughts”.
As a tulpa-ist, you’ve already dissociated part of yourself and assigned it to the environment. Let me reiterate: I am not concerned with you having an “inner Kawoomba” that you model, but with actually seeing and hearing such a person. Will you suddenly find yourself with more than one hallucinated person walking around with you? Maybe someone you start to argue with and can’t turn off?
Slippery slope arguments (even for short slopes) aren’t perfectly convincing, but I see the potential harm weighed against the potential benefit as very one-sided; the benefit is, in my estimation, low, since you can teach yourself to shift your perspective analytically without hacking your sensory input. If tulpas conferred a doubled lifespan, my conclusion would be different …
If you’re familiar with the Sorcerer’s Apprentice:
Wrong I was in calling
Spirits, I avow,
For I find them galling,
Cannot rule them now.
This is a much stronger and better argument than arguing from DSM definitions. “Be cautious about imposing mental states that can affect your decision-making” is a good general rule, and yet tons of people happily drink, take drugs, and meditate. You can say each and all of these things have risks, but people don’t normally say you shouldn’t drink because it makes you act like someone with a lower IQ, or like someone with a motor-control problem in their brain.
Well, that’s why I don’t drink alcohol. (But agreed, people don’t normally say that. And I also agree that Kawoomba seems to be overstating the danger of tulpas.)
This also sounds like an argument against IFS. I don’t think it holds water. Accessing the same data you do, but processing it with a different algorithm, seems valuable. (This is all under the assumption that tulpas work at all.)
The benefits of analytically shifting your point of view, or of using different approaches in different situations, certainly don’t require actually hallucinating people talking to you. (Hint: only the latter is a symptom of various psych disorders.)
“You need to hallucinate voices / people to get the benefit of viewing a situation from different angles” is not an accurate inference from my argument, nor a fair description of IFS, which as far as I know doesn’t include sensory hallucinations.
Source?
I mean, there are, as you say, obvious “right habits” analogs of this that get results (which would seem to invalidate the first quoted sentence), but I don’t see why pushing it further couldn’t generate better results.