No, but skimming it the content seems common-sensical enough. It doesn’t dissolve the correlation with “generally being harmful”.
It’s not a “fits the criteria of a psychological disease, case closed” kind of thing, but pattern matching to schizophrenia certainly seems to be evidence for potential harm rather than against it, don’t you agree?
Similarly, if I sent you a document titled “P=NP proof”, atrociously typeset in MS Word, you could use pattern matching to suspect it contains something other than a valid P=NP proof, even without reading the contents of that specific proof.
I agree it’s sensible to be somewhat wary of inducing hallucinations, but you’re talking with a level of confidence in the hypothesis that it will in fact harm you to induce hallucinations in this particular way that I don’t think is merited by what you know about tulpas. Do you have an actual causal model that describes how this harm might come about?
(There is often no need for an actual causal model to strongly believe in an effect; correlation is sufficient. Some of the most commonly used pharmaceutical substances had, or still have, an unknown causal mechanism for their effect. Still, I do have one in this case:)
You are teaching your brain to create false sensory inputs, and to assign agency to those false inputs where none is there.
Once you’ve broken down those barriers and overcome your brain’s inside-outside classifier (training which may be partly innate and partly established in your earliest infancy: “If I feel this, then there is something touching my left hand”), there is no reason the “advice” / interaction cannot turn harmful or malicious, no reason the voices cannot become threatening.
I find it plausible that the sort of people who can train themselves to actually see imaginary people (probably a minority even in the tulpa community) already had a predisposition towards schizophrenia, and have the bad fortune to trigger it themselves. Or that late-onset schizophrenia individuals mislabel themselves and enter the tulpa community. As for what the harm is:
Even if beneficial at first, there is no easy treatment or “reprogramming” to reestablish the mapping of what’s “inside” (part of yourself) and what’s “outside” (part of an external world). Many schizophrenics know the voices “aren’t real”; that doesn’t help them re-raise the walls. Indeed, there is often a progression in schizophrenia: from hearing one voice, to hearing more voices, to e.g. believing “others can read my thoughts”.
As a tulpa-ist, you’ve already dissociated part of yourself and assigned it to the environment. Let me reiterate: I am not concerned with you having an “inner Kawoomba” you model, but with you actually seeing / hearing such a person. Will you suddenly find yourself with more than one hallucinated person walking around with you? Maybe someone you start to argue with? Someone you can’t turn off?
Slippery slope arguments (even for short slopes) aren’t perfectly convincing, but I see the potential harm, weighed against the potential benefit (low, in my estimation: you can teach yourself to analytically shift your perspective without hacking your sensory input), as very one-sided. If tulpas conferred a doubled life-span, my conclusion would be different …
If you’re familiar with the Sorcerer’s Apprentice:
Wrong I was in calling
Spirits, I avow,
For I find them galling,
Cannot rule them now.
This is a much stronger argument than arguing from DSM definitions. “Be cautious about inducing mental states that can affect your decision-making” is a good general rule, and yet tons of people happily drink, take drugs, and meditate. You can say each of these has risks, but people don’t normally say you shouldn’t drink because it makes you act like someone with a lower IQ, or like someone with a motor-control problem in their brain.
Well, that’s why I don’t drink alcohol. (But agreed, people don’t normally say that. And I also agree that Kawoomba seems to be overstating the danger of tulpas.)