There’s only so much brain to go around. The brain, being for the most part a scaled-up australopithecus brain, already has trouble seeing itself as a whole (just look at the “akrasia” posts, where you can see the talkative parts of people’s brains disown the decisions made by the decision-making parts). Why would you expect anything but detrimental effects from deepening the brain’s failure to work as a whole?
Could you expand on this, please? I’m not sure I’m familiar with the failure mode you seem to be pattern-matching to.
The point is that when someone “hears voices” (which do not respond to the will in the way internal monologues do), no demons are involved and no new brain has been added. It is existing brain regions involved in the internal monologue failing to integrate properly with the rest. Less dramatically, when people claim they want to go on a diet but are mysteriously unable to (their actions respond not to what they think is their will, but to what they think is not their will), it’s the regions that make decisions about food intake failing to integrate with the regions that do the talking. (Proper integration results in either the diet or the absence of the belief that one wants to be on a diet.) The bottom line: the brain is not a single CPU of some kind. It is a distributed system whose parts are capable of being in conflict, to the detriment of the well-being of the whole.
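The “distributed system” claim above can be caricatured in code. This is a toy sketch of my own (not a model of real neuroanatomy, and the rules and numbers are invented): two independent modules share one agent, so the stated preference and the acted preference can come apart without anything new being added.

```python
# Toy illustration only: two "modules" of one agent with different
# preferences. Nothing here models real brains; it just shows how a
# system's verbal report and its behavior can disagree when its parts
# don't share one decision process.

def verbal_module():
    """The part that does the talking: reports the stated goal."""
    return "I want to be on a diet"

def food_module(hunger):
    """The part that actually controls intake (hypothetical rule)."""
    return "eat the cake" if hunger > 0.3 else "skip dessert"

hunger = 0.9
print(verbal_module())      # stated volition: "I want to be on a diet"
print(food_module(hunger))  # actual behavior: "eat the cake"
```

With proper integration (a single decision process), either the behavior or the report would shift until the two agreed, which is the “proper integration” condition described above.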
So … you’re worried this might increase akrasia? I guess I can see how they might be in the same category, but I don’t think the same downsides apply. Do they?
The point with akrasia was to illustrate that more than one volition inside one head isn’t even rare to begin with. The actual issue is that, of course, you aren’t creating some demon out of nothing. You are repurposing an existing part of your brain, one also involved in internal monologue and even mental visualization, and making it stop integrating properly with the rest under one volition. There’s literally less of your brain under your volition.
This whole topic is absurd. This tulpa stuff resembles mental illness. Now you want to show off your “rationality,” according to the local rules for showing off rationality, by rejecting the simple-looking argument that it should be avoided the way mental illness is. “Of course” it’s pattern-matching, the “noncentral fallacy,” and the other labels you were taught here to attach to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yes, in some technical sense it is not mental illness. But it most closely resembles one. It is as likely to be worse as it is to be better, its expected badness is equal to that of mental illness, and the standard line of reasoning will approximate utility maximization far better than this highly biased reasoning where, if something is not literally a mental illness, it must be better than one (or worse, depending on which arguments pop into your head more easily). In the good old caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, then eat another mushroom that looked just like the first but was of course a different mushroom (in the sense of not being the exact same physical mushroom body), get awfully sick again, do it again, and die.
Let’s suppose it were self-inflicted involuntary convulsion fits, just to pick an example where you wouldn’t feel so much like demonstrating some sort of open-mindedness. Now, the closest known thing would be real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable through one illness or another.
If it’s self-inflicted, for example, standard treatments might not work.
There’s literally less of your brain under your volition.
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
This tulpa stuff resembles mental illness.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to it with the collection of syllables “men-tal-ill-ness.”
Now you want to show off your “rationality,” according to the local rules for showing off rationality, by rejecting the simple-looking argument that it should be avoided the way mental illness is. “Of course” it’s pattern-matching, the “noncentral fallacy,” and the other labels you were taught here to attach to equally Bayesian reasoning when it arrives at conclusions you don’t like. Here’s the thing: yes, in some technical sense it is not mental illness. But it most closely resembles one. It is as likely to be worse as it is to be better, its expected badness is equal to that of mental illness, and the standard line of reasoning will approximate utility maximization far better than this highly biased reasoning where, if something is not literally a mental illness, it must be better than one (or worse, depending on which arguments pop into your head more easily). In the good old caveman days, people with this reasoning fallacy would eat a mushroom, get awfully sick, then eat another mushroom that looked just like the first but was of course a different mushroom (in the sense of not being the exact same physical mushroom body), get awfully sick again, do it again, and die.
Wow.
EDIT: OK, I should probably respond to that properly. Analogies are only useful when we don’t have better information about something’s effects. Bam, responded.
Let’s suppose it were self-inflicted involuntary convulsion fits, just to pick an example where you wouldn’t feel so much like demonstrating some sort of open-mindedness. Now, the closest known thing would be real convulsion fits, and absent other reliable evidence either way, the expected badness of self-inflicted convulsion fits would clearly be equal.
“Convulsion fits” are, I understand, painful and dangerous. Something like alien hand syndrome seems more analogous, but unfortunately I can’t really think of any benefits it might have, so naturally the expected utility comes out negative.
Also, by the way, whatever mental state you arrive at by creating a tulpa is unlikely to be a mental state not achievable through one illness or another.
Could well be. Illnesses are capable of having beneficial side-effects, just by chance, although obviously it’s easier to break things than improve them with random interference.
If it’s self-inflicted, for example, standard treatments might not work.
If you had looked into the topic, you would know the process is reversible.
If you had looked into the topic, you would know the process is reversible.
Are we sure there even is a process? The Reddit discussions are fascinating, but how credible are they? Likewise Alexandra David-Néel’s account of creating one. All very interesting-if-true, but...
WARNING: POTENTIAL MEMETIC HAZARD
I’ve kinda been avoiding this due to the potential correlation between my magickal experimentation in my teens/twenties and my later-life mental-health difficulties, but I feel like people are wandering all over the place already, and I’d at least like to provide a few guideposts.
Yes, there are processes. Or at least, there are various things that are roughly like processes, although very few of them are formalized (if you want formalization, look to Crowley). Rather than provide yet another anecdotal account, let me lay out some of the observations I made during my own experimentation. My explicit goal when experimenting was to attempt to map various wacky “occult” or “pseudoscientific” theories to a modern understanding of neuroscience, and thus explain away as much of the Woo as possible. My hope was that what was left would provide a reasonable guide to “hacking my wetware”.
When you’re doing occult procedures, what (I think, @p > 0.7) you’re essentially doing is performing code injection attacks on your own brain. Note that while the brain is a neural network rather than a serial von Neumann-type (or Turing-type) machine, many neural networks tend to converge towards emulating finite state machines, which can be modeled as von Neumann-type machines, so it’s not implausible (@p ~= 0.85) that processes analogous to code injection attacks might work.
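For readers unfamiliar with the “finite state machine” part of that claim, here is a minimal FSM of my own for illustration (how often trained recurrent networks actually converge to dynamics like this is an empirical question): a two-state parity checker whose entire “program” is its transition table.

```python
# A two-state finite state machine: tracks the parity of 1-bits seen
# in a stream. The transition table below is the entire "program";
# this is the kind of machine the comment claims recurrent networks
# tend to converge toward emulating.

TRANSITIONS = {
    ("even", 0): "even", ("even", 1): "odd",
    ("odd", 0): "odd",   ("odd", 1): "even",
}

def run_fsm(bits, state="even"):
    """Feed a bit stream through the machine; return the final state."""
    for b in bits:
        state = TRANSITIONS[(state, b)]
    return state

print(run_fsm([1, 0, 1, 1]))  # "odd": an odd number of 1s was seen
```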
The specific areas of the brain that seem to be targeted by the rituals that create a tulpa are the right inferior parietal lobe and the temporoparietal junction, which seem to play a key role in maintaining one’s sense-of-self / sense-of-agency / sense-of-ownership (i.e., the illusion that there is an “I” and that that “I” is what is calling the shots when the mind makes a decision or the body performs an action), as well as the areas of the inferior parietal cortex and postcentral gyrus that participate in so-called “mirror neuron” processes. You’ll note that Crowley, for example, goes to great lengths describing rather brutal initiatory ordeals designed specifically to degrade the practitioner’s sense-of-self; Crowley’s specific method was tabooing the word ‘I’ and slashing his own thumb with a razor whenever he slipped.
NOTE: Tabooing “I” is a VERY POWERFUL technique, and unlocks a slew of potential mindhacks, but (to stretch our software metaphor to the breaking point) you’re basically crashing one of your more important pieces of firewall software so you can do it. ARE YOU SURE THAT’S WHAT YOU WANT TO BE DOING? You literally have no idea how many little things constantly assault the ego / sense of self-worth every minute that you don’t even register because your “I” protects you. A good deal of Crowley’s (or any good initiatory Master’s) training involves preparing you to protect yourself once you take that firewall down—older works will couch that as “warding you against evil spirits” or whatever, but ultimately what we’re talking about is the terrifying and relentless psychological onslaught that is raw, unfiltered reality (or, to be more accurate, “rawer, less-filtered reality”).
3A) ARE YOU SURE THAT IS WHAT YOU WANT TO DO TO YOUR BRAIN?
Once your “I” crashes, you can start your injection attacks. Basically, while the “I” is rebooting, you want to slip stuff into your sensory stream that will disrupt the rebooting process enough to spawn two separate “I” processes. Essentially, you need to confuse your brain into thinking that it needs to spawn a second “I” while the first one is still running, confuse each “I” into not noticing that the other one is actually running on the same hardware, and then load a bunch of bogus metadata into one of the “I”s so that it develops a separate personality and set of motivations.
Luckily, this is easier than it sounds, because your brain is already used to doing exactly this up in the prefrontal cortex; this is the origin of all that BS “right brain” / “left brain” talk that came from those fascinating epilepsy studies where they severed people’s corpora callosa. See, you actually have two separate “awareness” processes running already; it’s just that your corpus callosum normally keeps them sufficiently synchronized, and you only have a single “I” providing a consistent narrative, so you never notice that you’re actually two separate conscious processes cooperating and competing for goal-satisfaction.
Anyway, hopefully this has been informative enough that dedicated psychonauts can use it as a launching point, while obfuscated enough that people won’t be casually frying their brains. This ain’t rocket science yet.
You linked to the local-jargon version of word-tabooing, but what you describe sounds more like the standard everyday version of “tabooing” something. Which was intended?
… huh. I don’t know about hacking the “I”, all I’ve seen suggested is regular meditation and visualization. Still, interesting stuff for occult buffs.
Also, I think I’ve seen accounts of people creating two or three tulpas (tulpae?), with no indication that this was any different from the first; does this square with the left-brain/right-brain bit?
EDIT: I just realized I immediately read a comment with WARNING MEMETIC HAZARD at the top. Hum.
Fair point. OK, the fact that it’s reversible seems about as agreed upon as any facet of this topic, more so than many of them. I’m inclined to believe this isn’t a hoax, due to the sheer number of people claiming to have done it and the (apparent?) lack of failed replications. None of this is accepted science, so there is a certain degree of risk from Side Effects No-one Saw Coming, and hey, maybe it’s magic and your soul will get nommed (although most online proponents are careful to disavow claims that it’s anything but an induced hallucination).
Well, yeah. The primary worry among tulpa creators is that it might get pissed at you and follow you around the house making faces.
They ought to be at least somewhat concerned that there’s less brain left for their own walking around the house.
And what, pray tell, is the salient feature of mental illness that causes us to avoid it? Because I don’t think it’s the fact that we refer to it with the collection of syllables “men-tal-ill-ness.”
You don’t know? It’s the loss in “utility”. When you have an unknown item which, out of the items you know of, most closely resembles a mushroom whose consumption had hugely negative utility, the expected utility of consuming the unknown toxic-mushroom-like item is also negative (unless you’re totally starving and there’s literally nothing else to seek for nourishment). Of course, in today’s environment people rarely face the need to make such inferences themselves: society warns you of all the common dangers, uncommon dangers are by definition uncommon, and language hides the inferential nature of categorization from view.
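The reasoning above is just an expected-utility calculation over a resemblance-based probability. With made-up numbers (mine, purely for illustration):

```python
# Expected utility of consuming an unknown item that closely resembles
# a known toxic one. All numbers are illustrative, not measurements.

def expected_utility(p_toxic, u_toxic, u_safe):
    """E[U] = p * U(toxic) + (1 - p) * U(safe)."""
    return p_toxic * u_toxic + (1 - p_toxic) * u_safe

# Strong resemblance to the poisonous mushroom -> high p_toxic, and the
# downside dwarfs the nutritional upside:
print(expected_utility(p_toxic=0.6, u_toxic=-100.0, u_safe=5.0))  # -58.0
```

The same arithmetic is the commenter’s argument for assigning something that closely resembles mental illness a comparable expected badness, absent better evidence; only when the alternative is catastrophic (starvation, in the mushroom case) does the gamble pencil out.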
If you had looked into the topic, you would know the process is reversible.
The cases I’ve heard of which do not look like people attention-seeking online are associated with severe mental illness. Of course the direction of causation is somewhat murky in any such issue, but the necessity of seeing a doctor doesn’t depend on the direction of causation here.
They ought to be at least somewhat concerned that there’s less brain left for their own walking around the house.
Ah, right. I suppose that would depend on the exact mechanisms involved, yeah.
Are children who have imaginary friends found to have subnormal cognitive development?
You don’t know? It’s the loss in “utility”. When you have an unknown item which, out of the items you know of, most closely resembles a mushroom whose consumption had hugely negative utility, the expected utility of consuming the unknown toxic-mushroom-like item is also negative (unless you’re totally starving and there’s literally nothing else to seek for nourishment).
So please provide evidence that this feature is shared by the thing under discussion, yeah?
The cases I’ve heard of which do not look like people attention-seeking online are associated with severe mental illness.
Source? This doesn’t match my experiences, unless you draw an extremely wide definition of “attention-seeking online” (I assume you meant to imply people who were probably making it up?)