As someone with a tulpa, I figure I should probably share my experiences. Vigil has been around since I was 11 or 12, so I can’t effectively compare my abilities before and after he showed up.
He has dedicated himself to improving our rationality, and has been a substantial help in pointing out fallacies in my thinking. However, we're skeptical that this is anything a more traditional inner monologue wouldn't figure out. The biggest apparent benefit is that being a tulpa gives him a greater degree of mental flexibility than I have, making it easier for him to point out and avoid motivated thinking. Unfortunately, we haven't found a way to test this.
I'm afraid he doesn't know any "tricks" like accessing subconscious thoughts or having super math skills.
While Vigil has been around for over a decade, I only found out about the tulpa community very recently, so I know very little about it. I also don't know anything about creating them intentionally; he just showed up one day.
If you have any questions for me or him, we’re happy to answer.
...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?
If I ever go insane, I hope it’s like this.
Would what’s considered a normal sense of self count as a persistent hallucination?
See “free will”.
This is strikingly similar to Epictetus’ version of Stoic meditation whereby you imagine a sage to be following you around throughout the day and critiquing your thought patterns and motives while encouraging you towards greater virtue.
Related:

I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself "Dijkstra would not have liked this", well, that would be enough immortality for me.

— Edsger W. Dijkstra
That sounds similar, though I'm afraid I've had difficulty finding anything about this while researching Epictetus.
The hallucination doesn't have auditory or visual components, but it does have a sense-of-presence component that varies in strength.
Indeed, this style of insanity might beat sanity.
Tulpas, especially as construed in this subthread, remind me of the daimones in Walter Jon Williams' Aristoi. I've always thought that having / being able to create such mental entities would be super-cool; but I do worry about the detrimental effects on mental health that might come from following the methods described in the tulpa community.
You are obligated by law to phrase those insights in the form “If X is Y, I don’t want to be not-Y.”
From the sound of it, it seems you can make that happen deliberately, and without needing to go insane. No need for hope.
We also have internet self-reports from people who tried it, saying that they are not insane.
One rarely reads self-reports of insanity.
Yes, their attorney usually reports this on their behalf.
If you’re interested in experimenting...
Well, wait. Is there some way of flagging “potentially damaging information that people who do not understand risk-analysis should NOT have access to” on this site? Because I’d rather not start posting ways to hack your wetware without validating whether my audience can recover from the mental equivalent of a SEGFAULT.
In my position, I should experiment with very few things that might be unsafe over the course of my total lifetime. This will probably not be one of them, unless I see very impressive results from elsewhere.
*nod* That's probably the most sensible response.
To help others understand the potential risks, the creation of a ‘tulpa’ appears to involve hacking the way your sense-of-self (what current neuroscience identifies as a function of the right inferior parietal cortex) interacts with your ability to empathize and emulate other people (the so-called mirror neuron / “put yourself in others’ shoes” modules). Failure modes involve symptoms that mimic dissociative identity disorder, social anxiety disorder, and schizophrenia.
I am absolutely fascinated, although given the lack of effect that any sort of meditation, guided visualisation, or community ritual has ever had on me, I doubt I would get anywhere. On the other hand, not being engaged in saving the world and its future, I don’t have quite as much at risk as Eliezer.
A MEMETIC HAZARD warning at the top might be appropriate, as is requested for basilisk discussion.
Would Vigil want to post under his own nick? If so, better register it while it's still available.
That’s a good idea, thanks. Note that my host’s posting has significant input from me, so this account is only likely to be used for disagreements and things addressed specifically to me.
...many people argue for (their) god by pointing out that they are often "feeling his presence", and since many claim to speak with him as well, maybe that's really just one form of tulpa without the insight that it is actually a hallucination.
Surely that's not how most people experience belief, but I never really considered that some of them might actually carry around a vivid invisible (or visible, for all I know) hallucination quite like that. That could explain why some of the really batshit-crazy ones who go on about how god constantly speaks to them manage to be quite so convincing.
From now on my two tulpa buddies will be Eliezer and an artificial intelligence engaged in constant conversation while I make toast, love, and take a shower. Too bad they'll never be smarter than me, though.
Is there a headspace, as well?
I’ve had paracosms since before he was around, and we go to those sometimes. I’ve also got a “peaceful place” that I use to collect myself, but I use it much more than he does.