...just to be clear on this, you have a persistent hallucination who follows you around and offers you rationality advice and points out fallacies in your thinking?
If I ever go insane, I hope it’s like this.
Would what’s considered a normal sense of self count as a persistent hallucination?
See “free will”.
This is strikingly similar to Epictetus’ version of Stoic meditation whereby you imagine a sage to be following you around throughout the day and critiquing your thought patterns and motives while encouraging you towards greater virtue.
Related:
I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself “Dijkstra would not have liked this”, well, that would be enough immortality for me.
— Edsger W. Dijkstra
That sounds similar, though I’m afraid I’ve had difficulty finding anything about this while researching Epictetus.
The hallucination doesn’t have auditory or visual components, but does have a sense of presence component that varies in strength.
Indeed, this style of insanity might beat sanity.
Tulpas, especially as construed in this subthread, remind me of daimones in Walter Jon Williams’ Aristoi. I’ve always thought that having / being able to create such mental entities would be super-cool; but I do worry about detrimental effects on mental health of following the methods described in the tulpa community.
You are obligated by law to phrase those insights in the form “If X is Y, I don’t want to be not-Y.”
From the sound of it, it’d seem you can make that happen deliberately, and without the need to go insane. No need for hope.
We also have internet self-reports from people who tried it that they are not insane.
One rarely reads self-reports of insanity.
Yes, their attorney usually reports this on their behalf.
If you’re interested in experimenting...
Well, wait. Is there some way of flagging “potentially damaging information that people who do not understand risk-analysis should NOT have access to” on this site? Because I’d rather not start posting ways to hack your wetware without validating whether my audience can recover from the mental equivalent of a SEGFAULT.
In my position, I should experiment with very few things that might be unsafe over the course of my total lifetime. This will probably not be one of them, unless I see very impressive results from elsewhere.
nod that’s probably the most sensible response.
To help others understand the potential risks, the creation of a ‘tulpa’ appears to involve hacking the way your sense-of-self (what current neuroscience identifies as a function of the right inferior parietal cortex) interacts with your ability to empathize and emulate other people (the so-called mirror neuron / “put yourself in others’ shoes” modules). Failure modes involve symptoms that mimic dissociative identity disorder, social anxiety disorder, and schizophrenia.
I am absolutely fascinated, although given the lack of effect that any sort of meditation, guided visualisation, or community ritual has ever had on me, I doubt I would get anywhere. On the other hand, not being engaged in saving the world and its future, I don’t have quite as much at risk as Eliezer.
A MEMETIC HAZARD warning at the top might be appropriate, as is requested for basilisk discussion.