It’s interesting to note that it probably doesn’t matter whether my analysis of the sources of the conflict is 100% accurate.
Have you pushed the limits of how flimsy they can be? Can you tell yourself in a serious mental tone of voice “My horoscope said I have to pick X” and have it go away?
Can you do the full analysis and have a confident answer without getting it to go away?
My quick mental simulations say “yes” to both and that it’s not so much ‘having an explanation’ as it is deliberately flipping the mental “dismiss alarm” button, which you do only when “you” are comfortable enough to.
Can you tell yourself in a serious mental tone of voice “My horoscope said I have to pick X” and have it go away?
Speaking personally, if I can create some part of myself that “believes” that, then yes, absolutely. I actually find a great deal of benefit from learning “magical” / New Age techniques for exactly that reason.
I’ve routinely done things because “my future self” was whispering to me telepathically and telling me that it would all work out, or because my “psychic senses” said it would work.
The rest of me thinks this is crazy, but it works, so I let that little part of me continue with its very interesting belief system :)
Speaking personally, if I can create some part of myself that “believes” that, then yes, absolutely. I actually find a great deal of benefit from learning “magical” / New Age techniques for exactly that reason.
Is this something you can explain? I’m looking into this kind of stuff now and trying to find out the basics so that I can put together a 1) maximally effective and 2) epistemically safe method.
It’s hard to find people into this kind of stuff that even understand the map-territory distinction, so input from other LWers is valued!
I did Tai Chi lessons for a while, and enjoyed the “charging up with chi/The Force” feeling it would give me, from picturing flows of energy through the body and such. Of course the “real” causes of those positive feelings are extra blood oxygenation, meditation, clearing extraneous thoughts, etc.
I was OK with this disconnect between the map and the territory, because there was a linkage between them: the deep breathing, mental focusing, and let’s not forget the placebo effect.
I suppose this is not too different in principle from the “mind hacks” bandied about around here.
I’m pretty sure I could explain it, given time, a few false starts, and a patient audience. I’ve been finding, more and more, that the English language and US culture suck as a foundation for trying to explain the processes in my head :)
With that said, here goes Attempt #1 :)
Feel around in your head for a few statements, and compare them. Some of them will feel “factual” like “France exists.” Others will instead be assertions that you support—“killing is wrong”, for example. Finally, you’ll have assertions you don’t support—“God exists in Heaven, and will judge us when we die.”
The first category, “factual” matters, should have a distinctly different feel from the other two “belief” categories. The beliefs you agree with should also have a distinctly different feel from the ones you disagree with. I often find that “beliefs I agree with” feel a lot like “factual” matters, whereas “beliefs I disagree with” have a very distinct feeling.
You’ll probably run into edge cases, or things that don’t fit any of these categories; those are still interesting thoughts, but you probably want to ignore them and focus on these simple, vivid categories. If some other set of groupings has a more distinct “feel” to it, or is easier to separate out, feel free to use those instead. The point is simply to develop a sense of what the ideas in your head feel like, because we tend not to think about that at all.
Next, you need to help yourself hold two perspectives at once: I think Alicorn’s City of Lights from her Luminosity sequence is probably a useful framework here. Divide yourself into two selves, one who believes something, and one who doesn’t, something like “I should study abroad in Australia” from the shiny story examples :)
Compare how those two parts of you process this, and see how the belief feels different to each of them. If you can do this at all, then you’ve demonstrated to yourself that you CAN hold two mutually incompatible stances at the same time.
So, now you know what they feel like, and you know that you can hold two at the same time. I find that’s an important framework, because now you can start believing absurd things, with the reassurance that a large part of you will still be perfectly sane, sitting on the sidelines and muttering about how much of a nutter you’re being. (Being comfortable with the part of yourself which believes impossible things, and accepting that it’ll be called a nutter is also helpful :))
The next step is to learn how to play around with the categorization you do. Try to imagine what it feels like when “France exists” is a belief instead of a fact. Remind yourself that you’ve never been to France. Remind yourself that millions of people insist they’ve witnessed God, and this is probably more people than have witnessed France. It doesn’t matter that these points are absurd and irrational; they’re just a useful framework for trying to imagine that France is all a big hoax, just like God is.
(If you believe in God, or don’t believe in France, feel free to substitute appropriately :))
If all three of those steps went well, you should now be able to create a self which believes that France does not exist. Once you’ve done this, believing in your horoscope should be a reasonably trivial exercise.
Alright, that’s Attempt #1. Let me know what was unclear, what didn’t work, and hopefully eventually we’ll have a working method! =)
I wonder if you could use some kind of “gödelian bomb” referencing decision theory that’ll flag it as being currently handled and then crash, so that the flag stays up without your having to actually handle it. This’ll probably be dangerous in different ways, possibly much more so, but epistemically dangerous wouldn’t be one of them, I think.
It seems fairly likely that the crash itself would be more unpleasant than what you’re trying to cure with it, though.
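Roughly, the shape I have in mind looks like this toy Python sketch (every name here is invented for illustration; the brain obviously isn’t running Python): the bomb claims the notification as “being handled” and then crashes before ever clearing the claim, so the notification never re-fires.

```python
# Toy model of the "flag it as handled, then crash" trick.
# All names are illustrative; this is the metaphor in code, nothing more.

class Notification:
    def __init__(self, message):
        self.message = message
        self.being_handled = False  # the flag the trick exploits

    def should_fire(self):
        # The alarm only keeps firing while nothing claims to handle it.
        return not self.being_handled

def godelian_bomb(notification):
    notification.being_handled = True      # raise the "currently handled" flag...
    raise RuntimeError("handler crashed")  # ...then crash before ever clearing it

alarm = Notification("conflict detected: horoscope says pick X")
try:
    godelian_bomb(alarm)
except RuntimeError:
    pass  # the crash is swallowed, but the flag stays up

print(alarm.should_fire())  # False: silenced without ever being addressed
```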
I do not understand this. It seems like if I did it would be interesting. Could you explain further? Perhaps just restating that slowly/carefully/formally might help.
Say you want the X notification to go away.
The notification will go away when you have “sufficiently addressed” it.
You believe that decision theory D states that, if you want the notification to go away, it’d probably be best if it went away.
This is not sufficient, since it’s too direct and too similar to the “go away because I want you to” move that evolution has specifically guarded against.
On the other hand, if you could prove that, for this specific instance, the decision theory indeed says the notification should go away, it probably would go away, since you have high confidence in the decision theory.
Proving something like that would be hard and would require a lot of creative ideas for every single thing, so it’s not practical.
What might instead be possible is to come up with some sort of algorithm that is in actuality isomorphic to the one that got caught in the filter, but long, indirect, and gradual enough that it hacks its way past it.
The most likely structure for this is some circular justification that ALMOST says that everything it itself proves is true, but takes in just enough evidence each iteration to keep it from falling into the abyss of inconsistency that such a self-validating system has been proved to be.
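(The “has been proved” here is presumably Löb’s theorem, which is my assumption about what’s being gestured at: for any theory $T$ extending basic arithmetic, with provability predicate $\Box$,

$$T \vdash (\Box P \rightarrow P) \;\Longrightarrow\; T \vdash P,$$

so a system that endorses “everything I prove is true” as a general schema thereby proves every sentence $P$, which is to say it is inconsistent. The circular justification has to stop just short of asserting that schema.)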
So it actually infinitely narrowly avoids being a Gödelian bomb, but it looks a lot like it.
It may not work if you are aware that you are tricking yourself like that, but then again, it also may work.
That is certainly a very interesting idea.
You’re not fooling yourself, you’re fooling a small malfunctioning part of your brain. The technique only works if you honestly believe that having the technique on average really IS a good idea and do so in a specific, rigid way.
deliberately flipping the mental “dismiss alarm” button
This actually sparks another thought: When I was a kid, I got very annoyed with the way my body let me know things. I understood that sometimes it would get hungry, or need to use the bathroom, but sometimes I had to wait before I could viably handle these needs. I thus started viewing these states as “mental alarms”, and eventually managed to “install a dismiss alarm button.” I now refer to it as my internal messaging system: My body sends me a message, and I get a little “unread message” indicator. I’ll suffer until I read the message, but then I have no obligation to actually act on it. If I ignore the message, I usually get another one in an hour or so, since my body still has this need.
At first, a dismissed alarm would last ~5 minutes. Now I can actually dismiss my sense of hunger for a couple days if food just doesn’t come up. Dismissing an alarm when I have easy access to take care of something (for instance, trying to ignore hunger when someone offers me a nice meal) is much, much harder.
It does run into the failure state that I sometimes forget to do much of anything for hours, because I’m focused on my work and just automatically dismiss all of my alarms. This occasionally results in a couple hours of unproductive work until I pause, evaluate the reason I’m having trouble, and realize I haven’t had anything to eat all day :)
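If it helps, the metaphor translates pretty directly into code; here’s a toy Python sketch (entirely invented for illustration, with no claim about the underlying biology): reading a message stops the suffering without acting on it, and unmet needs get re-sent later.

```python
# Toy model of the "internal messaging system": dismissing an alarm
# removes the suffering but not the need, so the body retries later.

class InternalInbox:
    def __init__(self):
        self.unread = []     # unread messages cause suffering
        self.dismissed = []  # read ("alarm dismissed") but not acted on

    def body_sends(self, need):
        self.unread.append(need)

    def dismiss(self, need):
        # Reading the message stops the suffering; it carries
        # no obligation to actually act on the need.
        self.unread.remove(need)
        self.dismissed.append(need)

    def an_hour_passes(self):
        # The body still has the need, so it re-sends the message.
        self.unread.extend(self.dismissed)
        self.dismissed.clear()

inbox = InternalInbox()
inbox.body_sends("hunger")
print(inbox.unread)      # ['hunger'] -- suffering until read
inbox.dismiss("hunger")
print(inbox.unread)      # [] -- alarm dismissed, nothing eaten
inbox.an_hour_passes()
print(inbox.unread)      # ['hunger'] -- the body retries
```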
I managed to use a visualization of a “car sickness switch” that helped tremendously, though the switch did keep turning itself on every couple minutes.
It does run into the failure state that I sometimes forget to do much of anything for hours, because I’m focused on my work and just automatically dismiss all of my alarms.
I need to work on being more explicit with this: that happens to me without the interrupt flag ever being set.
Yesterday, at the end of our LW meetup, one of the attendees was talking about how he hadn’t eaten because he got sucked into the conversation, and we were giving him shit about it, since we were meeting in the middle of the food court. I even asked myself whether I was hungry... “nah, not really.” As soon as I got home, I got the message “you have a serious caloric deficit; eat a 2000 kcal meal.”
Can you use this for non-physical signals, such as purely emotional pain?
The “dismiss alarm” button doesn’t work as well for pain of either sort—I can temporarily suppress it, but it will keep coming back until I do something to actually resolve it. For physical pain, that’s generally painkillers; for emotional pain, some combination of “vegging out” on mindless activities (TV, WoW, etc.).
For mild pain, it’s pretty easy to just hit “dismiss alarm” and ignore it. For moderate pain, I usually have to convert it into something else. This is easier to do with physical pain, where I can tweak the sensation directly; I can induce specific emotional states, but it’s harder and less stable. For intense pain, I’ll usually be unable to function even if I’m doing this, and it will sometimes hit a point where I can’t redirect it.
Long-term, persistent pain is also much more exhausting to deal with; this is probably part of why emotional pain is more of an issue for me—it tends to be a lot less fleeting.
Can you tell yourself in a serious mental tone of voice “My horoscope said I have to pick X” and have it go away?
I haven’t tested this, but I’m guessing yes.
Can you do the full analysis and have a confident answer without getting it to go away?
Yes. That’s when I usually switch to non-content-focused methods.
My quick mental simulations say “yes” to both and that it’s not so much ‘having an explanation’ as it is deliberately flipping the mental “dismiss alarm” button, which you do only when “you” are comfortable enough to.
That does sound pretty plausible.