Speaking personally, if I can create some part of myself that “believes” that, then yes, absolutely. I actually find a great deal of benefit from learning “magical” / New Age techniques for exactly that reason.
Is this something you can explain? I’m looking into this kind of stuff now and trying to find out the basics so that I can put together a 1) maximally effective and 2) epistemically safe method.
It’s hard to find people into this kind of stuff that even understand the map-territory distinction, so input from other LWers is valued!
I did Tai Chi lessons for a while, and enjoyed the “charging up with chi/The Force” feeling it would give me, from picturing flows of energy through the body and such. Of course the “real” causes of those positive feelings are extra blood oxygenation, meditation, clearing extraneous thoughts, etc.
I was OK with this disconnect between the map and the territory, because there was a linkage between them: the deep breathing, mental focusing, and let’s not forget the placebo effect.
I suppose this is not too different in principle to the “mind hacks” bandied about around here.
I’m pretty sure I could explain it, given time, a few false starts, and a patient audience. I’ve been finding more and more that the English language and US culture suck as a foundation for trying to explain the processes in my head :)
With that said, here goes Attempt #1 :)
Feel around in your head for a few statements, and compare them. Some of them will feel “factual” like “France exists.” Others will instead be assertions that you support—“killing is wrong”, for example. Finally, you’ll have assertions you don’t support—“God exists in Heaven, and will judge us when we die.”
The first category, “factual” matters, should have a distinctly different feel from the other two “belief” categories. The beliefs you agree with should also have a distinctly different feel from the ones you disagree with. I often find that “beliefs I agree with” feel a lot like “factual” matters, whereas “beliefs I disagree with” have a very distinct feeling.
You’ll probably run into edge cases, or things that don’t fit any of these categories; those are still interesting thoughts, but you probably want to ignore them and focus on these simple, vivid categories. If some other set of groupings has a more distinct “feel” to it, or is easier to separate out, feel free to use that instead. The point is simply to develop a sense of what the ideas in your head feel like, because we tend not to think about that at all.
Next, you need to help yourself hold two perspectives at once: I think Alicorn’s City of Lights from her Luminosity sequence is probably a useful framework here. Divide yourself into two selves, one who believes something and one who doesn’t; something like “I should study abroad in Australia” from the shiny story examples :)
Compare how those two parts of you process this, and see how the belief feels different to each of them. If you can do this at all, then you’ve demonstrated to yourself that you CAN hold two mutually incompatible stances at the same time.
So, now you know what they feel like, and you know that you can hold two at the same time. I find that’s an important framework, because now you can start believing absurd things, with the reassurance that a large part of you will still be perfectly sane, sitting on the sidelines and muttering about how much of a nutter you’re being. (Being comfortable with the part of yourself which believes impossible things, and accepting that it’ll be called a nutter is also helpful :))
The next step is to learn how to play around with the categorization you do. Try to imagine what it feels like when “France exists” is a belief instead of a fact. Remind yourself that you’ve never been to France. Remind yourself that millions of people insist they’ve witnessed God, and this is probably more people than have witnessed France. It doesn’t matter if these points are absurd and irrational; they’re just a useful framework for trying to imagine that France is all a big hoax, just like God is.
(If you believe in God, or don’t believe in France, feel free to substitute appropriately :))
If all three of those steps went well, you should now be able to create a self which believes that France does not exist. Once you’ve done this, believing in your horoscope should be a reasonably trivial exercise.
Alright, that’s Attempt #1. Let me know what was unclear, what didn’t work, and hopefully eventually we’ll have a working method! =)
I wonder if you could use some kind of “Gödelian bomb” referencing decision theory that’ll flag the issue as being currently handled and then crash, so that the flag stays up without the issue having to actually be handled. This’ll probably be dangerous in different ways, possibly much more so, but epistemic danger wouldn’t be one of them, I think.
It seems fairly likely that the crash itself would be more unpleasant than what you’re trying to cure with it, though.
I do not understand this. It seems like it would be interesting if I did. Could you explain further? Perhaps just restating it slowly/carefully/formally might help.
Say you want the X notification to go away.
The notification will go away when you have “sufficiently addressed” it.
You believe that the decision theory D states that if you want the notification to go away it’d probably be best if it went away.
This is not sufficient, since it’s too direct and too similar to the “go away because I want you to” move that evolution has specifically guarded against.
On the other hand, if you could prove that for this specific instance the decision theory indeed says the notification should go away, it probably would, since you have high confidence in the decision theory.
Proving something like that would be hard and require a lot of creative ideas for every single instance, so it’s not practical.
What might instead be possible is to come up with some sort of algorithm that is in actuality isomorphic to the one that got caught in the filter, but long, indirect, and gradual enough that it hacks its way past it.
The most likely structure for this is some circular justification that ALMOST says that everything it itself proves is true, but takes in just enough outside evidence each iteration to keep it from falling into the abyss of inconsistency that a fully self-validating system has been proved to be.
So it actually infinitely narrowly avoids being a Gödelian bomb, but it looks a lot like it.
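If a loose programming analogy helps (this is purely a toy sketch in Python; the filter, the “evidence”, and the justification chain are all invented for illustration, and none of it is a claim about actual cognition): the direct request gets rejected outright, while a chain of small steps that adds up to the same request slips past.

```python
# Toy analogy only: a made-up "filter" that rejects the direct dismissal
# request but lets an indirect, gradually built chain of justifications
# through, even though the chain amounts to the same request.

DIRECT_FORM = "because I want it gone"


def filter_accepts(justification):
    # The hypothetical evolved filter: it blocks the bare direct form outright.
    return DIRECT_FORM not in justification


def build_indirect_chain(evidence):
    # Each step leans mostly on the previous step plus one small piece of
    # outside evidence, so no single step is the bare "I want it gone" claim.
    chain = ["decision theory D generally says unhelpful notifications should be dismissed"]
    for item in evidence:
        chain.append("given the previous step, and given that " + item +
                     ", D recommends dismissing this notification")
    return chain


evidence = [
    "there is no action I can actually take about it",
    "dwelling on it has changed nothing for a week",
    "D has served me well on similar judgment calls",
]

print(filter_accepts("dismiss it " + DIRECT_FORM))    # False: the direct request is blocked
chain = build_indirect_chain(evidence)
print(all(filter_accepts(step) for step in chain))    # True: the stretched-out version passes
```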
It may not work if you are aware that you are tricking yourself like that, but then again, it also may work.
That is certainly a very interesting idea.
You’re not fooling yourself; you’re fooling a small malfunctioning part of your brain. The technique only works if you honestly believe that having the technique on average really IS a good idea, and do so in a specific, rigid way.