So there seems to be this conceptual cluster of rationality techniques that revolve around facing the truth, even when it’s hard to face. This seems especially useful for those icky situations where your beliefs have some sort of incentive to not correspond to reality.
Examples:
You don’t want to clean out your fridge because if you had to look in there, then part of you feels like it would make the rotting food at the back more ‘real’. (But in reality, your awareness of the food is independent of its existence, and if you don’t clean it out, it’ll only get worse.)
You don’t want to get your homework done because it’s boring/painful to think about, and if you don’t do it, then you don’t have to think about it, which basically means it’s not really there. (But in reality, this only pushes it closer to the deadline.)
You plan to finish your project in 30 minutes even though it took you 1 hr last time, because part of you thinks that if you write down ‘1 hr’, it’ll really take you that long. But you really need it to be done in 30 minutes, so you write that down instead. (But in reality, you need to decouple your estimates from wishes to get well-calibrated. Your prediction is largely independent of your performance.)
And on and on. These sorts of problems often turn into ugh fields: they feel painful to think about and become sources of aversion.
To debug these sorts of problems, there are several techniques that are (in my opinion) conceptual variants of harnessing epistemic rationality. These techniques often focus on getting to the root of the aversion and on calibrating your gut-level sense with the idea that your belief about a matter doesn’t actually control reality.
Mundanification is just another one of these variants that’s about being able to peek into those dark “no, I must never look in here!” corners of your mind and trying to actually state the worst-case scenario (which is often black-boxed as a Terrible Thing that is Never Opened).
How does it work specifically? I can’t see the technique posted anywhere.
During the workshop, it wasn’t well fleshed out (it was a short “flash class”), so I’m afraid I don’t have too many details.
Here are some pieces of the thing, though, and hopefully they point at the general idea. The class of techniques is about:
1) Being able to notice when you feel aversion, fear, or pain with regard to something in your head.
2) Feeling okay with looking into these areas, unpacking them, and asking yourself, “What is it, exactly, about this situation that’s causing me distress?”
3) Being able to explicate worst-case scenarios: being okay with answering, in some detail, the question, “What’s the worst thing that can really happen?”
Many rationalists have an icky situation where their beliefs have a particular incentive not to correspond to reality: when they consider such situations, they would prefer not to consider how much truth there is in the claim that they are better off avoiding such topics. For example, in each of the above examples, you imply that the ugh field is based entirely on falsehood. In reality, however, there is a good deal of truth in it in each case:
“But in reality, your awareness of the food is independent of its existence.” The badness of the food for you does partly depend on your awareness of it. There is plenty of food rotting in dumps all over the world, and this does not affect any of us. So the rotten food will indeed be worse for you in some ways if you clean the fridge.
“But in reality, this only pushes it closer to the deadline.” Again, you find it boring and painful to work on your homework. If you push it very close to the deadline, but then work on it because you have to, you will minimize the time spent on it, thus minimizing your pain.
“Your prediction is largely independent of your performance.” This is frequently just false; if you plan on 1 hour, you are likely to take 2 hours, while if you plan on 30 minutes again, you are likely to take 1 hour again.
I wonder if my examples may have just been bad. Do you agree with my general point that flinch-y topics are hard to debug and that Litany of Gendlin-style things are useful for doing so?
EX:
In the food example, if you don’t know about the rotting food, it’ll only become more unpleasant to take out later on.
The homework example may not be as good, admittedly. But note that if you do the homework early, you save future-You the additional anguish of thinking about how it’s still undone.
For the planning thing, I think I disagree with you. The planning literature has some minor studies showing that your time estimate does slightly affect your performance (hence my use of “largely”), but I think there are far more severe consequences when your predictions are miscalibrated (e.g. making promises you can’t keep, getting overloaded, etc.).
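To make the calibration point concrete, here’s a rough sketch (not something from the flash class; the numbers and names are made up) of one way to decouple estimates from wishes: track your past predicted vs. actual times and scale new gut-level estimates by the resulting fudge factor.

```python
# Hypothetical (predicted_minutes, actual_minutes) pairs from past tasks.
history = [(30, 60), (45, 80), (20, 35), (60, 95)]

# Average ratio of actual to predicted time -- your personal "fudge factor".
fudge_factor = sum(actual / predicted for predicted, actual in history) / len(history)

def calibrated_estimate(raw_estimate_minutes):
    """Scale a gut-level estimate by the historical fudge factor."""
    return raw_estimate_minutes * fudge_factor

print(f"Fudge factor: {fudge_factor:.2f}")  # ~1.78 with the numbers above
print(f"A '30 minute' project realistically needs ~{calibrated_estimate(30):.0f} minutes")
```

The point is just that the estimate comes from your track record, not from how badly you want the 30-minute number to be true.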
My general point is not that, all things considered, it is better in those particular cases to flinch away. I am saying that flinching has both costs and benefits, not only costs, and consequently there may be particular cases when you are better off flinching away.
Sure!