One technique I use to internalize certain beliefs is to determine their implied actions, then take those actions while noting that they’re the sort of actions I’d take if I “truly” believed. Over time, the belief becomes internalized rather than something I have to recompute every time a related decision comes up. I don’t know precisely why this works, but my theory is that it has to do with what I perceive my identity to be. Often this process exposes other actions I take that are not in line with the belief. I’ve used this for things like “animal suffering is actually bad”, “FAI is actually important”, and “I actually need to practice to write good UIs”.
This is similar to my experience. Perhaps a better way to express my problem is this: what are some safe and effective ways to construct and dismantle an identity? And what sorts of identity are best able to incorporate new information and process it into rational beliefs? One strategy I have used in the past is to simply not claim ownership of any belief, so that I might release it more easily, but then I run into a lack of motivation when I try to act on those beliefs. On the other hand, if I define my identity by a set of beliefs, then any threat to them is extremely painful.
That was my original question: how can I build an identity or cognitive foundation that motivates me but is not painfully threatened by counterevidence?
The Litany of Tarski and the Litany of Gendlin exemplify a pretty good attitude to cultivate. (Check out the posts linked in the Litany of Gendlin wiki article; they’re quite relevant too. After that, the sequence How to Actually Change Your Mind contains still more helpful analysis and advice.)
This can be one of the toughest hurdles for aspiring rationalists. I want to emphasize that it’s OK and normal to have trouble with this, that you don’t have to get everything right on the first try (and to watch out if you think you do), and that eventually the world will start making sense again and you’ll see it was well worth the struggle.