Well, what you want to do (just about by definition) is be rational in the instrumental sense.
I place significant terminal utility on believing true things, and believe that epistemic rationality is very important for instrumental rationality. Furthermore, choosing not to self-deceive is the right decision in general, because you can’t even know what you’re missing, and there is reason to suspect that it is a lot.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
Maybe you just meant “I’m not interested in that kind of argument because it is so clearly wrong that it’s not worth my time”, but it seems to come across as “I don’t care even if it’s true”, and that’s probably where the downvote came from.
For all real world issues, I expect to side with you in that we should just get the truth, but in the Least Convenient World (can we just abbreviate this to LCW?) where getting FAI right was dependent on you believing for a moment that a box that contained a blue ball contained a red one....
This is a confusion based on multiple meanings of “belief”, along the lines of the “does the tree make a sound?” debate. Depending on your definition of belief, the above is either trivial or impossible.
For instrumental purposes, it is possible to act and think as if the box contained a red ball, simply by refraining from thinking anything else. The fact that you were paying attention to it being blue before, or that you will remember it’s really blue afterward, has nothing to do with your “believing” in that moment. “Believe” is a verb—something that you DO, not something that you have.
In common parlance, we think that belief is unified and static—which is why some people here continually make the error of assuming that beliefs have some sort of global update facility. Even if you ignore the separation of propositional and procedural memory, it’s still a mistake to think that one belief relates to another, outside of an active moment of conscious comparison.
In other words, there is a difference between the act of believing something in a particular moment, and what we tend to automatically believe without thinking about it. When we say someone “believes” they’re not good at math, we are simply saying that this thought occurs to them in certain contexts, and they do not question it.
Notice that these two parts are separate: there is a thought that occurs, and then it is believed… i.e., passively accepted, without dispute.
Thus, there is really no such thing as “belief”—only priming-by-memory. The person remembers their previous assessment of not being good at math, and their behavior is then primed. This is functionally identical to unconscious priming, in that it’s the absence of conscious dispute that makes it work. CBT trains people to dispute the thoughts when they come up, and I mostly teach people to reconsolidate the memories behind a particular thought so that it stops coming up in the first place.