Assuming you don’t spend all your time in some rationalist enclave, it’s still useful to understand fake beliefs and other biases. When communicating with others, it helps to notice when they try to convince you with a fake belief, or when they are trying to convince someone else with one.
Also, I admit I recently used a fake belief myself when trying to explain how our corporate organization works to a junior colleague: “It’s just complex…” In my defense, we did briefly brainstorm how to find out how it actually works.
Not that I’m disagreeing (in practice I have mixed feelings about this), but can you elaborate on why you think it’s useful? For the purpose of understanding what they are saying? For the purpose of trying to convince them? For the latter, I think it is usually pretty futile to try to change someone’s mind when they hold a fake belief.
I’m not seeing how that would be a fake belief. If you told me that an organization is complex, I would make different predictions than if you told me it was not complex. It seems like the issue is more that “it’s just complex” is an incomplete explanation, not a fake/improper one.
Is it not useful to avoid accepting fake beliefs in the first place? To intercept them before they can latch on to your mind, or the mind of another? In this sense, you should practice spotting fake beliefs until it becomes reflexive.
If you are at risk of having fake beliefs latch on to you, then I agree that it is useful to learn about them in order to prevent that. However, I question whether that risk is common: I can’t think of practical examples of fake beliefs happening to non-rationalists, let alone to rationalists (the example you gave doesn’t seem like a fake belief). The examples of fake beliefs used in Map and Territory seem contrived.
In a way, it reminds me of decision theory. My understanding is that expected utility maximization works really well in real life, and stuff like Timeless Decision Theory is only needed for contrived examples like Newcomb’s problem.
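For concreteness, here is a minimal sketch (my own illustration, not from the thread) using the standard Newcomb payoffs: the opaque box holds $1,000,000 iff the predictor foresaw one-boxing, the transparent box always holds $1,000, and the 0.99 predictor accuracy is an assumed illustrative value. It shows how a plain expected-utility calculation gives opposite answers depending on whether the box contents are conditioned on your choice, which is the gap theories like Timeless Decision Theory are meant to close.

```python
# Illustrative sketch of Newcomb's problem (assumed standard payoffs).
# Opaque box: $1,000,000 iff the predictor foresaw one-boxing.
# Transparent box: always $1,000.

ACCURACY = 0.99  # assumed predictor accuracy; any value above ~0.5005 gives the same ranking

def evidential_ev(one_box: bool) -> float:
    """Condition the opaque box's contents on the action, since the
    prediction correlates with what you actually choose."""
    p_full = ACCURACY if one_box else 1 - ACCURACY
    opaque = p_full * 1_000_000
    return opaque if one_box else opaque + 1_000

def causal_ev(one_box: bool, p_full: float) -> float:
    """Treat the contents as already fixed: the action cannot change p_full,
    so two-boxing always adds the guaranteed $1,000."""
    opaque = p_full * 1_000_000
    return opaque if one_box else opaque + 1_000

print("evidential:", evidential_ev(True), "vs", evidential_ev(False))
# evidential: 990000.0 vs 11000.0   -> one-boxing wins
print("causal:    ", causal_ev(True, 0.5), "vs", causal_ev(False, 0.5))
# causal:     500000.0 vs 501000.0  -> two-boxing wins for any fixed p_full
```

In ordinary decisions the two calculations agree, which matches the point above: the divergence only appears in contrived predictor setups like Newcomb’s problem, which is where alternatives like Timeless Decision Theory earn their keep.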