Let me try to apply this approach to my views on economic progress.
To do that, I would look at the evidence in favour of economic progress being a moral imperative (e.g. improvements in wellbeing) and against it (e.g. the development of powerful military technologies), and then make a level-headed assessment that's proportional to the evidence.
It takes a lot of effort to keep my beliefs proportional to the evidence, but no one said rationality is easy.
I don't see it. The point of the technique is to defer your own judgement on the arguments/claims/facts that live inside a theory, indefinitely, giving them slack to grow according to the theory's own perspective (for months and years). Instead of judging individual arguments in their own right, according to your own worldview, you judge them within the theory, from the theory's own, sometimes weird, perspective that you occasionally confidently disagree with. Less urgently, you also judge the theory as a whole according to your own worldview. If it's important/interesting, or has important/interesting competitors, then you keep developing it, even if it doesn't look promising as a whole (some of its internally generated arguments will survive its likely demise).
The point that should help with motivated cognition is switching between different paradigms/ideologies that focus on theories competing with a suspect worldview, so that a germ of truth overlooked in those misguided competitors won't be indefinitely stunted in its development. At the 5-second level, the skill is to maintain a two-level context: a description of the current theory/paradigm/ideology/hypothesis at the top level, and the current topic/argument/claim at the lower level.
There are two modes in which the theory's claims are judged. In the internal mode, you are channeling the theory, taking an Ideological Turing Test for it, thinking in character rather than just thinking, and considering the theory's claims according to the theory itself, so as to change them and develop the theory without contaminating their growth with your own perspective and judgement. This gives the arguments courage to speak their mind, without worrying that external-you disbelieves or disapproves of them.
In the external mode, you consider claims made by the theory from your own perspective and treat them as predictions. When multiple theories make predictions about the same claim, and you are externally confident in its truth, that applies a likelihood ratio to the external weights of the theories as wholes, according to their relative confidence in the claim. This affects the standing of the theories themselves, but shouldn't affect the standing of the claims within the theories. (The external weights of theories matter much less than the details of the arguments inside them, because there are all sorts of filtering and double counting, and because the certain falsity of a theory is not sufficient reason to discard it. But it's useful to maintain some awareness of them.)
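To make the external-mode bookkeeping concrete, here is a minimal sketch in Python. The theory names, weights, and probabilities are hypothetical, made up purely for illustration; the point is only the mechanics: when a claim you are externally confident in turns out true, each theory's external weight gets multiplied by the probability that theory assigned to the claim, and the weights are then renormalized.

```python
# Minimal sketch of the external-mode update described above.
# Theory names, weights, and probabilities are hypothetical.

def update_weights(weights, claim_probs):
    """Multiply each theory's external weight by the probability it
    assigned to the observed claim, then renormalize. The ratio of any
    two theories' weights shifts by exactly their likelihood ratio."""
    posterior = {t: weights[t] * claim_probs[t] for t in weights}
    total = sum(posterior.values())
    return {t: w / total for t, w in posterior.items()}

# Two competing theories with equal external weight.
weights = {"theory_A": 0.5, "theory_B": 0.5}

# Each theory's confidence in a claim you are externally confident is true.
claim_probs = {"theory_A": 0.9, "theory_B": 0.3}

print(update_weights(weights, claim_probs))
# -> {'theory_A': 0.75, 'theory_B': 0.25}
# theory_A gains by the 3:1 likelihood ratio. Note that this shifts only
# the external weights of the theories; the standing of claims *inside*
# each theory is deliberately left untouched.
```

As the final comment notes, the reweighting touches only the theories' external standing, which is exactly the separation the parenthetical above insists on.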