Do you notice your beliefs changing over time to match whatever is most self-serving? I know that some of you enlightened LessWrong folks have already overcome your biases and biological propensities, but I notice that I haven’t.
Four years ago, I was a poor university student struggling to make ends meet. I didn’t have a high-paying job lined up at the time, and I was very uncertain about the future. My beliefs were somewhat anti-big-business and anti-economic-growth.
However, now that I have a decent job, which I’m performing well at, my views have shifted towards pro-economic-growth. I notice myself finding Tyler Cowen’s argument that economic growth is a moral imperative quite compelling because it justifies my current context.
The general strategy is to hold fewer beliefs: allow detailed ideas/hypotheses/theories to develop without giving much attention to evaluating their external correctness (such as whether they have instances in the real world, or whether they make sense in other paradigms you understand), and instead focus on their internal correctness (the validity of arguments inside the idea, from the point of view of the paradigm/mindset native to it). Then you only suffer motivated attention, which is easier to counter by making sure to pay some attention to developing an understanding of the alternatives.
The results look pretty weird, though. For example, I imagine my writing might give the vague impression that I change my mind back and forth, on a months-long scale, about topics I’ve thought about for years, or that I believe contradictory things at the same time, or that I forget some fundamental, well-known things (paradigms can insist on failing to understand/notice some established facts, especially when their well-being depends on it). I’m not sure how to communicate transient interest in an obscure idea without it coming across as a resurgent belief in it (with a touch of amnesia about other things), and I keep using belief-words to misleadingly describe beliefs that only hold inside an idea/hypothesis/theory. (This is not something language has established conventions for.)
Let me try to apply this approach to my views on economic progress.
To do that, I would look at the evidence in favour of economic progress being a moral imperative (e.g. improvements in wellbeing) and the evidence against it (e.g. the development of powerful military technologies), and then make a level-headed assessment that’s proportional to the evidence.
It takes a lot of effort to keep my beliefs proportional to the evidence, but no one said rationality is easy.
I don’t see it. The point of the technique is to indefinitely defer your own judgement on arguments/claims/facts that live inside theories, giving them slack to grow according to each theory’s own perspective (for months and years). Instead of judging individual arguments in their own right, according to your own worldview, you judge them within the theory, from the theory’s weird perspective, which you occasionally confidently disagree with. Less urgently, you also judge the theory as a whole according to your own worldview. If it’s important/interesting, or has important/interesting competitors, then you keep developing it, even if it doesn’t look promising as a whole (some of its internally generated arguments will survive its likely demise).
The part that should help with motivated cognition is switching between different paradigms/ideologies that focus on theories competing with a suspect worldview, so that a germ of truth overlooked by its misguided competitors won’t be indefinitely stunted in its development. At the 5-second level, the skill is to maintain a two-level context: a description of the current theory/paradigm/ideology/hypothesis at the top level, and the current topic/argument/claim at the lower level.
There are two modes in which the theory’s claims are judged. In the internal mode, you are channeling the theory, taking an Ideological Turing Test for it, thinking in character rather than just thinking: you consider the theory’s claims according to the theory itself, changing them and developing the theory without letting your own perspective and judgement interfere with their growth. This gives the arguments the courage to speak their mind, without worrying that they are disbelieved or disapproved of by external-you.
In the external mode, you consider the claims made by the theory from your own perspective and treat them as predictions. When multiple theories make predictions about the same claim, and you are externally confident in its truth, that applies a likelihood ratio to the external weights of the theories as wholes, according to their relative confidence in the claim. This affects the standing of the theories themselves, but shouldn’t affect the standing of the claims within the theories. (The external weights of theories are much less important than the details of the arguments inside the theories, because there are all sorts of filtering and double counting, and because the certain falsity of a theory is not sufficient for discarding it. But it’s useful to maintain some awareness of them.)
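To make the likelihood-ratio bookkeeping concrete, here is a minimal sketch under a straightforward Bayesian reading of the above. The theory names, prior weights, and confidence numbers are all invented for illustration, not taken from anything said in this thread:

```python
# Minimal sketch of the external-mode update described above, assuming a
# Bayesian reading: each theory's external weight is multiplied by the
# probability that theory assigned to the claim, then weights are renormalized.
# Theory names, prior weights, and confidence numbers are illustrative only.

def update_external_weights(weights, confidence_in_claim):
    """Reweight theories after a claim they all predicted turns out true.

    weights: dict mapping theory name -> prior external weight (sums to 1)
    confidence_in_claim: dict mapping theory name -> probability it gave the claim
    Only the theories' external standing changes; the claims inside each
    theory are left untouched.
    """
    posterior = {t: weights[t] * confidence_in_claim[t] for t in weights}
    total = sum(posterior.values())
    return {t: w / total for t, w in posterior.items()}

# Two hypothetical theories disagree about a claim you are externally
# confident is true.
prior = {"growth-as-imperative": 0.5, "degrowth": 0.5}
confidence = {"growth-as-imperative": 0.9, "degrowth": 0.3}  # 3:1 likelihood ratio
print(update_external_weights(prior, confidence))
# {'growth-as-imperative': 0.75, 'degrowth': 0.25}
```

The update is deliberately confined to the weights of the theories as wholes; nothing inside either theory gets edited on its basis, which mirrors the separation between the external and internal modes above.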