I think your communication would really benefit from drawing a clear dichotomy between “beliefs about policy” and “beliefs about the world”. All beliefs about optimal policy should be assumed incorrect, e.g. quantum suicide, donating to SIAI, or writing angry letters to physicists interested in creating lab universes. Humans go insane when they think about policy, and Less Wrong is no exception. Your notion of “logical implication” seems to be an attempt to explain how one might feel justified in deriving political implications, but that just doesn’t work. If you made this dichotomy explicit, and made explicit that you’re worried about the infinite number of misguided policies that so naturally seem to follow from true weird beliefs, rather than about the weird beliefs in and of themselves, then folks would understand your concerns far more easily, and more progress could be made on setting up a culture that is resistant to rampant political ‘decision-theoretic’ insanity.
Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or decide whether a policy is better complied with or avoided?
...people occasionally need to settle on a policy or decide whether a policy is better complied with or avoided?
One example would be the policy of not talking about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.
No. But it is significantly more avoidable than commonly thought, and should largely be avoided for the first three years of hardcore rationality training. Or so the rules go in my should-world.
Drawing a map of the territory is disjunctively impossible; coming up with a halfway sane policy based thereon is conjunctively impossible. Metaphorically.
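Toy numbers, purely illustrative: a policy that depends on ten independent conclusions, each 90% likely to be right, is itself right only about 35% of the time (0.9^10 ≈ 0.35), while a map with ten independent 10%-chance failure modes goes wrong about 65% of the time (1 − 0.9^10). Conjunctions multiply your errors; disjunctions multiply your ways to fail.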