I think “we should be skeptical of our very methods” is a fully general counterargument and “the probability of the conjunction of four things is less than the probability of any one of them” is true but weak, since the conjunction of (only!) four things that it’s worth taking seriously is still worth taking seriously.
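To make the conjunction arithmetic concrete, here is a toy illustration in Python. The credences are made up for illustration, not taken from the discussion: a conjunction can never be more probable than its weakest conjunct, and under an independence assumption it is the product of the conjuncts.

```python
# Toy illustration of the conjunction point (hypothetical credences).
probs = [0.8, 0.8, 0.8, 0.8]  # made-up credences in four individual claims

# The conjunction can never be more probable than its weakest conjunct.
upper_bound = min(probs)

# Under an independence assumption, the conjunction is the product.
conjunction = 1.0
for p in probs:
    conjunction *= p

print(f"bound from weakest conjunct: {upper_bound:.2f}")  # 0.80
print(f"independent conjunction:     {conjunction:.2f}")  # 0.41
# 0.41 is lower than any single claim, but still worth taking seriously.
```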
Also,
Neither are the motives and trustworthiness of the people who make those claims examined.
Seems just obviously false. They’re examined all the time. (And none of these links are even to your posts!)
Yes, the conclusions seem weird. Yes, maybe we should be alarmed by that. But let’s not rationalize the perception of weirdness as arising from technical considerations rather than social intuitions.
Seems just obviously false. They’re examined all the time. (And none of these links are even to your posts!)
You’re right, I have to update my view there. When I started posting here, I felt it was different. That seems to have changed somewhat dramatically since. I hope this trend continues without itself becoming unwarranted.
I do disagree somewhat with the rest of your comment, though. I feel I am often misinterpreted when I say that we should be more careful about some of the extraordinary conclusions here. What I mean is not their weirdness but the scope of the consequences of being wrong about them. I have a very bad feeling about using the implied scope of the conclusions to outweigh their low probability. I feel we should put more weight on the consequences of our conclusions being wrong than on their being right. I can’t justify this, but an example would be quantum suicide (ignore, for the sake of argument, that there are reasons it is stupid other than the possibility that MWI is wrong). I wouldn’t commit quantum suicide even given high confidence in MWI being true. Logical implications don’t seem enough in some cases. Maybe I am simply biased, but I have been unable to overcome it yet.
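Here is a toy sketch of the asymmetry being described, with entirely hypothetical numbers: naive expected value lets a large implied payoff swamp a low probability, while an explicit extra weight on the consequences of being wrong can flip the verdict. The weighting factor k below is an illustrative assumption, not anything derived from the comment.

```python
# All numbers here are hypothetical, chosen only to illustrate the asymmetry.
p_claim = 1e-6          # credence in the extraordinary conclusion
payoff_if_right = 1e10  # implied scope of acting on it and being right
cost_if_wrong = 1e3     # cost of acting on it and being wrong

# Naive expected value: the huge payoff swamps the tiny probability.
naive_ev = p_claim * payoff_if_right - (1 - p_claim) * cost_if_wrong
print(f"naive EV:    {naive_ev:.0f}")  # ~9000, so "act on it"

# Weighting the consequences of being wrong more heavily (the factor k
# is an illustrative assumption) can reverse the recommendation.
k = 100.0
cautious_ev = p_claim * payoff_if_right - k * (1 - p_claim) * cost_if_wrong
print(f"cautious EV: {cautious_ev:.0f}")  # negative, so "don't act"
```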
I think your communication would really benefit from a clear dichotomy between “beliefs about policy” and “beliefs about the world”. All beliefs about optimal policy should be assumed incorrect, e.g. quantum suicide, donating to SIAI, or writing angry letters to physicists who are interested in creating lab universes. Humans go insane when they think about policy, and Less Wrong is not an exception. Your notion of “logical implication” seems to be trying to explain how one might feel justified in deriving political implications, but that totally doesn’t work. If you made this dichotomy explicit, and made explicit that you’re worried about the endless misguided policies that so naturally seem to follow from true weird beliefs, rather than about the weird beliefs in and of themselves, then folks would understand your concerns a lot more easily, and more progress could be made on setting up a culture that is more resistant to rampant political ‘decision theoretic’ insanity.
Is thinking about policy entirely avoidable, considering that people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?
...people occasionally need to settle on a policy or need to decide whether a policy is better complied with or avoided?
One example would be the policy not to talk about politics. Authoritarian regimes usually employ that policy; most just fail to frame it as rationality.
No. But it is significantly more avoidable than commonly thought, and should largely be avoided for the first 3 years of hardcore rationality training. Or so the rules go in my should world.
Drawing a map of the territory is disjunctively impossible; coming up with a halfway sane policy based thereon is conjunctively impossible. Metaphorically.