This seems pretty similar to the irrationality game. That’s not necessarily a bad thing, but personally I would try the following formula next time (perhaps this should be a regular thread?):
Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone’s position flatly stated doesn’t usually improve the accuracy of my beliefs.
Ask people to avoid upvoting views they already agree with. This is to prevent the thread from becoming an echo chamber of edgy “contrarian” views that are in fact pretty widespread already.
Ask people to vote up only those comments that cause them to update or change their mind on some topic. Increased belief accuracy is what we want; let’s reward that.
Ask people to downvote only spam and trolling. By restricting downvotes this way, we lessen the anticipated social punishment for sharing an unpopular view that turns out to be incorrect (which matters, because it is that anticipation that decides whether people share such views at all).
Encourage people to make contrarian factual statements rather than contrarian value statements. If we believe different things about the world, we have a better chance of having a productive discussion than if we value different things in the world.
Not sure whether these rules should apply only to top-level comments or to every comment in the thread. Another interesting question: should playing devil’s advocate be allowed, i.e. presenting novel arguments for unpopular positions you don’t actually agree with, and under what circumstances (are disclaimers required, etc.)?
You could think of my proposed rules as sitting about halfway between the irrationality game and a normal LW open thread. Perhaps by doing a binary search, we can figure out the optimal degree of contrarianism to facilitate, and even make every Nth open thread a “contrarian open thread” that operates under those rules.
Another interesting way to do contrarian threads might be to pick particular views that seem popular on Less Wrong and try to think of the best arguments we can for why they might be incorrect. Kind of like a collective hypothetical apostasy. The advantage of this is that we generate potentially valuable contrarian positions no one is holding yet.
Ask people to defend their contrarian views rather than just flatly stating them. The idea here is to improve the accuracy of our collective beliefs, not just practice nonconformism (although that may also be valuable). Just hearing someone’s position flatly stated doesn’t usually improve the accuracy of my beliefs.
This has the problem that beliefs with a large inferential distance won’t get stated.
The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.
This has the problem that beliefs with a large inferential distance won’t get stated.
Is it useful to have beliefs with a large inferential distance stated without supporting evidence? Given that the inferential distance is large, I’m not going to be able to figure it out on my own, am I? At least a sketch of an argument would be useful. The more you fill in the argument, the more minds you change and the more upvotes you get.
The rest of your points seem to boil down to the old irrationality game rule of downvote if you agree, upvote if you disagree.
“Upvote if the comment caused you to change your mind” is not the same thing as “upvote if you disagree”.
Another idea, which kinda seems to be getting adopted in this thread already: have a short note at the bottom of every comment right above the vote buttons reminding people of the voting behavior for the thread, to counteract instinctive voting.