I’m surprised no one responded to this in 14 years [edit: I think the Hanson and Eliezer thread below addresses it well]. I think I agree with the post that explicit self-deception doesn’t work, but automatic self-deception via default selfish attention rationing happens all the time. Similarly, people can choose to be biased even if they can’t directly choose beliefs, because simplifying algorithms are necessary for thinking at all. A common example: all the logical razors people use are also biases, and you can explicitly choose not to rely on a razor and keep thinking.
I think this is the sort of thing one can find by going looking for cases of accidental self-deception, and failing to look can leave people in a mental trap where they believe their beliefs are rational to an unjustified degree.
Science Dogood
I feel like I would live on the internet if I had a successful version of the PMTMYLW business model, haha.
On a more serious note, one of the most important arts of epistemically valuable writing is communicating your meaning densely while leaving no room for misinterpretation. Propositions that aren’t obvious to everyone, and that some readers interpret as superweapons or some other kind of false but locally advantageous belief infrastructure, will naturally attract criticism.
This is an inherently difficult task, but not writing spaghetti-code posts in the first place prevents a lot of debugging and vulnerability patching later, as people attack one’s posts for both good-faith and bad-faith reasons.
The incentive problem here is that spaghetti-code posts with vulnerabilities drive disagreement, disagreement is a form of engagement, and so social media incentivizes bad writing as a way to gain an audience and shape the Overton Window of discourse.