For complex topics on which I do not have deep knowledge, e.g. AI Alignment, I find my opinion is easily swayed by any sufficiently well-written, plausible-sounding argument. So I recognize that I lack the knowledge and perspective to add value to the discussion, and I purposefully avoid making confident claims on the subject unless and until I decide to dedicate significant effort to closing the inferential distance.

Scott Alexander had a nice piece on this:
When I was young I used to read pseudohistory books; Immanuel Velikovsky’s Ages in Chaos is a good example of the best this genre has to offer. I read it and it seemed so obviously correct, so perfect, that I could barely bring myself to bother to search out rebuttals.
And then I read the rebuttals, and they were so obviously correct, so devastating, that I couldn’t believe I had ever been so dumb as to believe Velikovsky.
And then I read the rebuttals to the rebuttals, and they were so obviously correct that I felt silly for ever doubting.
And so on for several more iterations, until the labyrinth of doubt seemed inescapable. What finally broke me out wasn’t so much the lucidity of the consensus view as starting to sample different crackpots. Some were almost as bright and rhetorically gifted as Velikovsky, all presented insurmountable evidence for their theories, and all had mutually exclusive ideas. After all, Noah’s Flood couldn’t have been a cultural memory both of the fall of Atlantis and of a change in the Earth’s orbit, let alone of a lost Ice Age civilization or of megatsunamis from a meteor strike. So given that at least some of those arguments are wrong and all seemed practically proven, I am obviously just gullible in the field of ancient history. Given a total lack of independent intellectual steering power and no desire to spend thirty years building an independent knowledge base of Near Eastern history, I choose to just accept the ideas of the prestigious people with professorships in Archaeology, rather than those of the universally reviled crackpots who write books about Venus being a comet.
You could consider this a form of epistemic learned helplessness, where I know any attempt to evaluate the arguments is just going to be a bad idea so I don’t even try. If you have a good argument that the Early Bronze Age worked completely differently from the way mainstream historians believe, I just don’t want to hear about it. If you insist on telling me anyway, I will nod, say that your argument makes complete sense, and then totally refuse to change my mind or admit even the slightest possibility that you might be right.
(This is the correct Bayesian action: if I know that a false argument sounds just as convincing as a true argument, argument convincingness provides no evidence either way. I should ignore it and stick with my prior.)
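To spell out that last parenthetical (my gloss, not part of Scott’s quote): in the odds form of Bayes’ theorem, if a convincing argument is equally likely whether hypothesis $H$ is true or false, the likelihood ratio is 1 and the posterior odds simply equal the prior odds:

\[
\frac{P(H \mid \text{convincing})}{P(\neg H \mid \text{convincing})}
= \frac{P(\text{convincing} \mid H)}{P(\text{convincing} \mid \neg H)}
\cdot \frac{P(H)}{P(\neg H)}
= 1 \cdot \frac{P(H)}{P(\neg H)}.
\]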
If you don’t know who to believe, then falling back on prediction markets, or at least expert consensus, is not the worst strategy.
In this case, though, prediction markets will be predictably over-optimistic (a bet on catastrophe pays out only in a world where there is no one left to collect), and expert consensus is deeply split.