I’ve often had the thought that controversial topics may just be unknowable: as soon as a topic becomes controversial, it’s deleted from the public pool of reliable knowledge.
But yes, you could get around it by constructing a clear chain of inferences that’s publicly debuggable. (Ideally a Bayesian network: just input your own priors and see what comes out.)
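To make that parenthetical concrete, here is a minimal sketch (plain Python, with made-up numbers) of the "input your own priors" idea: a two-node network where a disputed claim C bears on an agreed-upon piece of evidence E, the likelihoods are published as part of the inference chain, and each reader supplies their own prior P(C) to get their own posterior.

```python
# Minimal sketch of a "publicly debuggable" inference chain:
# a two-node Bayesian network C -> E, where C is the disputed claim
# and E is an observed piece of evidence. The likelihoods are the
# published, auditable part; each reader plugs in their own prior.
# All numbers are illustrative, not real estimates.

def posterior(prior_c: float, p_e_given_c: float, p_e_given_not_c: float) -> float:
    """P(C | E observed), by Bayes' rule."""
    numerator = p_e_given_c * prior_c
    evidence = p_e_given_c * prior_c + p_e_given_not_c * (1 - prior_c)
    return numerator / evidence

# Published part of the map: how strongly the evidence bears on the claim.
P_E_GIVEN_C = 0.80      # chance of seeing E if the claim is true
P_E_GIVEN_NOT_C = 0.10  # chance of seeing E anyway if the claim is false

# Each reader's private input: their prior on the claim.
for prior in (0.05, 0.50, 0.95):
    post = posterior(prior, P_E_GIVEN_C, P_E_GIVEN_NOT_C)
    print(f"prior P(C)={prior:.2f} -> posterior P(C|E)={post:.2f}")
```

The point of publishing the likelihoods rather than a single conclusion is that readers who disagree only about priors can still check the same chain and see where their answers diverge.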
But that invites a new kind of adversary, because a treasure map to the truth also works in reverse: it’s a treasure map to exactly which facts need to be faked, if you want to fool many smart people. I worry we’d end up back at square one.