Someone else pointed out, in your previously linked comment “Confirmation Bias As Misfire Of Normal Bayesian Reasoning”, that Jaynes analyzed how people who start with different priors don’t necessarily converge to the same conclusions from the same data, even in the long run. We can diverge instead of converging.
Jaynes hits on a particular problem for truth convergence in politics: trust. We don’t witness most events ourselves; we only receive reports of them from others. A report that contradicts our prior on the facts can be explained in two ways: by raising our credence in the reported facts, or by lowering our credence in the honesty of the reporter.
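A minimal sketch of that mechanism (my own toy model, not Jaynes’s worked example): three agents share the same prior on a claim but hold different models of the reporter, and the same stream of reports drives their posteriors apart. All numbers here are illustrative.

```python
# Minimal sketch of Jaynes-style divergence: three agents see the
# same reports but model the reporter differently, so identical data
# pushes their posteriors on the claim in different directions.

def update(prior, p_report_if_true, p_report_if_false):
    """One Bayesian update on observing a report asserting the claim."""
    num = p_report_if_true * prior
    return num / (num + p_report_if_false * (1 - prior))

# Each agent's likelihoods P(report | claim true), P(report | claim false),
# encoding their model of the reporter (values are illustrative).
agents = {
    "trusting":    (0.9, 0.1),  # reporter mostly tells the truth
    "dismissive":  (0.5, 0.5),  # reporter says this regardless of the truth
    "adversarial": (0.1, 0.9),  # reporter is pushing a false narrative
}

posteriors = {name: 0.5 for name in agents}  # shared prior on the claim
for step in range(1, 6):                     # five identical reports
    for name, (pt, pf) in agents.items():
        posteriors[name] = update(posteriors[name], pt, pf)
    print(step, {n: round(p, 3) for n, p in posteriors.items()})
```

On identical data, the trusting agent converges toward the claim and the adversarial one toward its negation; the updating rule itself never arbitrates between their models of the reporter.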
I’m not religious, but I’ve come to appreciate how Christianity got one thing very right: false witness is a sin. It’s a malignant societal cancer. Condemnation of false witness is not a universal value.
ajb February 13, 2020 at 2:05 pm
I think Jaynes argues exactly this in his textbook on the Bayesian approach to probability, “Probability Theory: The Logic of Science”, in a section called “Converging and diverging views”, which can be found in this copy of Chapter 5:
http://www2.geog.ucl.ac.uk/~mdisney/teaching/GEOGG121/bayes/jaynes/cc5d.pdf
Out of curiosity, suppose you record every datapoint used to generate these priors (and every subsequent datapoint). How do you make AI systems that don’t fall into this trap?
My first guess is that it’s a problem in the same class as neural network training, where the random initial weights act as the prior. Some networks will never converge on a good answer simply because they start from a bad initialization. So you have to roll the dice on the initialization many more times.
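To make that restart idea concrete, here is a minimal sketch, assuming a tiny numpy MLP on XOR as a stand-in problem (everything here is a hypothetical toy, not a prescribed method): some seeds settle into bad optima, so we train from several initializations and keep the best.

```python
# Minimal sketch: random-restart training of a tiny MLP on XOR.
# Some random initializations (the "prior") get stuck in bad optima,
# so we train from several seeds and keep the best result.
import numpy as np

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(seed, hidden=2, lr=1.0, steps=5000):
    rng = np.random.default_rng(seed)
    # The random initial weights play the role of the prior.
    W1 = rng.normal(0, 1, (2, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 1, (hidden, 1)); b2 = np.zeros(1)
    for _ in range(steps):
        h = sigmoid(X @ W1 + b1)            # forward pass
        p = sigmoid(h @ W2 + b2)
        dp = (p - y) * p * (1 - p)          # backprop of squared error
        dh = (dp @ W2.T) * h * (1 - h)
        W2 -= lr * h.T @ dp; b2 -= lr * dp.sum(0)
        W1 -= lr * X.T @ dh; b1 -= lr * dh.sum(0)
    return float(np.mean((p - y) ** 2))     # final training loss

# Roll the dice on the initialization several times; keep the best.
losses = {seed: train(seed) for seed in range(10)}
for seed, loss in losses.items():
    print(f"seed {seed}: final loss {loss:.4f}")
print("best seed:", min(losses, key=losses.get))
```

Running this, some seeds reach near-zero loss while others stall at a visibly worse plateau, which is the “incorrect prior” failure mode in miniature.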