I remember a poll on LW about how people use Bayes' theorem in practical life (I can't find the link). There were only a few answers describing actual practical usage; there are not many practical situations where it is useful.
But it works well as a symbol of group membership, and in internet discussions.
I also think that EY is not always Bayesian. He sometimes assigns something 100 per cent probability without any empirical evidence, on the strength of a theory's simplicity and beauty, for example that MWI is the correct interpretation of QM. But if you put probability 0 on something (the other interpretations), it can't be updated by any evidence. He did the same when he said that a self-improving paperclip maximizer is the main risk of AI. But there are other AI risks that are also deadly (I counted around 100).
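The zero-prior point can be made precise with Bayes' rule itself. A minimal sketch (the numbers are made up for illustration):

```python
# Bayes' rule: P(H|E) = P(E|H) * P(H) / P(E).
# Once a hypothesis is assigned probability 0, no evidence can revive it.

def posterior(prior, likelihood, evidence_prob):
    """Posterior probability of hypothesis H given evidence E."""
    return likelihood * prior / evidence_prob

# A hypothesis with a nonzero prior gets updated by evidence:
print(posterior(0.2, 0.9, 0.3))   # prior 0.2 rises toward 0.6

# A hypothesis with prior 0 stays at 0, however strong the evidence:
print(posterior(0.0, 0.99, 0.3))  # always 0.0
```

This is why assigning probability 1 to MWI (and hence 0 to every rival interpretation) is a non-Bayesian move: it makes the conclusion immune to any future evidence.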
Is it good to think Bayes is this wonderful summum bonum of rationality, and not even notice how little use you yourself are making of it?
Is it good to come across to someone with a pluralistic understanding of reasoning as a dogmatist?
Elephant spotted.