Also, would it be more correct to shift the probability distribution towards the other person's beliefs, or towards uncertainty (a uniform distribution of probabilities, AFAIU)?
Interesting thought.
First, uncertainty isn’t always a uniform distribution; it depends on your background information (simple things like scale invariance will make it non-uniform, for example). But if the value is in some finite range, the expected value of the uncertain distribution is probably somewhere in the middle.
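As a quick illustration (my own, with made-up numbers, not something from the question): a scale-invariant prior p(x) ∝ 1/x over a finite range is not uniform, and its expected value sits below the midpoint of the range.

```python
import numpy as np

# Sketch: compare a uniform prior with a scale-invariant prior p(x) ∝ 1/x
# on the finite range [1, 10]; the 1/x form is what you get if no particular
# unit of measurement is privileged.
a, b = 1.0, 10.0
x = np.linspace(a, b, 100_000)

uniform = np.ones_like(x)
uniform /= uniform.sum()                  # normalize to a probability vector

scale_invariant = 1.0 / x
scale_invariant /= scale_invariant.sum()

print("midpoint of range:", (a + b) / 2)                     # 5.5
print("mean under uniform prior:", (x * uniform).sum())      # ≈ 5.5
print("mean under 1/x prior:", (x * scale_invariant).sum())  # ≈ 3.9
```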
So the thought becomes “can someone espousing a more extreme view make your view more moderate?”
If the other person were perfect, the answer would clearly be “no.” Evidence is evidence. If they have evidence, and they tell you the evidence, now you have the evidence too.
If we model people as making mistakes sometimes, and drawing their answer from the uncertain distribution when they make a mistake, it seems like it could happen. However, it would require you to change what you think the chance of making a mistake is by an amount large enough to counteract the weight of the other person’s view.
It’s a little tough to find real-world examples of this, because it can only happen in the right class of problems: if you get it wrong, you have to get it totally wrong. A novel math calculation might fit the bill, but even then there are reasons you won’t usually be totally wrong when you’re wrong (a dropped sign or factor still leaves you in roughly the right place, for instance).
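To see how that could work in principle, here’s a toy numerical sketch (my own construction with made-up numbers, not anything from the comment above). The model: answers live in [0, 10]; someone who doesn’t make a mistake reports a value near the truth; someone who does make a mistake reports a draw from the uniform “uncertain” distribution; and both people share an unknown mistake rate that is either low or high. Hearing a strongly divergent answer mostly pushes up the inferred mistake rate, and with these particular numbers the posterior mean drifts slightly toward the midpoint rather than toward the other person’s more extreme value.

```python
import numpy as np
from scipy.stats import norm

theta = np.linspace(0, 10, 2001)     # grid over the unknown true value
d_theta = theta[1] - theta[0]
sigma = 0.5                          # spread of a non-mistaken answer

my_answer, their_answer = 6.5, 9.0   # their view is the more extreme one
eps_values = np.array([0.05, 0.95])  # possible shared mistake rates
eps_prior = np.array([0.8, 0.2])     # I initially think mistakes are rare

def answer_lik(answer, eps):
    """p(reported answer | theta, eps): near the truth if no mistake,
    uniform over [0, 10] if a mistake was made."""
    return (1 - eps) * norm.pdf(answer, loc=theta, scale=sigma) + eps / 10.0

# My belief after my own calculation, before hearing the other person
# (uniform prior over theta, marginalizing over the mistake rate).
before = sum(p * answer_lik(my_answer, e) for p, e in zip(eps_prior, eps_values))
before /= before.sum() * d_theta
print("mean before hearing them:", (theta * before).sum() * d_theta)    # ~6.2

# Joint update on both answers, one row per mistake-rate hypothesis.
joint = np.array([
    p * answer_lik(my_answer, e) * answer_lik(their_answer, e)
    for p, e in zip(eps_prior, eps_values)
])
post_eps = joint.sum(axis=1) * d_theta
post_eps /= post_eps.sum()
after = joint.sum(axis=0)
after /= after.sum() * d_theta

print("P(high mistake rate) before vs after:", eps_prior[1], post_eps[1])  # 0.2 -> ~0.7
print("mean after hearing them:", (theta * after).sum() * d_theta)         # ~6.0
```

Make the other person reliable enough, or the problem easy enough, and the pull toward their stated value wins instead; that is the “large enough to counteract the weight of their view” condition.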