Remember that Bayesian evidence never reaches 100%, which suggests a middle ground: upon hearing another rationalist's viewpoint, instead of not shifting at all (as you suggest) or averaging your estimate with theirs (as AAT suggests), why not adjust your viewpoint based on how likely the other rationalist is to have assessed correctly? For example: you believe X is 90% likely to be true, and the other rationalist believes it is 90% likely to be false. Suppose this rationalist is very reliable, say in the neighborhood of 75% accurate. Then you should adjust your viewpoint down: X is 75% likely to be 10% likely to be true, and 25% likely to be 90% likely to be true, which works out to around 30% (assuming I did my math right). Now suppose he's not very reliable, say a creationist talking about evolution, at around 10% accuracy. Then you should adjust to X being 10% likely to be 10% likely and 90% likely to be 90% likely, which comes to 82%. …Of course, this doesn't factor in your own fallibility.
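A minimal sketch of the mixture arithmetic above, in Python; the function name and structure are my own for illustration, not anything from the original comment:

```python
def reliability_weighted_estimate(my_p: float, their_p: float, their_reliability: float) -> float:
    """Mix the two estimates: with probability `their_reliability` the other
    person assessed correctly (so adopt their estimate), otherwise keep mine."""
    return their_reliability * their_p + (1.0 - their_reliability) * my_p

# The two cases from the comment (my estimate 90%, theirs 10%):
print(reliability_weighted_estimate(0.90, 0.10, 0.75))  # ~0.30 (reliable interlocutor)
print(reliability_weighted_estimate(0.90, 0.10, 0.10))  # ~0.82 (unreliable, e.g. the creationist)
```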