Oops! (that last post was not intended to test anyone’s psychic ability)
The problem with Bayesian reasoning lies in the setting of prior probabilities. There is some self-correction built in, so it is a better system than most (or than any other, if you prefer), but one particular problem rears its ugly head that is relevant to overcoming bias.
Suppose I want to discuss a particular phenomenon or idea with a Bayesian. Suppose this Bayesian has set the prior probability of this phenomenon or idea at zero.
What would be the proper gradient to approach the subject in such a case?
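To make the difficulty concrete: under Bayes' theorem, a prior of exactly zero is unrecoverable, since the posterior is proportional to the prior, and no amount of evidence multiplies zero into anything else. A minimal sketch (my own illustration, not anyone's official formulation):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H|E) via Bayes' theorem:
    P(H|E) = P(E|H)P(H) / (P(E|H)P(H) + P(E|~H)P(~H))
    """
    numerator = p_e_given_h * prior
    denominator = numerator + p_e_given_not_h * (1 - prior)
    return numerator / denominator

# A small but nonzero prior can be moved by strong evidence:
print(bayes_update(0.01, 0.9, 0.1))      # ~0.083

# A prior of exactly zero cannot be moved by any evidence at all:
print(bayes_update(0.0, 0.99, 0.001))    # 0.0
```

No "gradient of approach" helps here in the formal sense: every update just rescales the zero. The practical question is whether the Bayesian's zero is a true zero or shorthand for "very small," since only the latter leaves room for discussion.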