If I remember correctly, Jaynes discusses this in Probability Theory and arrives at the conclusion that if a reasoning robot assigns a probability to changing its mind in a certain way, then it should update its belief now.
Of course, the general caveat here: humans are not robots; they don't perfectly adhere to either formal logic or plausible reasoning.
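Concretely, the argument is just a weighted average over the ways you expect your mind to change. A minimal sketch with made-up numbers (Python):

    # Hypothetical numbers: a 70% chance of seeing evidence that would push my
    # belief in A up to 0.9, and a 30% chance of evidence that would drop it to 0.2.
    p_evidence = [0.7, 0.3]   # probabilities of each future observation
    p_A_after  = [0.9, 0.2]   # belief in A after each observation
    p_A_now = sum(pe * pa for pe, pa in zip(p_evidence, p_A_after))
    print(p_A_now)            # 0.69 -- a consistent robot believes this already

If your belief now were anything other than that average, you'd already expect, on average, to be corrected in a predictable direction.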
If I remember correctly, Jaynes discusses this in Probability Theory
He does; it's in the chapter about the Ap distribution, which is basically a meta-probability. Or, more precisely, Ap stands for receiving future evidence that would put the probability of A at p, and you assign a probability to that; formally, P(A|Ap) = p. From this you can show that P(A) is the expected value of the Ap distribution. The chapter is "Inner and Outer Robots", available here:
http://www-biba.inrialpes.fr/Jaynes/cc18i.pdf
The outer robot, thinking about the real world, uses Aristotelian propositions referring to that world. The inner robot, thinking about the activities of the outer robot, uses propositions that are not Aristotelian in reference to the outer world; but they are still Aristotelian in its context, in reference to the thinking of the outer robot; so of course the same rules of probability theory will apply to them. The term 'probability of a probability' misses the point, since the two probabilities are at different levels.
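To make the "P(A) is the expected value of the Ap distribution" step concrete, here is a small numerical check. The discrete Ap distribution below is invented for illustration, not taken from the book (Python):

    import random

    # Invented discrete Ap distribution: the weight the inner robot puts on each
    # proposition "the probability of A is p".
    ap_dist = {0.1: 0.25, 0.5: 0.50, 0.9: 0.25}

    # P(A) as the expected value of the Ap distribution.
    p_A = sum(p * w for p, w in ap_dist.items())

    # Monte Carlo check: pick which Ap proposition holds, then use P(A|Ap) = p.
    random.seed(0)
    n = 200_000
    hits = sum(
        random.random() < random.choices(list(ap_dist), weights=list(ap_dist.values()))[0]
        for _ in range(n)
    )
    print(p_A, hits / n)   # both come out near 0.5

Note that the same outer-level P(A) can come from a tight or a spread-out Ap distribution; the spread is what the meta level adds.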
This always seemed like a really promising idea to me. Alas, I have a day job, and it isn't as a professor.
Ideally, your current probability should incorporate the probability-weighted average of all possible future evidence. This is required for your probabilities to be consistent across those evidence-producing timelines. Collectively, the set of probabilities of future experiences is your prior.
But this article isn't talking about belief or decision-making; it's talking about communication (and perhaps encoding in a limited storage mechanism like a brain). You really don't have the power to do that calculation well, nor to communicate at this level of detail. The idea of a probability range or probability curve is one reasonable (IMO) way to summarize a large set of partly correlated future evidence.
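One way to read the "probability range or probability curve" suggestion: keep a distribution over where your probability for A might end up, but communicate only its mean and a coarse interval. A sketch using an arbitrary Beta curve as a stand-in for that distribution (Python, standard library only):

    import random
    import statistics

    # Arbitrary Beta(8, 4) curve standing in for "where my probability for A
    # might land after seeing more evidence".
    random.seed(0)
    samples = [random.betavariate(8, 4) for _ in range(100_000)]

    point = statistics.fmean(samples)           # the single number you'd act on now
    qs = statistics.quantiles(samples, n=20)    # 5%, 10%, ..., 95% cut points
    lo, hi = qs[0], qs[-1]                      # central 90% range, easy to communicate
    print(f"P(A) ~= {point:.2f}, likely range {lo:.2f} to {hi:.2f}")

The mean is what you'd bet on today; the width is roughly the extra information the range is meant to carry.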