… surely at least the theorems don’t depend on the agents being able to fully reconstruct each other’s evidence?
They don’t necessarily reconstruct all of each other’s evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent’s evidence samples (“4 reds and 4 blacks”), but they cannot reconstruct the exact sequences (“RRBRBBRB”). And they can update again to perfect agreement regarding the urn contents.
Edit: minor cleanup for clarity.
At least that is my understanding of Aumann’s theorem.
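The exchangeability point in the urn example can be checked directly. Here is a minimal sketch: the urn size (10 balls), the uniform common prior over compositions, and the particular sample sequences are all hypothetical choices for illustration, not details from the discussion above. Because the likelihood of an i.i.d. sample depends only on its counts, two sequences with the same counts produce identical posteriors, which is why a reported posterior can reveal "4 reds and 4 blacks" but never "RRBRBBRB".

```python
from fractions import Fraction

# Hypothetical setup: each urn holds 10 balls, r red and 10 - r black,
# with a uniform common prior over r = 0..10.
COMPOSITIONS = range(11)

def posterior_red_lt_black(draws):
    """Posterior P(Freq(red) < Freq(black)) after sampling with replacement.

    draws is a string of 'R'/'B'. The binomial likelihood depends only on
    the counts (exchangeability), not on the order of the sequence.
    """
    k = draws.count('R')          # number of reds observed
    n = len(draws)                # sample size
    # Unnormalized posterior weight for each composition r.
    weights = {r: Fraction(r, 10) ** k * Fraction(10 - r, 10) ** (n - k)
               for r in COMPOSITIONS}
    total = sum(weights.values())
    # Freq(red) < Freq(black) means r < 5 in this setup.
    return sum(w for r, w in weights.items() if r < 5) / total

# Two different sequences with the same counts (4 reds, 4 blacks)
# yield exactly the same posterior, so an agent's reported probability
# reveals the counts of its evidence but not the exact sequence.
p1 = posterior_red_lt_black("RRBRBBRB")
p2 = posterior_red_lt_black("BBBBRRRR")
assert p1 == p2
```

A report of the posterior for a second, asymmetric event such as P(Freq(red) < 0.9*Freq(black)) would pin down the counts further, which is the role the second exchanged probability plays in the example above.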
That sounds right, but I was thinking of cases like this, where the whole process leads to a different (worse) answer than sharing information would have.
Hmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information.
In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.
That comment leaves me wondering what “pure Bayesianism” is.
I don’t think Bayesianism is a recipe for action in the first place—so how can “pure Bayesianism” be telling agents how they should be spending their time?
By “pure Bayesianism”, I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled “Comments” and particularly the subsection at the very end entitled “Another dimension?”. A pure “Jaynes Bayesian” seeks the truth, not because it is useful, but rather because it is truth.
By contrast, we might consider a “de Finetti Bayesian” who seeks the truth so as not to lose bets to Dutch bookies, or a “Wald Bayesian” who seeks truth to avoid loss of utility.
The Wald Bayesian clearly is looking for a recipe for action, and the de Finetti Bayesian seeks at least a recipe for gambling.
A truth seeker! Truth seeking is certainly pretty bizarre and unbiological. Agents can normally be expected to concentrate on making babies—not on seeking holy grails.
I don’t think Bayesianism is a recipe for action in the first place—so how can “pure Bayesianism” be telling agents how they should be spending their time?
It tells them everything. That includes inferences right down to their own cognitive hardware and the implications thereof. Given that the very meaning of ‘should’ can be reduced to cognitions of the speaker, Bayesian reasoning is applicable.
Hi! As brief feedback, I was trying to find out what “pure Bayesianism” was being used to mean—so this didn’t help too much.