But surely at least the theorems don’t depend on the agents being able to fully reconstruct each other’s evidence?
You’re right: sometimes the agreement protocol terminates before the agents have fully reconstructed each other’s evidence, and they end up with a different agreed probability than they would have reached by just sharing the evidence directly.
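To make that concrete, here’s a minimal Python sketch of this phenomenon. The setup is my own illustrative construction in the style of the standard examples from the agreement literature (nothing here is from the original papers): two agents exchange posteriors, the announcements carry no information, so they agree immediately at 1/2, even though pooling their raw evidence would pin the probability to 1.

```python
from fractions import Fraction

states = {1, 2, 3, 4}              # uniform prior over four states
event = {1, 4}                     # the proposition whose probability they estimate
partition_a = [{1, 2}, {3, 4}]     # what agent A can distinguish
partition_b = [{1, 3}, {2, 4}]     # what agent B can distinguish
true_state = 1

def cell(partition, state):
    """The agent's evidence: the partition cell containing the true state."""
    return next(c for c in partition if state in c)

def posterior(evidence, event):
    """P(event | evidence) under the uniform prior."""
    return Fraction(len(evidence & event), len(evidence))

post_a = posterior(cell(partition_a, true_state), event)   # 1/2
post_b = posterior(cell(partition_b, true_state), event)   # 1/2

# Every cell of either partition yields posterior 1/2, so announcing "1/2"
# conveys nothing, and the protocol halts immediately with agreement at 1/2.
assert all(posterior(c, event) == Fraction(1, 2) for c in partition_a + partition_b)

# Pooling the evidence instead pins down the true state exactly:
pooled = cell(partition_a, true_state) & cell(partition_b, true_state)   # {1}
print(post_a, post_b, posterior(pooled, event))   # 1/2 1/2 1
```

The agents reach agreement, as the theorems guarantee, but at a value no amount of further posterior-trading will improve, whereas one round of stating their actual evidence would.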
But my main point was that exchanging information this way, by repeatedly updating on each other’s posterior probabilities, is not any easier than just sharing evidence or arguments. You have to go through convoluted logical deductions to infer what evidence the other guy might have seen, or what argument he might be thinking of, given the probability he’s telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don’t think humans can benefit from them, because those deductions are too hard to do in our heads.
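To see what those deductions involve mechanically, here’s a hedged sketch (the function name and setup are mine, purely illustrative, not any standard API): on hearing the other agent’s announced posterior, you enumerate which of their possible evidence cells could have produced that number, given everything the conversation has ruled out so far.

```python
from fractions import Fraction

def posterior(evidence, event):
    """P(event | evidence) under a uniform prior over a finite state space."""
    return Fraction(len(evidence & event), len(evidence))

def cells_consistent_with(announced, partition, event, still_possible):
    """The other agent's evidence cells that would have produced the
    announced posterior, given the states not yet ruled out."""
    return [c for c in partition
            if c & still_possible
            and posterior(c & still_possible, event) == announced]

# Reusing the setup from the previous sketch: hearing "1/2" from agent A
# leaves both of A's cells in play, so the announcement teaches B nothing.
print(cells_consistent_with(Fraction(1, 2), [{1, 2}, {3, 4}],
                            {1, 4}, {1, 2, 3, 4}))   # [{1, 2}, {3, 4}]
```

Doing this in your head means keeping track of the whole space of evidence the other person might hold and re-running Bayes on every announcement, which is exactly the burden that simply stating your evidence avoids.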
Also, it seems pretty obvious that you can’t offload the computational complexity of these protocols onto a third party. The problem is that the third party doesn’t have full knowledge of either original party’s information, so he can’t compute either party’s posterior probability given an announcement from the other.
It might be that a specialized “disagreement arbitrator” can still play some useful role, but I don’t see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.