Does anyone else think it would be immensely valuable if we had someone specialized (more so than anyone currently is) at extracting trustworthy, disinterested, x-rationality-informed probability estimates from relevant people’s opinions and arguments? This community already hopefully accepts that one can learn from knowing other people’s opinions without knowing their arguments; Aumann’s agreement theorem, and so forth. It seems likely to me that centralizing that whole aspect of things would save a ton of duplicated effort.
This community already hopefully accepts that one can learn from knowing other people’s opinions without knowing their arguments; Aumann’s agreement theorem, and so forth.
I don’t think Aumann’s agreement theorem has anything to do with taking people’s opinions as evidence. Aumann’s agreement theorem is about agents turning out to have been agreeing all along, given certain conditions, not about how to come to an agreement, or, worse, how to enforce agreement by responding to others’ beliefs.
More generally (as in, not about this particular comment), the mentions of this theorem on LW seem to have degenerated into applause lights for “boo disagreement”, having nothing to do with the theorem itself. It’s easier to use the associated label, even if such usage would be incorrect, but one should resist the temptation.
People sometimes use “Aumann’s agreement theorem” to mean “the idea that you should update on other people’s opinions”, and I agree this is inaccurate and it’s not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating. Should I have said Geanakoplos and Polemarchakis?
I think LWers have been using “Aumann agreement” to refer to the whole literature spawned by Aumann’s original paper, which includes explicit protocols for Bayesians to reach agreement. This usage seems reasonable, although I’m not sure if it’s standard outside of our community.
This community already hopefully accepts that one can learn from knowing other people’s opinions without knowing their arguments
I’m not sure this is right… Here’s what I wrote in Probability Space & Aumann Agreement:
But in such methods, the agents aren’t just moving closer to each other’s beliefs. Rather, they go through convoluted chains of deduction to infer what information the other agent must have observed, given his declarations, and then update on that new information. The two agents essentially still have to communicate I(w) and J(w) to each other, except they do so by exchanging posterior probabilities and making logical inferences from them.
Is there a result in the literature that shows something closer to your “one can learn from knowing other people’s opinions without knowing their arguments”?
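To make the deduction step concrete, here is a minimal Python sketch of that kind of dialogue, using a made-up toy model (four equally likely worlds, an arbitrary event, and one information partition per agent; none of these particulars come from the post being quoted). Each announcement lets the listener rule out the worlds in which the speaker would have announced something else:

```python
from fractions import Fraction

# Toy model (all particulars invented for illustration): four equally likely
# worlds, an event of interest, and one information partition per agent.
WORLDS = {1, 2, 3, 4}
EVENT = {1, 4}
PARTITIONS = [
    [{1, 2}, {3, 4}],      # what agent 0 can distinguish
    [{1, 2, 3}, {4}],      # what agent 1 can distinguish
]

def cell(partition, w):
    """The partition element containing world w (the agent's private evidence)."""
    return next(c for c in partition if w in c)

def prob(event, given):
    """P(event | given) under the uniform common prior."""
    return Fraction(len(event & given), len(given))

def dialogue(true_world, rounds=6):
    """Agents take turns announcing their posterior for EVENT; each announcement
    lets everyone rule out the worlds in which the speaker would have said
    something else, which is the chain of deduction described above."""
    common = set(WORLDS)           # worlds not yet ruled out by the announcements
    history = []
    for step in range(rounds):
        part = PARTITIONS[step % 2]
        q = prob(EVENT, cell(part, true_world) & common)
        history.append(q)
        common = {w for w in common
                  if prob(EVENT, cell(part, w) & common) == q}
    return history

print([str(q) for q in dialogue(true_world=1)])
# ['1/2', '1/3', '1/2', '1/2', '1/2', '1/2']
```

In this run agent 1 moves from 1/3 to 1/2 purely because of what agent 0’s announcements imply about agent 0’s evidence, which is the indirect communication being described.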
I haven’t read your post and my understanding is still hazy, but surely at least the theorems don’t depend on the agents being able to fully reconstruct each other’s evidence? If they do, then I don’t see how it could be true that the probability the agents end up agreeing on is sometimes different from the one they would have had if they were able to share information. In this sort of setting I think I’m comfortable calling it “updating on each other’s opinions”.
Regardless of Aumann-like results, I don’t see how:
one can learn from knowing other people’s opinions without knowing their arguments
could possibly be controversial here, as long as people’s opinions probabilistically depend on the truth.
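As a toy illustration of that condition (the numbers are entirely made up): if someone’s stated opinion is merely more likely when a hypothesis is true than when it is false, then the bare opinion is evidence, with no argument attached.

```python
# Made-up numbers: a well-informed person endorses hypothesis H 80% of the
# time when H is true and only 30% of the time when H is false.
prior_h = 0.5
p_endorse_given_h = 0.8
p_endorse_given_not_h = 0.3

posterior_h = p_endorse_given_h * prior_h / (
    p_endorse_given_h * prior_h + p_endorse_given_not_h * (1 - prior_h))
print(round(posterior_h, 3))   # 0.727: the bare opinion shifted the estimate
```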
but surely at least the theorems don’t depend on the agents being able to fully reconstruct each other’s evidence?
You’re right, sometimes the agreement protocol terminates before the agents fully reconstruct each other’s evidence, and they end up with a different agreed probability than if they just shared evidence.
But my point was mainly that exchanging information like this by repeatedly updating on each other’s posterior probabilities is not any easier than just sharing evidence/arguments. You have to go through these convoluted logical deductions to try to infer what evidence the other guy might have seen or what argument he might be thinking of, given the probability he’s telling you. Why not just tell each other what you saw or what your arguments are? Some of these protocols might be useful for artificial agents in situations where computation is cheap and bandwidth is expensive, but I don’t think humans can benefit from them because it’s too hard to do these logical deductions in our heads.
Also, it seems pretty obvious that you can’t offload the computational complexity of these protocols onto a third party. The problem is that the third party does not have full information of either of the original parties, so he can’t compute the posterior probability of either of them, given an announcement from the other.
It might be that a specialized “disagreement arbitrator” can still play some useful role, but I don’t see any existing theory on how it might do so. Somebody would have to invent that theory first, I think.
… surely at least the theorems don’t depend on the agents being able to fully reconstruct each other’s evidence?
They don’t necessarily reconstruct all of each other’s evidence, just the parts that are relevant to their common knowledge. For example, two agents have common priors regarding the contents of an urn. Independently, they sample from the urn with replacement. They then exchange updated probabilities for P(Urn has Freq(red)<Freq(black)) and P(Urn has Freq(red)<0.9*Freq(black)). At this point, each can reconstruct the sizes and frequencies of the other agent’s evidence samples (“4 reds and 4 blacks”), but they cannot reconstruct the exact sequences (“RRBRBBRB”). And they can update again to perfect agreement regarding the urn contents.
At least that is my understanding of Aumann’s theorem.
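Here is a rough sketch of that urn example in code, under assumptions the comment does not specify (an urn of 19 balls, a uniform prior over how many are red, and both agents known to have drawn 8 times). It also checks the claim that the announced pair of probabilities is enough to recover the other agent’s counts:

```python
from fractions import Fraction

N = 19                 # assumed urn size (not specified in the comment)
K = range(N + 1)       # k = number of red balls; uniform common prior over k

def posterior_over_k(reds, blacks):
    """Posterior over urn compositions after sampling with replacement."""
    w = [Fraction(k, N) ** reds * Fraction(N - k, N) ** blacks for k in K]
    total = sum(w)
    return [x / total for x in w]

def announcements(reds, blacks):
    """The two probabilities an agent would announce after seeing this sample."""
    post = posterior_over_k(reds, blacks)
    p1 = sum(p for k, p in zip(K, post) if k < N - k)             # Freq(red) < Freq(black)
    p2 = sum(p for k, p in zip(K, post) if 10 * k < 9 * (N - k))  # Freq(red) < 0.9*Freq(black)
    return p1, p2

# With 8 draws each, every possible count (0..8 reds) yields a distinct pair of
# announcements, so hearing the pair is as good as hearing "4 reds and 4 blacks".
pairs = {announcements(r, 8 - r): r for r in range(9)}
print(len(pairs) == 9)                           # True
print([float(p) for p in announcements(4, 4)])   # what a 4-red, 4-black sample implies
```

Whether the announcements are invertible like this depends on the extra assumptions (the sample sizes being common knowledge, this particular urn size), but it shows the sense in which the exchanged posteriors carry the counts.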
That sounds right, but I was thinking of cases like this, where the whole process leads to a different (worse) answer than sharing information would have.
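That linked case is not reproduced here, but a standard toy example of the same phenomenon is easy to write down: every possible announcement is already common knowledge, so the protocol stops immediately at an agreed probability that is worse than what pooling the raw evidence would give.

```python
from fractions import Fraction

# Nine equally likely worlds arranged on a 3x3 grid: agent 0 observes the row,
# agent 1 observes the column, and the event of interest is the diagonal.
# (A standard toy case, not the Venus/Mars example from the linked comment.)
ROWS = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
COLS = [{1, 4, 7}, {2, 5, 8}, {3, 6, 9}]
EVENT = {1, 5, 9}
TRUE_WORLD = 1

def prob(event, given):
    """P(event | given) under the uniform common prior."""
    return Fraction(len(event & given), len(given))

row = next(c for c in ROWS if TRUE_WORLD in c)
col = next(c for c in COLS if TRUE_WORLD in c)

# Every row and every column gives the diagonal probability 1/3, so the
# announcements reveal nothing and the agents immediately "agree" at 1/3 ...
print(prob(EVENT, row), prob(EVENT, col))   # 1/3 1/3
# ... even though pooling the raw evidence would settle the question.
print(prob(EVENT, row & col))               # 1
```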
Hmmm. It appears that in that (Venus, Mars) case, the agents should be exchanging questions as well as answers. They are both concerned regarding catastrophe, but confused regarding planets. So, if they tell each other what confuses them, they will efficiently communicate the important information.
In some ways, and contrary to Jaynes, I think that pure Bayesianism is flawed in that it fails to attach value to information. Certainly, agents with limited communication channel capacity should not waste bandwidth exchanging valueless information.
That comment leaves me wondering what “pure Bayesianism” is.
I don’t think Bayesianism is a recipe for action in the first place—so how can “pure Bayesianism” be telling agents how they should be spending their time?
By “pure Bayesianism”, I meant the attitude expressed in Chapter 13 of Jaynes, near the end in the section entitled “Comments” and particularly the subsection at the very end entitled “Another dimension?”. A pure “Jaynes Bayesian” seeks the truth, not because it is useful, but rather because it is truth.
By contrast, we might consider a “de Finetti Bayesian” who seeks the truth so as not to lose bets to Dutch bookies, or a “Wald Bayesian” who seeks truth to avoid loss of utility.
The Wald Bayesian clearly is looking for a recipe for action, and the de Finetti Bayesian seeks at least a recipe for gambling.
A truth seeker! Truth seeking is certainly pretty bizarre and unbiological. Agents can normally be expected to concentrate on making babies—not on seeking holy grails.
I don’t think Bayesianism is a recipe for action in the first place—so how can “pure Bayesianism” be telling agents how they should be spending their time?
It tells them everything. That includes inferences right down to their own cognitive hardware and the implications thereof. Given that the very meaning of ‘should’ can be reduced to cognitions of the speaker, Bayesian reasoning is applicable.
Hi! As brief feedback, I was trying to find out what “pure Bayesianism” was being used to mean—so this didn’t help too much.
for an ideal Bayesian, I think ‘one can learn from X’ is categorically true for all X....
You have to also be able to deduce how much of the other agent’s information is shared with you. If you and they got your posteriors by reading the same blogs and watching the same TV shows, that is very different from the case where you reached the same conclusion through completely different channels.
Somewhere in there is a joke about the consequences of a sedentary lifestyle.
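Returning to the shared-channels point, a toy calculation with made-up numbers shows why it matters: two opinions that are conditionally independent given the truth move you much further than the same opinion heard through two channels that trace back to one source.

```python
# Made-up reliability for a single endorsement of H: 80% likely if H is true,
# 30% likely if H is false.
prior = 0.5

def update(p, like_true=0.8, like_false=0.3):
    """Posterior after hearing one endorsement of H."""
    return like_true * p / (like_true * p + like_false * (1 - p))

independent = update(update(prior))   # two genuinely independent endorsements
shared_source = update(prior)         # two endorsements that both trace back to one blog post
print(round(independent, 3), round(shared_source, 3))   # 0.877 0.727
```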
People sometimes use “Aumann’s agreement theorem” to mean “the idea that you should update on other people’s opinions”, and I agree this is inaccurate and it’s not what I meant to say, but surely the theorem is a salient example that implicitly involves such updating.
The theorem doesn’t involve any updating, so it’s not a salient example in a discussion of updating, much less a proxy for it.
Should I have said Geanakoplos and Polemarchakis?
To answer literally, simply not mentioning the theorem would’ve done the trick, since there didn’t seem to be a need for elaboration.
For other people’s opinions, perhaps see: http://www.takeonit.com/
I’m not sure about having a centralised group doing this, but I did experiment with making a tool that could help infer consequences from beliefs. Imagine something a little like this, but with chains of philosophical statements that have degrees of confidence. Users would assign confidence to axioms and construct trees of argument using them. The system would automatically determine the confidences of conclusions. It could even exist as a competitive game, with a community determining the confidence of axioms. It could also be used to rapidly determine differences in opinion, i.e. infer the main points of contention from different axiom weightings. If anyone knows of anything similar, or has suggestions for such a system, I’d love to hear them, including any reasons why it might fail, because I think it’s an interesting solution to the problem of ‘how to efficiently debate reasonably’.
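For what it is worth, here is one minimal sketch of how the confidence propagation might work; the combination rule (a noisy-AND over premises with a link strength) and the example claims are assumptions of mine, not part of the proposal above.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Claim:
    """A node in the argument tree: an axiom with a user-assigned confidence,
    or a conclusion supported by premise claims."""
    name: str
    confidence: Optional[float] = None      # set directly for axioms
    premises: List["Claim"] = field(default_factory=list)
    link_strength: float = 1.0              # how strongly the premises jointly support it

    def evaluate(self) -> float:
        if self.confidence is not None:     # axiom: confidence assigned by users
            return self.confidence
        # Assumed combination rule ("noisy-AND"): the conclusion holds only if
        # every premise holds and the inferential step itself is sound.
        p = self.link_strength
        for premise in self.premises:
            p *= premise.evaluate()
        return p

# A hypothetical three-node tree.
a1 = Claim("minds are physical processes", confidence=0.95)
a2 = Claim("physical processes can be simulated", confidence=0.8)
c1 = Claim("minds can be simulated", premises=[a1, a2], link_strength=0.9)
print(round(c1.evaluate(), 3))   # 0.684, derived automatically from the axiom weightings
```

Re-running the evaluation under each person’s axiom confidences would then localise the disagreement to the axioms or link strengths where the derived confidences diverge most.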