I think there’s another, more fundamental reason why Aumann agreement doesn’t matter in practice. It requires each party to assume the other is completely rational and honest.
Acting as if the other party is rational is good for promoting calm and reasonable discussion. Seriously considering the possibility that the other party is rational is certainly valuable. But assuming that the other party is in fact totally rational is just silly. We know we’re talking to other flawed human beings, and either or both of us might just be totally off base, even if we’re hanging around on a rationality discussion board.
I believe Hanson’s paper on ‘Bayesian wannabes’ shows that even only partially rational agents must agree about a lot.
Jaw-droppingly (for me), that paper apparently uses “Bayesians” to refer to agents whose primary goal involves seeking (and sharing) the truth.
IMO, “Bayesians” should refer to agents that employ Bayesian statistics, regardless of what their goals are.
That Hanson casually employs this other definition without discussing the issue or defending his usage says a lot about his attitude to the subject.
I assume this just means that their primary epistemic goal is such, not that this is their utility function.
That’s why I used the word “involves”.
However, surely there are possible agents who are major fans of Bayesian statistics who don’t have the time or motive to share their knowledge with other agents. Indeed, they may actively spread disinformation to other agents in order to manipulate them. Those folk are not bound to agree with other agents when they meet them.
Won’t the utility function eventually update to match?
Maybe I lack imagination—is it possible for a strict Bayesian to do anything but seek and share the truth (assuming he is interacting with other Bayesians)?
Bayes’ rule is about how to update your estimates of the probability of hypotheses on the basis of incoming data. It has nothing to say about an agent’s goals or how it behaves. Agents can employ Bayesian statistics to update their worldview while pursuing literally any goal.
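To make that concrete, here is a minimal sketch in Python; the hypothesis and every number are made up for illustration. Two agents with opposite goals see the same data and perform the identical update:

```python
# Two agents with opposite goals perform the identical Bayesian update;
# all numbers here are invented for illustration.
prior = 0.30                  # P(H): some hypothesis H
p_d_given_h = 0.80            # P(D | H): chance of the observed data if H holds
p_d_given_not_h = 0.10        # P(D | not-H)

# P(D) by the law of total probability, then Bayes' rule for P(H | D).
p_d = p_d_given_h * prior + p_d_given_not_h * (1 - prior)
posterior = p_d_given_h * prior / p_d

# A truth-seeking scientist and a tobacco PR officer who both see D
# end up holding the same posterior (about 0.774); only what each
# chooses to *do* with that belief differs.
print(posterior)
```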
If you think the term “Bayesian” implies an agent whose goal necessarily involves spreading truth to other agents, I have to ask for your references for that idea.
I am looking at the world around me and at the definition of Bayesian, assuming the process has been going on in an agent for long enough for it to be properly called “a Bayesian agent”, and thinking to myself: the agent space I end up in has certain properties.
Of course, I’m using the phrase “Bayesian agent” to mean something slightly different than what the original poster intended.
Of course the agent space you end up in has certain properties, but the issue is whether those properties necessarily involve sharing the truth with others.
I figure you can pursue any goal using Bayesian statistics, including goals that involve deceiving and misleading others.
For example, a Bayesian public relations officer for big tobacco would not be bound to agree with other agents that she met.
You’re using “Bayesian agents” as a general term for anyone who happens to use Bayesian statistics for some purpose, and in that context I agree with you: your statements are correct by definition.
I am speaking of Bayesian agents using the idealized, Hollywood concept of agent. Maybe I should have been more specific and referred to super-agents, equivalent to super-spies.
I claim that someone who has lived and breathed the Bayes way will be significantly different than someone who has applied it, even very consistently, within a limited domain. For example, I can imagine a Bayesian super-agent working for big tobacco, but I see the probability of that event actually coming to pass as too small to be worth considering.
I don’t really know what you mean. A “super-agent”? Do you really think Bayesian agents are “good”?
Since you haven’t really said what you mean, what do you mean? What are these “super agents” of which you speak? Would you know one if you met one?
Super-agent. You know, like James Bond, or Mr. &amp; Mrs. Smith. Closer to the usage in this context: Jeffreyssai.
Right… So: how about Lex Luthor or General Zod?
I’ve seen the paper, but it assumes the point in question in the definition of partially rational agents in the very first paragraph:

If these agents agree that their estimates are consistent with certain easy-to-compute consistency constraints, then… [conclusion follows].
But people’s estimates generally aren’t consistent with his constraints, so even for someone who is sufficiently rational, it doesn’t make any sense whatsoever to assume that everyone else is.
This doesn’t mean Robin’s paper is wrong. It just means that faced with a topic where we would “agree to disagree”, you can either update your belief about the topic, or update your belief about whether both of us are rational enough for the proof to apply.
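For a concrete picture of what “the proof applying” looks like, here is a minimal sketch of the agreement dynamics behind Aumann’s theorem, in the style of Geanakoplos and Polemarchakis: two agents with a common prior and private partition information alternately announce posteriors and refine on each other’s announcements until the announcements coincide. The state space, partitions, and event below are invented for illustration and are not taken from Hanson’s paper:

```python
from fractions import Fraction

# Toy run of Aumann-style agreement dynamics; states, partitions,
# and event are made up for illustration.
STATES = frozenset(range(1, 10))
PRIOR = {w: Fraction(1, 9) for w in STATES}   # common uniform prior
EVENT = frozenset({3, 4, 5, 6})               # the event both agents estimate

# Each agent's private signal is modelled as a partition of the states.
PART1 = [{1, 2, 3}, {4, 5, 6}, {7, 8, 9}]
PART2 = [{1, 4, 7}, {2, 5, 8}, {3, 6, 9}]

def make_info(partition):
    """Map each state to the cell the agent would find itself in."""
    return {w: frozenset(c) for c in partition for w in c}

def posterior(cell):
    """P(EVENT | cell) under the common prior."""
    total = sum(PRIOR[w] for w in cell)
    return sum(PRIOR[w] for w in cell & EVENT) / total

def refine(info, announced):
    """Shrink each cell to the states where the other agent would have
    announced the same posterior (what hearing the announcement reveals)."""
    return {w: frozenset(v for v in cell if announced[v] == announced[w])
            for w, cell in info.items()}

info1, info2 = make_info(PART1), make_info(PART2)
true_state = 2
for step in range(10):
    ann1 = {w: posterior(info1[w]) for w in STATES}   # agent 1 speaks
    info2 = refine(info2, ann1)                       # agent 2 updates
    ann2 = {w: posterior(info2[w]) for w in STATES}   # agent 2 speaks
    info1 = refine(info1, ann2)                       # agent 1 updates
    print(step, ann1[true_state], ann2[true_state])
    if ann1[true_state] == ann2[true_state]:
        break   # they cannot agree to disagree
```

Under these assumptions the announcements coincide within a couple of rounds. Drop the common prior, or let one agent fail to refine honestly, and the convergence guarantee disappears; that is the sense in which the proof needs both parties to be “rational enough”.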
Assuming honesty is pretty problematic, too. In real-world disputes, participants are likely to disagree about what constitutes evidence (“the Bible says...”), aren’t rational, and suspect each other’s honesty.