It’s a shame that in practice Aumann Agreement is expensive, but we should try to encourage Aumann-like updating whenever possible.
While, as I pointed out in my previous shortform, Aumann Agreement is neither cheap nor free, it’s powerful: simply by repeatedly communicating their differing beliefs to each other, two people can (in theory) arrive at the same beliefs they would have held if each had access to all of the other’s information, without ever learning the specific information the other person has.
While it’s not strictly necessary, Aumann’s proof of the Agreement Theorem assumes that A) both agents are honest and rational, and, importantly, B) each agent is aware that the other is honest and rational (and furthermore, that the other agent knows that they know this, and so on). In other words, the rationality and honesty of each agent is presumed to be common knowledge between the two agents.
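To make the mechanism concrete, here’s a toy sketch in Python of the repeated-announcement version of the result (the Geanakoplos–Polemarchakis dialogue, which is the “Aumann-like updating” I have in mind, rather than Aumann’s original one-shot theorem). I’m assuming a uniform common prior over a small finite state space; the partitions, names, and example are made up for illustration:

```python
# Toy sketch of the Geanakoplos–Polemarchakis dialogue: honest, rational
# agents with a uniform common prior alternately announce posteriors.
from fractions import Fraction

def expectation(x, states):
    """Posterior expectation of x, given a uniform prior restricted to `states`."""
    return sum(Fraction(x[s]) for s in states) / len(states)

def cell(partition, state):
    """The block of an agent's partition containing the true state (their private info)."""
    return next(block for block in partition if state in block)

def dialogue(x, part_a, part_b, true_state, max_rounds=32):
    """Agents alternately announce posteriors of x until they agree.

    Each announcement is itself evidence: it reveals which blocks of the
    speaker's partition are consistent with it, which refines the event
    `public` that is common knowledge between the agents.
    """
    public = set().union(*part_a)        # initially, every state is possible
    partitions = [part_a, part_b]
    stable, announced = 0, None
    for t in range(max_rounds):
        speaker = partitions[t % 2]
        announced = expectation(x, cell(speaker, true_state) & public)
        refined = {s for s in public
                   if expectation(x, cell(speaker, s) & public) == announced}
        stable = stable + 1 if refined == public else 0
        public = refined
        if stable == 2:   # two announcements in a row carried no new information:
            break         # both posteriors are common knowledge, hence equal
    return announced

# Four equally likely states; x is the quantity both agents are estimating.
x = {0: 1, 1: 0, 2: 0, 3: 1}
part_a = [{0, 1}, {2, 3}]  # A privately learns whether the state is "low" or "high"
part_b = [{0, 3}, {1, 2}]  # B privately learns whether it's "outer" or "inner"
print(dialogue(x, part_a, part_b, true_state=0))  # both converge on posterior 1
```

Notice that the agents never exchange their raw observations: each announcement only reveals which cells of the speaker’s partition are consistent with it, and the common-knowledge event shrinks until both posteriors coincide.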
In real life, I often have conversations with people (even sometimes on LW) who I’m not sure are honest or rational, and who I’m not sure consider me to be honest and rational. Lack of common knowledge of honesty is a deal-breaker, and lack of common knowledge of rationality, while not a deal-breaker, slows the (already cumbersome) Aumann process down quite a bit.
So, I invite you to ask: How can we build common knowledge of our rationality and honesty? I’ve already posted one shortform on this subject, but there’s more to be said.
I don’t think there’s any shortcut. We’ll have to first become rational and honest, and then demonstrate that we’re rational and honest by talking about many different uncertainties and disagreements in a rational and honest manner.
Not sure I agree with you here. Well, I do agree that the only practical way I can think of to demonstrate honesty is to actually be honest, and to gain a reputation for honesty. However, I do think there are ways to augment that process. Right now, I can observe people being honest when I engage with their ideas, verify their statements myself, and update for the future that they seem honest. But this is something I generally have to do for myself: if someone else comes along and engages with the same person, they have to verify all of those statements over again for themselves. Multiply this across hundreds or thousands of people, and a lot of time is wasted. Worse, I can only build trust based on content I have actually engaged with, so even if a person has a large backlog of honest communication, if I don’t engage with that backlog I will end up trusting them less than they deserve. But if there are people I already know I can trust, it’s possible to use their assignments of trust to extend trust to people I otherwise couldn’t. There are ways to streamline that.
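Here’s a toy sketch of the kind of streamlining I mean. The assumptions are all mine, not a standard: trust as a score in [0, 1], multiplicative decay per hop, and a stranger’s derived trust as the best vouching chain through people I already trust:

```python
# Minimal web-of-trust sketch: direct trust scores propagate along chains
# of vouches by multiplication, so trust decays with every hop.
import heapq

def derived_trust(direct, source, target):
    """Best (maximum-product) trust path from `source` to `target`.

    `direct` maps each person to a dict of {person: trust score}. Because
    every score is <= 1, products only shrink along a path, so a
    Dijkstra-style search maximizing the product is correct.
    """
    best = {source: 1.0}
    frontier = [(-1.0, source)]
    while frontier:
        neg, person = heapq.heappop(frontier)
        trust = -neg
        if person == target:
            return trust
        if trust < best.get(person, 0.0):
            continue                      # stale queue entry
        for friend, score in direct.get(person, {}).items():
            candidate = trust * score
            if candidate > best.get(friend, 0.0):
                best[friend] = candidate
                heapq.heappush(frontier, (-candidate, friend))
    return 0.0                            # no vouching chain exists

web = {
    "me":    {"alice": 0.9, "bob": 0.6},
    "alice": {"carol": 0.8},
    "bob":   {"carol": 0.9},
}
print(derived_trust(web, "me", "carol"))  # 0.9 * 0.8 = 0.72, via alice
```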
Regarding rationality: since rationality is not a single trait or skill but many traits and skills, there is no single way to reliably signal the entirety of it. However, each individual trait and skill can be reliably signaled in a way that facilitates the building of trust. As one example, if there existed a test that required the ability to robustly engage with the ideas in Yudkowsky’s Sequences, and I noticed that somebody had passed it, I would be willing to update on that person’s statements more than if I didn’t know they were capable of passing it. (I anticipate two objections: that tests generally aren’t reliable signals, and that people often forget what they were tested on. To the first, I have many thoughts on robust testing that I have yet to share, and haven’t seen written elsewhere to my knowledge, but they are too long to write in this margin. To the second, spaced repetition is the obvious answer.)
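And since I brought it up, the core of spaced repetition is simple enough to sketch. The update rule and constants here are illustrative (loosely SM-2-flavored), not any particular app’s algorithm:

```python
# Minimal spaced-repetition sketch: grow the review interval on success,
# reset it (and mark the card slightly harder) on a lapse.
from dataclasses import dataclass

@dataclass
class Card:
    interval_days: float = 1.0   # days to wait before the next review
    ease: float = 2.5            # how fast the interval grows on success

def review(card: Card, recalled: bool) -> None:
    """Update one card's schedule after a single review."""
    if recalled:
        card.interval_days *= card.ease
    else:
        card.interval_days = 1.0
        card.ease = max(1.3, card.ease - 0.2)

card = Card()
for recalled in (True, True, False, True):
    review(card, recalled)
    print(f"next review in {card.interval_days:.1f} days (ease {card.ease:.1f})")
```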