Method 3 chooses the unique f such that updating p to f and updating q to f require the same amount of information.
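Under the reading that "amount of information" means the strength of Bayesian evidence (the log-likelihood ratio needed to move a probability), method 3 picks the midpoint on the log-odds scale. A rough sketch of that reading (the function names are mine, not from the thread):

```python
import math

def logit(p):
    """Log-odds of p: log(p / (1 - p))."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Inverse of logit."""
    return 1 / (1 + math.exp(-x))

def method3(p, q):
    """The unique f for which the evidence (log-likelihood ratio) needed
    to move p to f has the same magnitude as the evidence needed to move
    q to f: logit(f) - logit(p) = logit(q) - logit(f), i.e. the midpoint
    of p and q on the logit scale."""
    return sigmoid((logit(p) + logit(q)) / 2)

f = method3(0.9, 0.5)
# The two updates are equal and opposite on the log-odds scale:
assert abs((logit(f) - logit(0.9)) + (logit(f) - logit(0.5))) < 1e-9
```

Note that this f(p,p) = p automatically, since the midpoint of a point with itself is that point.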
Before reading GreedyAlgorithm’s post, I decided independently that I support method 3, although I may find another answer I like better. Methods 1 and 2 I do not like, because if you let p=1 and q=1/2, they do not give all the money to EY. Method 4 I do not like, because I think that f(p,p) should equal p. However, I have no argument for 3 other than the fact that it feels right, and it meets all the criteria I can think of in degenerate cases.
Method 4 I do not like, because I think that f(p,p) should equal p.
Why? If Eliezer and Nick independently give 60% probability to the money being Eliezer’s, my posterior probability estimate would be higher than 60%. (OTOH, there’s the question of how independent their estimates actually are.)
There is a question of how independent their estimates are, and I think that the algorithm should be consistent under repeated application. If EY and NB update their probabilities to the same thing and then try to update again, their estimates should not change.
In my opinion, the question should not be about how to apply the Aumann agreement theorem, but about how to compromise. That is the spirit of #2 and #3. They attempt to find the average value. (The difference is that one thinks the scale that should be used is p, and the other thinks it is log(p/(1-p)).)
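The two averaging scales can be compared directly. A minimal sketch (the function names are mine; per the parenthetical above, #2 is taken to be the plain average on p and #3 the average on log(p/(1-p))):

```python
import math

def avg_p(p, q):
    """#2: average on the p scale."""
    return (p + q) / 2

def avg_logit(p, q):
    """#3: average on the log(p/(1-p)) (log-odds) scale."""
    m = (math.log(p / (1 - p)) + math.log(q / (1 - q))) / 2
    return 1 / (1 + math.exp(-m))

# Both are idempotent (f(p, p) = p), so applying either one repeatedly
# to its own output changes nothing:
for f in (avg_p, avg_logit):
    c = f(0.9, 0.5)
    assert abs(f(c, c) - c) < 1e-12

# They disagree away from the diagonal, and behave very differently as
# p approaches certainty: #2 stays capped near 0.75, while #3 tends to 1.
print(avg_p(0.9, 0.5))           # 0.7
print(avg_logit(0.9, 0.5))       # ≈ 0.75
print(avg_p(1 - 1e-6, 0.5))      # ≈ 0.75
print(avg_logit(1 - 1e-6, 0.5))  # ≈ 0.999
```

The last two lines illustrate the earlier complaint about p=1, q=1/2: averaging on the p scale never gives all the money to EY, while averaging on the log-odds scale does in the limit.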
I do not think that this question has a unique solution. Probability doesn’t give us an answer; we are trying to determine what is fair. My position is that the fair thing to do is to follow the result of the g question. The g question tells us how to combine probabilities, without information about how independent they are, when the goals and beliefs belong to a single person. If we have two people who trust each other and do not want more than their share, then they should adopt the same probability as if they were one person.
The question about the g function, on the other hand, is not about what is fair but about what is safe. If I am going to prescribe a general rule for combining these estimates that does not know how independent they are, I want it to be consistent under repeated application, so that I don’t send all of my probabilities off to 1 when I shouldn’t.