Thanks for taking the time to elaborate.
I don’t recall that desideratum in Jaynes’ derivations, and I don’t think it is needed. Why should it be? Certainty about axioms is a million miles from certainty about all their consequences, which seems to be the exact point of your series.
Help me out, what am I not understanding?
In Jaynes, this is sort of hidden in desideratum 2, “correspondence with common sense.”
The key part is that if two statements are logically equivalent, Cox’s theorem requires that they be assigned the same probability. Since “the axioms of arithmetic” and “the axioms of arithmetic, and also 298+587=885” are logically equivalent, they should be assigned the same probability.
I’m not sure how to help you much beyond that; my pedagogy is weak here.
I’m not sure that’s what Jaynes meant by correspondence with common sense. To me, it’s more reminiscent of his consistency requirements, but I don’t think it is identical to any of them.
Certainly, it is desirable that logically equivalent statements receive the same probability assignment, but I’m not aware that the derivation of Cox’s theorem collapses without this assumption.
Jaynes says, “the robot always represents equivalent states of knowledge by equivalent plausibility assignments.” The problem, of course, is knowing that two statements are equivalent—if we don’t know this, we should be allowed to make different probability assignments. Equivalence and known equivalence are, to me, not the same thing, and Jaynes’ prescriptions seem to refer to the latter. I may know that x = 298 + 587, but not know that x = 885, so I would not be violating probability theory if I adopted different degrees of belief for these statements.
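To make the known-vs-actual distinction concrete, here is a minimal sketch (my own toy illustration; the numbers and the dictionary of beliefs are made up):

```python
# Minimal sketch (hypothetical): a bounded agent stores one belief per
# statement, and only statements it has *recognized* as equivalent are
# forced to share a probability.
beliefs = {
    "x = 298 + 587": 1.0,  # known by construction
    "x = 885": 0.5,        # the sum has not been evaluated yet
}

# Once the agent spends the computation, the equivalence becomes known,
# and only then does consistency oblige equal assignments.
if 298 + 587 == 885:
    beliefs["x = 885"] = beliefs["x = 298 + 587"]

print(beliefs)  # both now 1.0
```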
Note that Jaynes used this consistency requirement to derive such principles as the Bernoulli urn rule, which is very much about symmetry of knowledge, and not about logical equivalence of states.
It’s definitely not clear, I’ll admit. And you’re right, it is also a sort of consistency requirement.
Fortunately, I can direct you to section 5 of a more explicit derivation here.
Thanks, I’ll take a look at the article.
If you don’t mind, when you say “definitely not clear,” do you mean that you are not certain about this point, or that you are confident, but it’s complicated to explain?
I mean that I’m not very sure where that correspondence comes up in Jaynes; Jaynes is less explicit than other derivations, which I am more confident about.
Is there any more straightforward way to see the problem? I argued with you about this for a while and I think you convinced me, but it is still a little foggy. If there is a consistency problem, surely this means that we must be vulnerable to Dutch books, doesn’t it? That is, they would not seem to be Dutch books to us, with our limited resources, but a superior intelligence would know that they were and would use them to con us out of utility. Do you know of some argument like this?
Yes, this is right. Also, http://www.spaceandgames.com/?p=27 :)
If I know all the digits of pi and you think they’re evenly distributed past a certain point, I can take your money.
In order to resist this, you need to have a hypothesis for “Manfred will pick the right number”—which, fortunately, is very doable, because the complexity of this hypothesis is only about the complexity of a program that computes the digits of pi.
But nonetheless, until you figure this out, that’s the Dutch book.
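A toy version of this bet (my sketch, with made-up stakes): we treat each digit of pi as uniform on 0–9, so we sell, for $0.10, a $1 bet on any named digit; the informed bettor only ever names digits he has computed.

```python
import random

PI_DIGITS = "1415926535897932384626433832795028841971"  # digits after "3."

random.seed(0)
our_net = 0.0
for _ in range(20):
    position = random.randrange(len(PI_DIGITS))
    claim = int(PI_DIGITS[position])  # the informed bettor computes this
    # To us, "digit at `position` is `claim`" looks like a 1-in-10 shot,
    # so $0.10 for a $1 payout seems fair -- but it pays out every time.
    payout = 1.00 if int(PI_DIGITS[position]) == claim else 0.00
    our_net += 0.10 - payout

print(f"our net after 20 such bets: ${our_net:+.2f}")  # -$18.00
```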
Lol, that is a nice story at that link, but it isn’t a Dutch book. The bet in it isn’t set up to measure subjective probability either, so I don’t really see what lesson it holds for logical probability.
Say that instead of the digits of pi, we were betting on the contents of some boxes. For concreteness, let there be three boxes, one of which contains a prize. Say also that you have looked inside the boxes and know exactly where the prize is. I have some subjective probability P(X_i | I_mine) that the prize is inside box i; for you, all your subjective probabilities are either zero or one, since you know perfectly well where the prize is. However, if my beliefs about where the prize is follow the probability calculus correctly, you still cannot Dutch book me, even though you know where the prize is and I don’t.
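As a sanity check of that claim, here is a minimal sketch (my own construction; it only searches small integer stakes, but that covers the relevant directions):

```python
from itertools import product

p = [1/3, 1/3, 1/3]  # my subjective probabilities for boxes 0, 1, 2

def my_payoff(stakes, prize_box):
    # stakes[i] > 0: you buy a $1 bet on box i at price p[i];
    # stakes[i] < 0: you sell me one at the same price.
    return sum(s * (p[i] - (1.0 if prize_box == i else 0.0))
               for i, s in enumerate(stakes))

# Look for a combination of bets that loses me money wherever the prize is.
books = [st for st in product(range(-3, 4), repeat=3)
         if all(my_payoff(st, w) < 0 for w in range(3))]
print(books)  # []: no sure loss, even against someone who knows the answer
```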
So, how is the scenario about the digits of pi different to this? Do you have some example of an actual Dutch book that I would accept if I were to allow logical uncertainty?
edit:
Ok, well, I thought of what seems to be a typical Dutch book scenario, but it has made me yet more confused about what is special about the logical uncertainty case. So let me present two scenarios, and I wonder if you can tell me what the difference is:
Consider two propositions, A and B. Let it be the case that A->B. However, suppose we do not realise this and assign the following probabilities to A and B:
P(A) = 0.5
P(B) = 0.5
P(B|A) = P(B)
P(A & B) = 0.25
indicating that we think A and B are independent. Based on these probabilities, we should accept the following arrangement of bets:
Sell bet for $0.50 that A is false, payoff $1 if correct
Sell bet for $0.25 that A & B are both true, payoff $1 if correct
The expected amount we must pay out is 0.5 × $1 + 0.25 × $1 = $0.75, which is how much we are selling the bets for, so everything seems fair to us.
Someone who understands that A->B will happily buy these bets from us, since they know that “not A” and “A & B” are actually equivalent to “not A” and “A”; i.e., they know P(not A) + P(A & B) = 1, so they win $1 from us no matter what happens, making a profit of $0.25. So that seems to show that we are being incoherent if we don’t know that A->B.
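A quick mechanical check of this (a sketch with the bets hard-coded): enumerate the truth assignments to (A, B) consistent with A->B and confirm the buyer’s guaranteed profit.

```python
from itertools import product

prices = 0.50 + 0.25  # what we charge for "not A" plus "A & B"

for A, B in product([False, True], repeat=2):
    if A and not B:
        continue  # ruled out by A -> B, the fact we failed to notice
    payout = (1.0 if not A else 0.0) + (1.0 if (A and B) else 0.0)
    print(A, B, f"buyer's profit: ${payout - prices:.2f}")  # $0.25 every time
```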
But now consider the following scenario: instead of having the logical relation A->B, say that our opponent just has some extra empirical information D that we lack, so that for him P(B|A,D) = 1. He would then still say that
P(not A | D) + P(A & B | D) = P(not A | D) + P(B|A,D)*P(A|D) = P(not A|D) + P(A|D) = 1
so that we, who do not know D, could still be screwed by the same kind of trade as in the first example. But then, this is sort of obviously possible, since having more information than your opponent should give you a betting advantage. Both situations seem equally bad for us, so why are we being incoherent in the first example, but not in the second? Or am I still missing something?
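For concreteness, the mechanical similarity can be made explicit with the same kind of check as above (again my sketch; the only change is that the excluded world is ruled out by D rather than by logic):

```python
from itertools import product

prices = 0.50 + 0.25  # the same two bets, at the same prices

for A, B, D in product([False, True], repeat=3):
    if not D:
        continue  # the opponent has observed that D holds
    if A and not B:
        continue  # impossible given A, D, and P(B|A,D) = 1
    payout = (1.0 if not A else 0.0) + (1.0 if (A and B) else 0.0)
    print(A, B, D, f"buyer's profit: ${payout - prices:.2f}")  # again $0.25
```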