Hmm this does not feel the same as what I am suggesting.
Let me map my scenario onto yours:
A = “raining”
B = “wet outside”
A->B = “It will be wet outside if it is raining”
The robot does not know P(“wet outside” | “raining”) = 1. It only knows P(“wet outside” | “raining”, “raining->wet outside”) = 1. It observes that it is raining, so we’ll condition everything on “raining”, taking it as true.
We need some priors. Let P(“wet outside”) = 0.5. We also need a prior for “raining->wet outside”; let that be 0.5 as well. From this it follows that
P(“wet outside” | “raining”)
= P(“wet outside” | “raining”, “raining->wet outside”) P(“raining->wet outside” | “raining”)
+ P(“wet outside” | “raining”, not “raining->wet outside”) P(not “raining->wet outside” | “raining”)
= P(“raining->wet outside” | “raining”)
= P(“raining->wet outside”)
= 0.5
according to our priors [the first and second equalities are the same as in my first post; the third equality follows since whether or not it is “raining” is not relevant for figuring out whether “raining->wet outside” holds].
So the product rule is not violated:
P(“wet outside”) >= P(“wet outside” and “raining”) = P(“wet outside” | “raining”) P(“raining”) = 0.5
Where the inequality is actually an equality because our prior was P(“wet outside”) = 0.5. Once the proof p that “raining->wet outside” is obtained, we can update this to
P(“wet outside” | p) >= P(“wet outside” and “raining” | p) = P(“wet outside” | “raining”, p) P(“raining” | p) = 1
But there is still no product rule violation because
P(“wet outside” | p)
= P(“wet outside” | “raining”, p) P(“raining” | p) + P(“wet outside” | not “raining”, p) P(not “raining” | p)
= P(“wet outside” | “raining”, p) P(“raining” | p)
= 1.
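To make the numbers concrete, here is a minimal sketch of the same calculation in Python. The variable names are my own, and it assumes the material-conditional reading of “raining->wet outside”, so that P(“wet outside” | “raining”, not “raining->wet outside”) = 0, as in the second equality above.

```python
# Minimal numerical sketch of the calculation above.
# "I" stands for the proposition "raining->wet outside"; "raining" is observed,
# so everything below is conditioned on it.

p_I = 0.5                     # prior for "raining->wet outside"
p_W_given_R_I = 1.0           # P("wet outside" | "raining", "raining->wet outside")
p_W_given_R_not_I = 0.0       # not(R->W) together with R rules out "wet outside"

# Law of total probability, conditional on "raining":
p_W_given_R = p_W_given_R_I * p_I + p_W_given_R_not_I * (1 - p_I)
print(p_W_given_R)            # 0.5, matching the prior P("wet outside") = 0.5

# Product-rule check before the proof p is obtained:
p_R = 1.0                     # "raining" is observed
print(0.5 >= p_W_given_R * p_R)   # True: no violation

# After updating on a proof p of "raining->wet outside", P(I | p) = 1:
p_I_given_p = 1.0
p_W_given_R_p = p_W_given_R_I * p_I_given_p + p_W_given_R_not_I * (1 - p_I_given_p)
print(p_W_given_R_p)          # 1.0, as in the post-update equations above
```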
In a nutshell: you need three pieces of information to apply this classical chain of reasoning, namely A, B, and A->B. All three of these propositions should have priors. Then everything seems fine to me. It seems to me you are neglecting the proposition “A->B”, or rather assuming its truth value to be known, when we are explicitly saying that the robot does not know this.
edit: I just realised I was lucky that my first inequality worked out; I assumed I was free to choose any prior for P(“wet outside”), but it turns out I am not. To be compatible with the product rule, my priors for “raining” and “raining->wet outside” determine the corresponding prior for “wet outside”: since everything is conditioned on “raining”, it has to equal P(“raining->wet outside”) = 0.5. I just happened to choose the correct one by accident.
Do you know what truth tables are? The statement “A->B” can be represented on a truth table. A and B is possible. Not-A and B is possible. Not-A and not-B is possible. But A and not-B is impossible.
A->B and the four statements about the truth table are interchangeable, even though when I talk about the truth table I never need to use the “->” symbol. They contain the same content because A->B says that A and not-B is impossible, and saying that A and not-B is impossible says that A->B. For example, “it raining but not being wet outside is impossible.”
In the language of probability, saying that P(B|A)=1 means that A and not-B is impossible, while leaving the other possibilities able to vary freely. The product rule says P(A and not-B) = P(A) * P(not-B | A). What’s P(not-B | A) if P(B | A)=1? It’s zero, because it’s the negation of our assumption.
Writing out things in classical logic doesn’t just mean putting P() around the same symbols. It means making things behave the same way.
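To make that concrete, here is a small sketch. It assumes the material-conditional reading of “->”, and the value chosen for P(A) is an arbitrary illustration.

```python
from itertools import product

# Enumerate the truth table: "A->B" rules out exactly the row (A, not-B).
for A, B in product([True, False], repeat=2):
    a_implies_b = (not A) or B        # material conditional A->B
    a_and_not_b = A and not B         # the one row that A->B excludes
    print(A, B, a_implies_b, a_and_not_b)

# Probabilistic counterpart: if P(B|A) = 1, the product rule forces
# P(A and not-B) = P(A) * P(not-B|A) = P(A) * (1 - P(B|A)) = 0.
p_B_given_A = 1.0
p_A = 0.3                             # arbitrary illustrative value
print(p_A * (1 - p_B_given_A))        # 0.0
```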
‘They contain the same content because A->B says that A and not-B is impossible, and saying that A and not-B is impossible says that A->B. For example, “it raining but not being wet outside is impossible.”’
If you’re talking about standard propositional logic here, without bringing in probabilistic stuff, then this is just wrong or at best very misleadingly put. All ‘A->B’ says is that it is not the case that A and not-B—nothing modal.
OK sure, so you can go through my reasoning leaving out the implication symbol but retaining the dependence on the proof “p”, and it all works out the same. The point is only that the robot doesn’t know that A->B; therefore it doesn’t set P(B|A)=1 either.
You had “Suppose our robot knows that P(wet outside | raining) = 1. And it observes that it’s raining, so P(rain)=1. But it’s having trouble figuring out whether it’s wet outside within its time limit, so it just gives up and says P(wet outside)=0.5. Has it violated the product rule? Yes. P(wet outside) >= P(wet outside and raining) = P(wet outside | rain) * P(rain) = 1.”
But you say it is doing P(wet outside)=0.5 as an approximation. This isn’t true though, because it knows that it is raining, so it is setting P(wet outside|rain) = 0.5, which was the crux of my calculation anyway. Therefore when it calculates P(wet outside and raining) = P(wet outside | rain) * P(rain) it gets the answer 0.5, not 1, so it is still being consistent.
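For comparison, a quick sketch of both versions of the check. The arrangement of the numbers is my own; “rain” is observed, so P(rain) = 1 in both cases.

```python
p_rain = 1.0                          # "raining" is observed

# Original version: P(wet outside | rain) = 1 but P(wet outside) reported as 0.5.
p_wet_and_rain = 1.0 * p_rain
print(0.5 >= p_wet_and_rain)          # False: the product rule is violated

# Corrected version: P(wet outside | rain) = 0.5 until the proof is obtained.
p_wet_and_rain = 0.5 * p_rain
print(0.5 >= p_wet_and_rain)          # True: consistent with P(wet outside) = 0.5
```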
I’m just going to give up and hope you figure it out on your own.
You haven’t been very specific about what you think I’m doing incorrectly so it is kind of hard to figure out what you are objecting to. I corrected your example to what I think it should be so that it satisfies the product rule; where’s the problem? How do you propose that the robot can possibly set P(“wet outside”|”rain”)=1 when it can’t do the calculation?
In your example, it can’t. Because the axioms you picked do not determine the answer. Because you are incorrectly translating classical logic into probabilistic logic. And then, as one would expect, your translation of classical logic doesn’t reproduce classical logic.
It was your example, not mine. But you made the contradictory postulate that P(“wet outside”|”rain”)=1 follows from the robot’s prior knowledge and the probability axioms, and simultaneously that the robot was unable to compute this. To correct this I alter the robot’s probabilities such that P(“wet outside”|”rain”)=0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”. Of course the axioms don’t determine this; it is part of the robot’s prior, which is not determined by any axioms.
You haven’t convinced nor shown me that this violates Cox’s theorem. I admit I have not tried to follow the proof of this theorem myself, but my understanding was that the requirement you speak of is that the probabilistic logic reproduces classical logic in the limit of certainty. Here, the robot is not in the limit of certainty because it cannot compute the required proof. So we should not expect to get the classical logic until updating on the proof and achieving said certainty.
No, you butchered it into a different example. Introduced the Lewis Carroll Paradox, even.
“You haven’t convinced nor shown me that this violates Cox’s theorem.”
He showed you. You weren’t paying attention.
“Here, the robot is not in the limit of certainty because it cannot compute the required proof.”
It can compute the proof. The laws of inference are axioms; P(A|B) is necessarily known a priori.
‘such that P(“wet outside”|”rain”)=0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”.’
There is no such time. Either it’s true initially, or it will never be established with certainty. If it’s true initially, that’s because it is an axiom. Which was the whole point.
It does not follow that because someone knows some statements they also know the logical consequences of those statements.
When the someone is an idealized system of logic, it does. And we’re discussing an idealized system of logic here. So it does.
No, we aren’t; we’re discussing a robot with finite resources. I obviously agree that an omnipotent god of logic can skip these problems.
The limitations imposed by bounded resources are the subject of the next entry in the sequence. For this one, we’re still discussing the unbounded case.
Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty for the magic logic god of the idealised case.