You haven’t been very specific about what you think I’m doing incorrectly, so it is kind of hard to figure out what you are objecting to. I corrected your example to what I think it should be so that it satisfies the product rule; where’s the problem? How do you propose that the robot can possibly set P(“wet outside”|“rain”)=1 when it can’t do the calculation?
In your example, it can’t. Because the axioms you picked do not determine the answer. Because you are incorrectly translating classical logic into probabilistic logic. And then, as one would expect, your translation of classical logic doesn’t reproduce classical logic.
It was your example, not mine. But you made the contradictory postulate that P(“wet outside”|“rain”)=1 follows from the robot’s prior knowledge and the probability axioms, and simultaneously that the robot was unable to compute this. To correct this, I alter the robot’s probabilities such that P(“wet outside”|“rain”)=0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”. Of course the axioms don’t determine this; it is part of the robot’s prior, which is not determined by any axioms.
You haven’t convinced nor shown me that this violates Cox’s theorem. I admit I have not tried to follow the proof of this theorem myself, but my understanding was that the requirement you speak of is that the probabilistic logic reproduces classical logic in the limit of certainty. Here, the robot is not in the limit of certainty because it cannot compute the required proof. So we should not expect to get the classical logic until updating on the proof and achieving said certainty.
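For concreteness, a minimal sketch of the scheme being proposed in the comment above: hold the conditional credence at 0.5 and raise it to 1 only once a proof of the implication has been found. The class, the pretend proof search, and the resource threshold are illustrative assumptions, not anything fixed by the exchange.

```python
# Toy model of the proposal: P("wet outside" | "rain") starts at 0.5 and only
# moves to 1 once a derivation of "rain -> wet outside" has been obtained.

class BoundedReasoner:
    def __init__(self):
        # Conditional credence held before any proof has been found.
        self.p_wet_given_rain = 0.5
        self.proved_implication = False

    def try_to_prove(self, budget):
        """Stand-in for proof search: succeeds only with enough resources.

        The threshold of 10 is arbitrary; it just models the robot either
        being able to afford the derivation or not.
        """
        if budget >= 10 and not self.proved_implication:
            self.proved_implication = True
            # Having derived "rain -> wet outside", the conditional goes to 1.
            self.p_wet_given_rain = 1.0
        return self.proved_implication


robot = BoundedReasoner()
print(robot.p_wet_given_rain)   # 0.5 -- no proof found yet
robot.try_to_prove(budget=3)    # too little computation, credence unchanged
robot.try_to_prove(budget=50)   # proof found, credence jumps to 1
print(robot.p_wet_given_rain)   # 1.0
```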
No, you butchered it into a different example. Introduced the Lewis Carroll Paradox, even.
You haven’t convinced nor shown me that this violates Cox’s theorem.
He showed you. You weren’t paying attention.
Here, the robot is not in the limit of certainty because it cannot compute the required proof.
It can compute the proof. The laws of inference are axioms; P(A|B) is necessarily known a priori.
such that P(“wet outside”|“rain”)=0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”.
There is no such time. Either it’s true initially, or it will never be established with certainty. If it’s true initially, that’s because it is an axiom. Which was the whole point.
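The step being asserted in the reply above can be written out explicitly. A sketch using the product rule, assuming the prior information I contains the implication “rain” ⇒ “wet outside” (with A = “wet outside”, B = “rain”):

```latex
% Sketch: why P(A|B) would be fixed a priori, assuming the prior information I
% explicitly contains the implication B => A (A = "wet outside", B = "rain").
% Given I, the conjunction AB is equivalent to B, so P(AB|I) = P(B|I).
% The product rule  P(AB|I) = P(A|BI) P(B|I)  then gives
\[
  P(A \mid B I) \;=\; \frac{P(AB \mid I)}{P(B \mid I)}
               \;=\; \frac{P(B \mid I)}{P(B \mid I)} \;=\; 1,
  \qquad \text{provided } P(B \mid I) > 0 .
\]
% No proof search is involved: the value is already determined by the axioms
% together with the prior information.
```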
It does not follow that, because someone knows some statements, they also know the logical consequences of those statements.
When the someone is an idealized system of logic, it does. And we’re discussing an idealized system of logic here. So it does.
No, we aren’t; we’re discussing a robot with finite resources. I obviously agree that an omnipotent god of logic can skip these problems.
The limitation imposed by bounded resources is the next entry in the sequence. For this one, we’re still discussing the unbounded case.
Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty for the magic logic god of the idealised case.
I’m just going to give up and hope you figure it out on your own.
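The underlying disagreement in the later comments is over logical omniscience: whether knowing some statements entails knowing all of their consequences. A minimal sketch of that distinction with a toy forward-chaining prover; the rule format, the propositions, and the step budget are illustrative assumptions, not anything from the original discussion.

```python
# An idealized reasoner has the full deductive closure of its axioms; a
# resource-limited reasoner only has whatever it has derived so far.

def deductive_closure(facts, rules, max_steps=None):
    """Forward-chain over rules of the form (premises, conclusion).

    max_steps=None plays the idealized case: run to a fixed point and know
    every consequence. A finite max_steps models a bounded robot that may
    stop short of some theorems.
    """
    known = set(facts)
    steps = 0
    while max_steps is None or steps < max_steps:
        # One round of inference: fire every rule whose premises are already known.
        new = {conclusion
               for premises, conclusion in rules
               if conclusion not in known and all(p in known for p in premises)}
        if not new:          # fixed point reached: nothing left to derive
            break
        known |= new
        steps += 1
    return known


rules = [({"rain"}, "wet outside"),
         ({"wet outside"}, "slippery pavement")]

ideal   = deductive_closure({"rain"}, rules)               # unbounded case
bounded = deductive_closure({"rain"}, rules, max_steps=1)  # budget runs out early

print("slippery pavement" in ideal)    # True  -- the idealized reasoner knows it
print("slippery pavement" in bounded)  # False -- the bounded robot has not derived it yet
```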