No, you butchered it into a different example. Introduced the Lewis Carroll Paradox, even.
You have neither convinced nor shown me that this violates Cox’s theorem.
He showed you. You weren’t paying attention.
Here, the robot is not in the limit of certainty because it cannot compute the required proof…
It can compute the proof. The laws of inference are axioms; P(A|B) is necessarily known a priori.
…such that P(“wet outside” | “rain”) = 0.5 until such time as it has obtained a proof that “rain” correlates 100% with “wet outside”.
There is no such time. Either it’s true initially, or it will never be established with certainty. If it’s true initially, that’s because it is an axiom. Which was the whole point.
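[A worked version of the disputed step, using the product rule Jaynes’ robot is built on: write A = “wet outside”, B = “rain”, and X for the robot’s prior information. If X contains the axiom B ⟹ A, then the conjunction AB is equivalent to B given X, so P(AB|X) = P(B|X). The product rule P(AB|X) = P(A|BX) · P(B|X) then forces P(A|BX) = 1 whenever P(B|X) > 0. If X contains no such axiom, no finite amount of evidence drives P(A|BX) to exactly 1, which is the dichotomy asserted above.]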
It does not follow that, because someone knows some statements, they also know the logical consequences of those statements.
When the someone is an idealized system of logic, it does. And we’re discussing an idealized system of logic here. So it does.
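[This is the assumption of logical omniscience: the idealized reasoner’s knowledge is deductively closed, so if a set of premises Γ proves φ, then P(φ|Γ) = 1. Hand such a system the axioms of arithmetic and it thereby already “knows” every arithmetic theorem, however long the shortest proof.]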
No, we aren’t; we’re discussing a robot with finite resources. I obviously agree that an omnipotent god of logic can skip these problems.
The limitations imposed by bounded resources are the next entry in the sequence. For this one, we’re still discussing the unbounded case.
Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty to the magic logic god of the idealized case.
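[A toy sketch, mine rather than either commenter’s, of the bounded robot at issue; the function names, the rule format, and the 0.5 fallback are illustrative assumptions, not anything specified in the thread:]

def provable(premise, conclusion, rules, budget):
    """Brute-force forward chaining: try to derive `conclusion` from
    `premise` using `rules` (pairs (antecedent, consequent)) within
    `budget` rounds of inference."""
    known = {premise}
    for _ in range(budget):
        derived = {c for (a, c) in rules if a in known} - known
        if not derived:  # nothing new derivable: search has converged
            break
        known |= derived
    return conclusion in known

def credence(premise, conclusion, rules, budget):
    # The unbounded robot settles this at 1 (the implication is an axiom);
    # the bounded robot reports an ignorance prior of 0.5 until a proof
    # turns up within its budget.
    return 1.0 if provable(premise, conclusion, rules, budget) else 0.5

rules = {("rain", "wet outside")}
print(credence("rain", "wet outside", rules, budget=0))  # 0.5: no steps taken
print(credence("rain", "wet outside", rules, budget=1))  # 1.0: proof found

[On this toy picture the disagreement is only about the budget: take it to infinity and the fallback branch never fires, which is the magic-logic-god case; keep it finite and the 0.5 is exactly the logical uncertainty being argued over.]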