It does not follow that because someone knows some statements they also know the logical consequences of those statements.
When the someone is an idealized system of logic, it does. And we’re discussing an idealized system of logic here. So it does.
No we aren’t, we’re discussing a robot with finite resources. I obviously agree that an omnipotent god of logic can skip these problems.
The limitations imposed by bounded resources are the subject of the next entry in the sequence. For this one, we're still discussing the unbounded case.
Very well, then I will wait for the next entry. But I thought the fact that we were explicitly discussing things the robot could not compute made it clear that resources were limited. There is clearly no such thing as logical uncertainty for the magic logic god of the idealised case.