I thought about your argument a bit and I think I understand it better now. Let’s unpack it.
First off, if a deterministic world contains a (deterministic) agent that believes the world is deterministic, that agent’s belief is correct. So no need to be outside the world to define “correctness”.
Another matter is verifying the correctness of beliefs from within the world. You seem to argue that a verifier can't trust its own conclusions if it knows itself to be a deterministic program. This is debatable—it depends on how you define "trust"—but let's provisionally accept it. From this you somehow conclude that the world and your mind must in fact be non-deterministic. To me this doesn't follow. Could you explain?