The issue is that it only has one tool to change beliefs: Bayesian updating.
That idea has issues. Where is the agent getting its priors? Does it have the ability to acquire new priors, or can it only chain forward from pre-existing ones? And if so, is there an ur-prior, the root of the whole prior hierarchy?
How will it deal with an Outside Context Problem?
It might, but that would be a different design. Not that that’s a bad thing, necessarily, but that’s not what is normally meant by priors.
"Prior" is a local term: often enough, a prior is just the posterior from the previous iteration.
But if the probability ever goes to zero, it stays there.
Some people say that zero is not a probability :-)
But yes, if you have completely ruled out Z as impossible, you will not consider it any more and it will be discarded forever.
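As a concrete illustration (my own sketch, not anything from the design being discussed), here is discrete Bayesian updating over a handful of hypotheses in Python: each posterior becomes the prior for the next round, and a hypothesis like Z whose prior is exactly zero stays at zero no matter how strongly the evidence favors it, because the update only ever multiplies the prior by a likelihood and renormalizes.

```python
# Minimal sketch of discrete Bayesian updating (illustrative only).
def update(prior, likelihoods):
    """One update step: prior -> posterior, given P(evidence | hypothesis)."""
    unnormalized = {h: prior[h] * likelihoods[h] for h in prior}
    total = sum(unnormalized.values())
    return {h: p / total for h, p in unnormalized.items()}

# Z has been "completely ruled out": its prior is exactly zero.
beliefs = {"X": 0.5, "Y": 0.5, "Z": 0.0}

# Even evidence that strongly favors Z cannot revive it.
for likelihoods in [{"X": 0.1, "Y": 0.1, "Z": 0.9},
                    {"X": 0.2, "Y": 0.1, "Z": 0.9}]:
    beliefs = update(beliefs, likelihoods)  # this posterior is the next prior
    print(beliefs)                          # Z stays at 0.0 every time
```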
Unless the agent can backtrack and undo the inference chain to fix its mistakes (which is how humans operate, and which would be a highly useful feature for a fallible Bayesian agent, particularly one that cannot guarantee the list of priors it is considering is complete).
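And here is a hypothetical sketch of what that backtracking could look like: the agent keeps the raw evidence it has seen, so that when it discovers its hypothesis list was incomplete (or that it wrongly zeroed something out), it can discard the current chain and replay the evidence from a revised prior. The class and names below are illustrative, not an existing design.

```python
# Hypothetical "backtrack and redo" agent: logs every piece of evidence so
# beliefs can be rebuilt from scratch if the hypothesis list turns out to be
# incomplete. Purely a sketch; not taken from the original discussion.
class BacktrackingAgent:
    def __init__(self, prior):
        self.beliefs = dict(prior)
        self.evidence_log = []          # raw likelihood tables, kept forever

    def observe(self, likelihoods):
        self.evidence_log.append(likelihoods)
        self.beliefs = self._update(self.beliefs, likelihoods)

    def revise_prior(self, new_prior):
        """Throw away the inference chain and replay all evidence from a new prior."""
        self.beliefs = dict(new_prior)
        for likelihoods in self.evidence_log:
            self.beliefs = self._update(self.beliefs, likelihoods)

    @staticmethod
    def _update(prior, likelihoods):
        # Missing likelihoods are treated as uninformative (factor of 1.0).
        unnormalized = {h: prior[h] * likelihoods.get(h, 1.0) for h in prior}
        total = sum(unnormalized.values())
        return {h: p / total for h, p in unnormalized.items()}

agent = BacktrackingAgent({"X": 0.5, "Y": 0.5})      # Z is not even on the list
agent.observe({"X": 0.1, "Y": 0.1, "Z": 0.9})        # evidence favoring Z is logged
agent.revise_prior({"X": 0.4, "Y": 0.4, "Z": 0.2})   # Z added; evidence replayed
print(agent.beliefs)                                 # Z now gets the credit it earned
```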