Is there any problem that might occur from an agent failing to do enough investigation? (Possibly ever, possibly just before taking some action that ends up being important)
It’s when it’s done a moderate amount of investigation that the error is highest. Disbelieving JFK’s assassination makes little difference to most people. If you investigate a little, you start believing in ultra-efficient government conspiracies. If you investigate a lot, you start believing in general miracles. If you do a massive investigation, you start believing in one specific miracle.
Basically there’s a problem when JFK’s assassination is relevant to your prediction, but you don’t have many other relevant samples.
It will never question its own sanity?
Technically, no: an expected utility maximiser doesn’t even have a self model. But in practice it might behave in ways that really look like it’s questioning its own sanity; I’m not entirely sure.
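For what it’s worth, here is a minimal sketch (in Python, with hypothetical actions, outcomes, and utilities invented purely for illustration) of what an expected utility maximiser amounts to formally: it scores each action by probability-weighted utility and picks the best one. Nothing in the formalism refers to the agent itself, which is the sense in which it has no self model.

```python
# Minimal expected-utility maximiser, purely as an illustration of the formalism.
# It chooses argmax over actions of sum_o P(o | a) * U(o); note that nothing
# in here refers to, or models, the agent itself.

def expected_utility(action, outcome_probs, utility):
    # outcome_probs[action] maps each outcome o to P(o | action)
    return sum(p * utility[o] for o, p in outcome_probs[action].items())

def choose_action(actions, outcome_probs, utility):
    return max(actions, key=lambda a: expected_utility(a, outcome_probs, utility))

# Hypothetical toy numbers, not taken from the discussion above.
actions = ["investigate_more", "act_now"]
outcome_probs = {
    "investigate_more": {"good": 0.7, "bad": 0.3},
    "act_now":          {"good": 0.5, "bad": 0.5},
}
utility = {"good": 10.0, "bad": -5.0}

print(choose_action(actions, outcome_probs, utility))  # -> investigate_more
```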
Why not? Is there something that prevents it from having a self model?
You’re right, it could, and that’s not even the issue here. The issue is that it only has one tool to change beliefs, Bayesian updating, and that tool has no impact with a prior of zero.
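Concretely, that is just Bayes’ theorem doing what it must; nothing here is specific to any particular agent design. With a prior of exactly zero, no likelihood can produce a nonzero posterior:

```latex
P(H \mid E) \;=\; \frac{P(E \mid H)\, P(H)}{P(E)} \;=\; \frac{P(E \mid H) \cdot 0}{P(E)} \;=\; 0
\qquad \text{whenever } P(H) = 0 \text{ and } P(E) > 0.
```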
That idea has issues. Where is the agent getting its priors? Does it have the ability to acquire new priors, or can it only chain forward from pre-existing priors? And if so, is there an ur-prior, the root of the whole prior hierarchy?
How will it deal with an Outside Context Problem?
It might, but that would be a different design. Not that that’s a bad thing, necessarily, but that’s not what is normally meant by priors.
Priors are a local term. Often enough, a prior was the posterior of the previous iteration.
But if the probability ever goes to zero, it stays there.
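A small sketch of that loop, with hypothetical hypotheses and likelihoods made up purely to show the mechanics: each round’s posterior becomes the next round’s prior, and a hypothesis that starts at (or ever reaches) zero stays at zero no matter what evidence arrives.

```python
# Iterated Bayesian updating over a discrete hypothesis set.
# The posterior from one round is the prior for the next; a zero prior is absorbing.

def update(prior, likelihood):
    """prior: {hypothesis: P(h)}, likelihood: {hypothesis: P(evidence | h)}."""
    unnormalised = {h: prior[h] * likelihood[h] for h in prior}
    z = sum(unnormalised.values())
    return {h: p / z for h, p in unnormalised.items()}

# Hypothetical example: H_zero starts with prior exactly 0.
beliefs = {"H_zero": 0.0, "H_a": 0.5, "H_b": 0.5}

# Even evidence that strongly favours H_zero cannot revive it.
evidence_stream = [
    {"H_zero": 0.99, "H_a": 0.10, "H_b": 0.05},
    {"H_zero": 0.99, "H_a": 0.20, "H_b": 0.10},
]

for likelihood in evidence_stream:
    beliefs = update(beliefs, likelihood)   # the posterior becomes the new prior
    print(beliefs["H_zero"])                # prints 0.0 every time
```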
Some people say that zero is not a probability :-)
But yes, if you have completely ruled out Z as impossible, you will not consider it any more and it will be discarded forever.
Unless the agent can backtrack and undo the inference chain to fix its mistakes (which is how humans operate and which would be a highly useful feature for a fallible Bayesian agent, in particular one which cannot guarantee that the list of priors it is considering is complete).
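One speculative way such a backtracking feature might be implemented (a sketch of the idea only, not a standard algorithm or anyone’s actual design): keep the raw evidence around as well as the beliefs, so that when a hypothesis turns out to have been wrongly ruled out, the agent can adopt a corrected prior and replay the whole evidence log from scratch, rather than trying to update its way out of an exact zero.

```python
# Speculative sketch of a "backtracking" Bayesian agent: it never throws away
# the evidence it has seen, so it can rebuild its beliefs from a corrected
# prior instead of trying to Bayes-update its way out of an exact zero.

def update(prior, likelihood):
    unnormalised = {h: prior.get(h, 0.0) * likelihood.get(h, 0.0) for h in prior}
    z = sum(unnormalised.values())
    return {h: p / z for h, p in unnormalised.items()}

class BacktrackingAgent:
    def __init__(self, prior):
        self.prior = dict(prior)
        self.evidence_log = []            # every likelihood ever observed
        self.beliefs = dict(prior)

    def observe(self, likelihood):
        self.evidence_log.append(likelihood)
        self.beliefs = update(self.beliefs, likelihood)

    def revise_prior(self, new_prior):
        # Backtrack: discard the inference chain, adopt a corrected prior
        # (e.g. one that gives a "ruled out" hypothesis some small nonzero mass),
        # and replay all the logged evidence from scratch.
        self.beliefs = dict(new_prior)
        for likelihood in self.evidence_log:
            self.beliefs = update(self.beliefs, likelihood)
        self.prior = dict(new_prior)

# Hypothetical usage: Z was assigned prior 0, then turns out to matter after all.
agent = BacktrackingAgent({"Z": 0.0, "not_Z": 1.0})
agent.observe({"Z": 0.9, "not_Z": 0.1})
print(agent.beliefs["Z"])                 # 0.0 -- stuck, as above

agent.revise_prior({"Z": 0.01, "not_Z": 0.99})
print(agent.beliefs["Z"])                 # now positive, because the chain was replayed
```

In this sketch the fix happens at the level of the prior rather than the update rule, which is roughly what undoing the inference chain amounts to.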