Your model of the world has been updated! The prior of the variable ‘Monster Near The Academy’ is now 0%.
Priors don’t get updated, posteriors do.
Moreover, if the posterior probability becomes 0, then you will be unable to recognize monsters afterwards, and you will not be able to further update your model for this variable. It looks like you are overupdating.
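(To make the second point concrete, here is a minimal sketch, not taken from the game; the function and numbers are invented for illustration. In Bayes’ rule the likelihood multiplies the prior, so a probability of exactly 0 is absorbing: no later evidence can raise it.)

```python
def bayes_update(prior, likelihood_if_monster, likelihood_if_no_monster):
    # Posterior P(monster | evidence) after one piece of evidence.
    numerator = likelihood_if_monster * prior
    denominator = numerator + likelihood_if_no_monster * (1 - prior)
    return numerator / denominator

# A small but nonzero probability can still be revived by strong evidence.
p = bayes_update(0.05, likelihood_if_monster=0.9, likelihood_if_no_monster=0.1)
print(p)  # ~0.32

# Exactly 0 is absorbing: the numerator is 0 no matter what the evidence looks like.
p = bayes_update(0.0, likelihood_if_monster=0.9, likelihood_if_no_monster=0.1)
print(p)  # 0.0, and every later update keeps it there
```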
Thanks for the comments!

That’s technically true, though it felt to me like such a common abuse of terminology that it could be allowed to slide. That said, if I just said “the probability of the variable”, that would avoid the problem. (That probability may still be listed as a “prior variable” the next time it’s used in a calculation… but then it’s a prior for that calculation, so that’s probably okay.)
Moreover, if the posterior probability becomes 0, then you will be unable to recognize monsters afterwards, and you will not be able to further update your model for this variable.
That’s true, too. I was thinking that the belief networks aren’t supposed to literally represent the protagonist’s complete set of beliefs about the world, just some set of explicitly-held hypotheses, and she’s still capable of realizing that something to which she assigned a 0% probability actually happened. After all, the boy could have been looking in her direction because of something that was neither her response nor a monster, say a beautiful bird… which wasn’t even assigned a 0% probability; it wasn’t represented in the model in the first place. But it’s not like she’d have been incapable of realizing that possibility had it been pointed out to her; she just didn’t think of it.
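(A small sketch of that distinction, with invented hypothesis names and numbers: a cause the model never represented is not the same as one that was represented and assigned 0%; the former has no slot for probability at all.)

```python
# Explicit hypotheses for "why is the boy looking this way?", with made-up numbers.
explicit_model = {
    "her_response": 0.75,
    "monster_nearby": 0.25,
}

# The bird was never written into the model, so it was not assigned 0%;
# it simply has no entry at all.
print(explicit_model.get("beautiful_bird"))  # None, rather than 0.0

# The listed probabilities sum to 1 only because the model quietly assumes the
# listed causes are exhaustive, which in this scene they are not.
print(sum(explicit_model.values()))  # 1.0
```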
While I was reading it, I got the impression that it was pointing at common mistakes, not just demonstrating correct behavior: the protagonist first sets the probability to zero based on naive trust (and because the player is not yet ready to handle an explicit model of the correctness of statements), but this gets corrected later in a realistic way.
If the game made a point of this sort of thing, it would give the (good!) impression that all examples in the game are approximations which need to be refined quite a bit to account for real-life details.
In hindsight, I see it’s not doing this effectively. Perhaps when she finds out the kid was wrong, she’s like “Whoops! We just gave a probability of zero to something which then immediately happened!! That’s just about as wrong as you can possibly get. We’d better account for that in our model.” Or something to that effect.
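(One way to cash that out, sketched below with invented numbers; this is only my reading of the “explicit model of the correctness of statements” idea mentioned above, not something specified here: treat the kid’s statement as noisy evidence with some error rate, so the probability drops sharply but never hits an unrecoverable 0.)

```python
def update_on_report(prior, p_report_if_monster, p_report_if_no_monster):
    # Posterior P(monster | the kid reports "no monsters"), treating the report
    # as evidence with an error rate rather than as a certainty.
    numerator = p_report_if_monster * prior
    return numerator / (numerator + p_report_if_no_monster * (1 - prior))

p_monster = 0.20  # before the kid says anything (made-up starting point)

# Naive trust would jump straight to 0, which the model could never recover from.
# Allowing, say, a 10% chance that the kid is simply wrong only lowers it:
p_monster = update_on_report(p_monster, p_report_if_monster=0.1, p_report_if_no_monster=0.9)
print(p_monster)  # ~0.027: low, but able to climb back up when monster-like evidence appears
```

That would also keep the later correction scene inside the model itself, instead of requiring the protagonist to step outside it.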