I also meant that if we make it a good enough tool, it might be valuable to use entirely independently of the game. If that's to be a goal, the tool would need to be carefully designed for it. That will likely introduce conflicting requirements, though, so it may not be worth it.
I’m not sure about the conflicting requirements. A Bayes-net backend with no integrated I/O, an I/O layer and GUI made specifically for the game, and the option of reusing or recoding some of that I/O and writing a new GUI for the separate tool seems like it wouldn’t introduce conflicting requirements, modulo code optimization and the increase in design and coding time.
I don’t think it’s worth it though, unless it turns out this kind of modular system is best anyway.
Correlations are entered with one belief as cause and one as effect, plus values for probabilityOfEffectGivenCause and probabilityOfEffectGivenNotCause.
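To make that concrete, here is a minimal sketch in Python of what a correlation entry plus the two obvious queries could look like. The Correlation record and helper names are mine, not part of the proposal; only the two probability fields come from the format above:

```python
from dataclasses import dataclass

@dataclass
class Correlation:
    cause: str                       # belief acting as cause
    effect: str                      # belief acting as effect
    p_effect_given_cause: float      # probabilityOfEffectGivenCause
    p_effect_given_not_cause: float  # probabilityOfEffectGivenNotCause

def p_effect(link: Correlation, p_cause: float) -> float:
    """Marginal P(effect), by total probability."""
    return (link.p_effect_given_cause * p_cause
            + link.p_effect_given_not_cause * (1.0 - p_cause))

def p_cause_given_effect(link: Correlation, p_cause: float) -> float:
    """Posterior P(cause | effect observed), by Bayes' rule."""
    return link.p_effect_given_cause * p_cause / p_effect(link, p_cause)

# A1 = "Person X is evil" with prior .1; Z = "a murder happens".
link = Correlation("A1", "Z", p_effect_given_cause=1.0,
                   p_effect_given_not_cause=0.1)
print(p_effect(link, 0.1))              # 0.19
print(p_cause_given_effect(link, 0.1))  # ~0.526
```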
This doesn’t sound like it will scale up easily. Correlation maintenance has to be done manually whenever new causes are linked to an existing effect at runtime, which means the routine that adds a new cause has to know a lot about Bayesian updating to do everything properly.
For an extreme example: suppose P(Z|¬A1) = .01 for A1 = “Person X is evil” and Z = “a murder happens”, where the .01 stands for “someone not being modeled kills”. If you later add the 999 other people to the model without properly maintaining each “other cause” probability, those residual .01 leaks compound, and you end up with a near-certain murder even when no one is evil.
Or for a simpler example: there are two people, but you don’t know about the second one. P(Z|¬A1) = .1, because P(A1) = P(A2) = .1 (assuming an evil person kills for certain, and independently), and thus the base rate is P(Z) = 1 − .9 × .9 = .19. If you later learn of A2 and add it to the network, you have to know that P(Z|¬A1) = .1 meant “there is still a .1 chance that A2 holds, we just don’t know about A2 yet!”, and subtract this from the (A1 → Z) correlation; otherwise P(Z|¬A1&¬A2) = P(Z) = .19, which is clearly wrong, since with both suspects cleared it should be ~0.
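Here is a sketch of that failure mode, under my assumption that multiple causes combine noisy-OR style, with each link’s stored P(Z|¬Ai) acting as a per-link leak:

```python
# Noisy-OR combination: the effect fails to happen only if every link fails
# to fire. A link fires with p_given_cause when its cause is active, and
# with its "leak" (the stored P(Z | not A_i)) when it is not.
def p_effect_noisy_or(links, active):
    p_no_effect = 1.0
    for (p_given_cause, leak), is_active in zip(links, active):
        p_no_effect *= 1.0 - (p_given_cause if is_active else leak)
    return 1.0 - p_no_effect

# One modeled person; the .1 leak stands in for the unknown A2.
print(p_effect_noisy_or([(1.0, 0.1)], [False]))                 # 0.1

# Naively add A2 with the same numbers, leaving A1's stale leak in place:
print(p_effect_noisy_or([(1.0, 0.1)] * 2, [False] * 2))         # 0.19
# ...but with both suspects cleared, this should be ~0, not .19.

# The 999-extra-people version of the same mistake, with .01 leaks:
print(p_effect_noisy_or([(1.0, 0.01)] * 1000, [False] * 1000))  # ~0.99996
```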
Overall, I think we should let the base rates speak for themselves. If P(Z) = .1, P(A1) = .1, and P(A1|Z) = .5, then A1 accounts for P(A1|Z) × P(Z) = .05 of the base rate, so we know there’s enough room left in it for an A2 at the same rate and weight. Adding a new cause should require checking the base rate, reducing its unexplained remainder by the rate/weight of the new cause, and warning (or adjusting the rate upwards) if the new cause claims more than what’s left. Having to check the other correlations seems like way too much trouble. Might be worth taking a look at how other applications have done it. (two examples)
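Concretely, the bookkeeping might look something like this, assuming each attached cause’s “claim” on the base rate is P(Ai|Z) × P(Z), and ignoring both overlap between causes and the feedback of a raised base rate into earlier claims (the function name is made up):

```python
def add_cause(p_effect, claims, p_cause_given_effect):
    """Attach a new cause, checking it against the unexplained remainder
    of the effect's base rate. claims holds P(A_i and Z) for each cause
    already attached."""
    claim = p_cause_given_effect * p_effect
    room = p_effect - sum(claims)
    if claim > room:
        # Excess: warn, or adjust the base rate upwards to make room.
        p_effect += claim - room
        print(f"warning: base rate raised to {p_effect:.3f}")
    return p_effect, claims + [claim]

p_z, claims = 0.1, []
p_z, claims = add_cause(p_z, claims, 0.5)  # A1 claims .05 of the .1 rate
p_z, claims = add_cause(p_z, claims, 0.5)  # A2 claims the remaining .05
p_z, claims = add_cause(p_z, claims, 0.5)  # no room left: warn and adjust
```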
My preferred approach, however, would be to use odds (and Bayes’ Rule), perhaps both internally and at the user level.
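In odds form, Bayes’ Rule is just posterior odds = prior odds × likelihood ratio, which reuses the numbers above neatly:

```python
def to_odds(p):  return p / (1.0 - p)
def to_prob(o):  return o / (1.0 + o)

# Odds form of Bayes' Rule: posterior odds = prior odds * likelihood ratio.
# LR = P(Z|A1) / P(Z|~A1) = 1.0 / 0.1 = 10 for the example above.
posterior_odds = to_odds(0.1) * 10.0
print(to_prob(posterior_odds))   # ~0.526, matching the posterior above
```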
The “perceived base rate” vs. “real base rate” issue keeps nagging me. We may have to either make the game maintain background “true” rates and the player’s beliefs as two separate networks, or use some hack to do away with explicit “true” rates entirely (e.g. masked belief nodes for the base rates of other things, with hidden priors invisible to the player).
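One possible shape for the two-network option; everything here is hypothetical, just to make the hidden-priors idea concrete:

```python
from dataclasses import dataclass, field

@dataclass
class BeliefNode:
    name: str
    prior: float
    hidden: bool = False   # masked base-rate node, invisible to the player

@dataclass
class World:
    true_net: dict = field(default_factory=dict)    # background "true" rates
    belief_net: dict = field(default_factory=dict)  # the player's beliefs

    def reveal(self, name: str):
        """When the player learns of a node, seed their belief network
        from the true network (or a distorted perception of it)."""
        true = self.true_net[name]
        self.belief_net[name] = BeliefNode(true.name, true.prior)
```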
Anyway, sorry for the long stream-of-consciousness ramble. It was surprisingly hard to externalize this, given the ease I usually have working with Bayesian updating.