I meant something like the difference between:
Bob says: “There will be an assassination...”
Player’s notebook is automatically filled with this information. The player can assign an expected probability.
Bob says: “Alice told me so”
Player’s notebook is automatically filled with this information, marked as evidence for the previous claim. The probability assigned to this being true will automatically update the assassination claim.
Or what I was considering yesterday:
Bob says: “There will be an assassination...”
Player manually writes this into his notebook.
Bob says: “Alice told me so”
Player manually writes this into his notebook and manually marks it as evidence for the previous claim. The automatic updating would still happen after this has been done.
Alternatively, the player might just go ahead and write in a conjunction for “Alice told me so” & “Alice knows what she is talking about” & “Alice tells the truth” instead.
Pro: Learning to extract facts from statements seems like a useful skill to teach.
Con: Without letting the game know about the intended meaning of the facts, it would be very hard for it to find and correct faulty reasoning. It might also turn into too much bookkeeping for the player.
I’m leaning more toward a middle ground now, where the game presents all facts that are part of a statement, but it is still up to you to connect them to the right place in the graph. We’d have to experiment to find what actually works, of course.
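To make that middle ground concrete, here is a minimal sketch (Python; the class names, numbers, and the blending rule for uncertain evidence are all invented for illustration) of a claim plus an evidence link that automatically updates the claim it is connected to:

```python
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    probability: float  # player-assigned, updated as evidence is linked

@dataclass
class EvidenceLink:
    evidence: Claim
    target: Claim
    p_evidence_if_target_true: float    # P(E | claim)
    p_evidence_if_target_false: float   # P(E | not claim)

    def apply(self) -> None:
        """Bayes-update the target, weighted by how much the player
        trusts the evidence statement itself."""
        p = self.target.probability
        posterior_if_true = (self.p_evidence_if_target_true * p) / (
            self.p_evidence_if_target_true * p
            + self.p_evidence_if_target_false * (1 - p)
        )
        # Simple linear blend between "evidence certainly true" and "ignore it";
        # a real implementation might use Jeffrey conditioning instead.
        w = self.evidence.probability
        self.target.probability = w * posterior_if_true + (1 - w) * p

# Bob's two statements, connected by the player in the notebook:
assassination = Claim("There will be an assassination", probability=0.05)
alice = Claim("Alice told me so (and Alice is reliable)", probability=0.9)
EvidenceLink(alice, assassination, 0.8, 0.1).apply()
print(round(assassination.probability, 3))  # ~0.272
```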
I also meant that if we make it a good enough tool, maybe it would be valuable to use it entirely independently of the game. If that is to be a goal, it would need to be carefully designed for. That will likely introduce conflicting requirements, though, so it may not be worth it.
I probably won’t finish up something demoable today either. I’ve mostly just been brainstorming on mechanics and the architecture to support them.
Some more random notes from the prototyping:
There are beliefs and correlations between beliefs.
Beliefs are entered with a prior for how likely they are without any of the given correlations.
Correlations are entered with one belief as cause and another as effect, plus values for probabilityOfEffectGivenCause and probabilityOfEffectGivenNotCause (sketched in code below).
Conjunctions and disjunctions can be expressed as special cases of beliefs.
The full complexity should not be introduced all at once.
To guide giving probabilities, they could be converted to frequencies in time or space. (“So with no evidence, you believe there would be an assassination like this every week?”; “[...]right this hour in one city in the country”)
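A minimal sketch of that data model (Python; everything beyond the names already used above, such as probabilityOfEffectGivenCause, is invented purely for illustration):

```python
from dataclasses import dataclass

@dataclass
class Belief:
    name: str
    prior: float  # how likely this is with none of the listed correlations applied

@dataclass
class Correlation:
    cause: Belief
    effect: Belief
    probabilityOfEffectGivenCause: float
    probabilityOfEffectGivenNotCause: float

    def implied_effect_probability(self) -> float:
        """P(effect) implied by this single correlation plus the cause's prior."""
        p = self.cause.prior
        return (self.probabilityOfEffectGivenCause * p
                + self.probabilityOfEffectGivenNotCause * (1 - p))

# Conjunctions/disjunctions as special-case beliefs: deterministic AND/OR nodes.
def conjunction_prior(a: Belief, b: Belief) -> float:
    return a.prior * b.prior  # assumes independence, just for the sketch

def disjunction_prior(a: Belief, b: Belief) -> float:
    return 1 - (1 - a.prior) * (1 - b.prior)

# Frequency guidance: a per-week prior of ~0.5 could be phrased back to the
# player as "an assassination like this roughly every other week?".
evil = Belief("Person X is evil", prior=0.1)
murder = Belief("There will be an assassination", prior=0.02)
corr = Correlation(evil, murder,
                   probabilityOfEffectGivenCause=0.5,
                   probabilityOfEffectGivenNotCause=0.01)
print(round(corr.implied_effect_probability(), 3))  # 0.5*0.1 + 0.01*0.9 = 0.059
```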
My biggest problem is that I have no idea how to actually score a player if he gets to come up with his own probabilities in a fictional world. Maybe the game needs to have some way of explicitly finding out the “right” values for some priors and correlations.
Sounds promising! I’ll hopefully have the time to put together a design/prototype of my own tomorrow.
I meant something like the difference between:
Either of those could work, but I’m worried that the steps the latter option requires would make the player feel like she was doing tedious work that could just as easily have been automated. I’m not sure about that, though: getting to enter the data could also feel rewarding. We’ll just have to experiment with it.
My biggest problem is that I have no idea how to actually score a player if he gets to come up with his own probabilities in a fictional world. Maybe the game needs to have some way of explicitly finding out the “right” values for some priors and correlations.
Well, if different beliefs have different consequences in the world (“if you believe the assassin is in the bell tower, go there to stop him”) and the player is scored on his ability to achieve things in the world, that also implicitly scores him on probabilities that are maximally correct / useful. But this might not be explicit enough if the player has no clue what the probabilities should be and feels like he’s just hopelessly flailing around.
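A toy illustration of that implicit scoring, with a completely made-up scenario and payoffs: the player's stated belief drives a go/don't-go decision, and only the achieved outcome is scored.

```python
import random

TRUE_P = 0.7            # true chance the assassin is in the tower, hidden from the player
COST, REWARD = 1.0, 5.0  # cost of going there, reward for stopping the assassin

def play_round(player_belief: float) -> float:
    """The player goes to the tower iff the expected reward beats the cost."""
    go = player_belief * REWARD > COST
    assassin_there = random.random() < TRUE_P
    if go:
        return (REWARD if assassin_there else 0.0) - COST
    return 0.0

def average_score(player_belief: float, rounds: int = 10_000) -> float:
    return sum(play_round(player_belief) for _ in range(rounds)) / rounds

# A belief close to the truth earns ~2.5 per round; a badly miscalibrated
# belief of 0.1 makes the player stay home and score exactly 0.
print(average_score(0.7), average_score(0.1))
```

Which also shows the worry: the score tells the player that staying home was bad, but not what the probability should have been.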
I also meant that if we make it a good enough tool, maybe it would be valuable to use it entirely independently of the game. If that is to be a goal, it would need to be carefully designed for. That will likely introduce conflicting requirements, though, so it may not be worth it.
I’m not sure about conflicting requirements. A Bayes-net backend without integrated I/O, with an I/O layer and GUI made specifically for the game, and the possibility of reusing or recoding some of that I/O and writing a new GUI for the separate tool, doesn’t seem like it would introduce conflicting requirements, modulo code optimization and the increase in design and coding time.
I don’t think it’s worth it though, unless it turns out this kind of modular system is best anyway.
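One way to read that modular split, as a sketch only (the interface and method names are invented, not an existing API):

```python
from typing import Protocol

class BeliefNetworkBackend(Protocol):
    """The Bayes-net core, with no I/O or GUI of its own."""
    def add_belief(self, name: str, prior: float) -> None: ...
    def add_correlation(self, cause: str, effect: str,
                        p_effect_given_cause: float,
                        p_effect_given_not_cause: float) -> None: ...
    def probability_of(self, name: str) -> float: ...

# The game's GUI and a hypothetical standalone tool would each be a separate
# client of an object implementing this protocol; only the I/O and presentation
# layers would differ between them.
```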
Correlations are entered with one belief as cause and another as effect, plus values for probabilityOfEffectGivenCause and probabilityOfEffectGivenNotCause
This doesn’t sound like it’ll scale up easily. Correlation maintenance needs to be done manually if new causes are linked to the same effect at runtime, which means the routine that adds a new cause has to know a lot about Bayesian updating to do everything properly.
For an extreme example: suppose P(Z|¬A1) = .01, where A1 = “Person X is evil” and Z = “a murder happens”, with the .01 meant to cover “someone not being modeled kills”. If you later add the 999 other people into the model without properly maintaining each such “other cause” probability, you end up with a near-certain murder even given that no one is evil.
Or for a simpler example: there are two people, but you don’t know about the second one. P(Z|¬A1) = .1, because P(A1) = P(A2) = .1, and thus P(Z) (the base rate) = .19. If you later learn of A2 and add it to the network, you have to know that P(Z|¬A1) = .1 meant “there is still a .1 chance of Z via A2, which we just don’t know about yet”, and subtract that from the (A1 → Z) correlation; otherwise P(Z|¬A1 & ¬A2) comes out at P(Z) = .19, which is clearly wrong.
Overall, I think we should let the base rates speak for themselves. If P(Z) = .1, P(A1) = .1, and P(A1|Z) = .5, we know there’s enough room in the base rate for an A2 with the same rate and weight. Adding a new cause should mean checking the base rate and reducing the unexplained remainder by the rate/weight of the new cause, and warning or adjusting the rate upwards if there’s an excess. Having to check all the other correlations seems like way too much trouble.
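A quick numeric check of the two examples above (the noisy-OR-style combination of per-edge residuals is my assumption about how a naive engine might merge correlations, not something either design has specified):

```python
# Simpler example: two suspects, each guilty with probability .1, and the
# murder happens iff at least one of them is guilty.
p_a1 = p_a2 = 0.1
p_z = 1 - (1 - p_a1) * (1 - p_a2)
print(p_z)  # 0.19, the true base rate

# With only A1 modeled, the residual P(Z | not A1) = .1 silently stands in for
# the not-yet-modeled A2. Add A2 later with its own .1 residual and combine the
# residuals noisy-OR style, and the unknown cause gets double-counted:
residual_a1 = residual_a2 = 0.1
print(1 - (1 - residual_a1) * (1 - residual_a2))  # 0.19 == P(Z): "neither is guilty" changed nothing

# Extreme example: 999 newly modeled people, each edge keeping a .01 residual,
# makes the murder near-certain even when nobody is evil:
print(1 - (1 - 0.01) ** 999)  # ~0.99996

# The fix suggested above: when A2 becomes an explicit cause, its share is
# subtracted from the residual, which here drops to 0, giving
# P(Z | not A1, not A2) = 0 as it should.
```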
Might be worth taking a look at how other applications have done it. (two examples). My preferred approach, however, would be to use odds (and Bayes’ rule), perhaps both internally and at the user level.
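For reference, the odds form is just prior odds × likelihood ratio = posterior odds; a tiny sketch with made-up numbers:

```python
def prob_to_odds(p: float) -> float:
    return p / (1 - p)

def odds_to_prob(o: float) -> float:
    return o / (1 + o)

# Prior: 5% chance of an assassination, i.e. odds of 1:19.
prior_odds = prob_to_odds(0.05)

# "Alice told me so" is, say, 8 times likelier if an assassination really is
# planned than if it isn't (likelihood ratio 8).
posterior_odds = prior_odds * 8.0

print(round(odds_to_prob(posterior_odds), 3))  # ~0.296
```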
The “perceived base rate” vs. “real base rate” issue keeps nagging me. We may have to either have the game maintain the background “true” rates and the player’s beliefs as two separate networks, or use some hack to do away with explicit “true” rates entirely (e.g. masked belief nodes for the base rates of other things, with hidden priors invisible to the player).
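The two-network option might look roughly like this (purely a sketch; the squared-distance scoring hook is one possible way of using the hidden rates, not something agreed on here):

```python
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class GameWorld:
    true_rates: Dict[str, float] = field(default_factory=dict)      # authored, hidden from the player
    player_beliefs: Dict[str, float] = field(default_factory=dict)  # whatever the player has entered

    def score_belief(self, name: str) -> float:
        """One possible scoring hook: penalize squared distance from the true rate."""
        truth = self.true_rates[name]
        belief = self.player_beliefs.get(name, 0.5)
        return -(belief - truth) ** 2

world = GameWorld(true_rates={"assassination": 0.19},
                  player_beliefs={"assassination": 0.3})
print(world.score_belief("assassination"))  # ~ -0.0121
```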
Anyway, sorry for the long stream-of-consciousness ramble. It was surprisingly hard to externalize this, given the ease I usually have working with Bayesian updating.