I don’t know why this doesn’t happen in real life, beyond a general sense that whatever weighting function we use isn’t perfectly Bayesian and doesn’t fit in the class I would call “reasonable”. I realize this is a weakness of this model and something that needs further study.
I’ll take a stab at this.
You’ve got a prior, P(dog I meet is dangerous) = 0.99. (Maybe in 99 of your last 100 encounters with dogs, you determined that the dog was dangerous.) You’ve also got a sensation, “dog is wagging its tail,” and some associated conditional probabilities. Your brain combines them correctly according to Bayes’ rule; since your prior is strong, it spits out something like 95% probability that the dog is dangerous.
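To make that concrete, here's the arithmetic in a few lines of Python. The two conditional probabilities are invented; they're picked only so that a 0.99 prior lands near the 95% figure:

```python
# Toy version of the update above. The two likelihoods are invented; they're
# chosen only so that a 0.99 prior comes out near the ~95% posterior.
prior_dangerous = 0.99
p_wag_given_dangerous = 0.15   # assumed: dangerous dogs rarely wag their tails
p_wag_given_friendly = 0.80    # assumed: friendly dogs usually do

numerator = p_wag_given_dangerous * prior_dangerous
evidence = numerator + p_wag_given_friendly * (1 - prior_dangerous)
print(numerator / evidence)    # ~0.95: still very probably dangerous
```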
It then determines that the best thing to do is act afraid. It has no further use for the 95% number—it’s not going to act with 5% less fear, or any less fear, just because there’s a small chance the dog might not eat you. That would be a good way to get eaten. So it attaches the “dangerous” label to the dog and moves on to screaming/hiding/running. (I see alkjash has an idea about rounding that might be similar.)
You go to update your prior. You’ve seen one more dangerous dog, so the strength of your prior increases to 100⁄101.
The mistake your brain makes is updating based on the label (dangerous/not dangerous) instead of the calculated probability of danger. It kind of makes sense that the label would be the more salient node, since it’s the one that determines your course of action.
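Here's a toy sketch of that difference (all the numbers are made up, and this isn't meant as a claim about actual neural machinery): if each encounter adds the thresholded label to the running count, the prior can only climb; if it adds the calculated posterior instead, the prior at least creeps back down with every friendly dog.

```python
# Toy contrast between updating on the label and updating on the probability.
# All numbers are invented for illustration.

def posterior(prior, p_wag_given_dangerous=0.15, p_wag_given_friendly=0.80):
    """P(dog is dangerous | it wags its tail), by Bayes' rule."""
    num = p_wag_given_dangerous * prior
    return num / (num + p_wag_given_friendly * (1 - prior))

label_count, prob_mass, total = 99.0, 99.0, 100.0
for _ in range(50):                              # 50 tail-wagging (friendly) dogs in a row
    post = posterior(label_count / total)
    label_count += 1.0 if post > 0.5 else 0.0    # update on the thresholded label

    prob_mass += posterior(prob_mass / total)    # update on the calculated probability
    total += 1.0

print(label_count / total)   # stays pinned above 0.99: the prior is trapped
print(prob_mass / total)     # creeps downward with every friendly dog
```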
This explanation isn’t totally convincing (why can’t I just store both the label and the probability? is that too much to ask of my monkey brain?), but it does match what I feel like my brain is doing when it traps itself in a belief.
Because you didn’t actually get eaten by the other 99 dangerous dogs; you were just in situations where you concluded you could have been killed or severely injured had things gone differently. A “near miss”. So you have 99 “near misses”. And those near misses share common behaviors—maybe all the man-eating dogs wagged their tails too. So you conclude that this (actually friendly) dog is just a moment from eating you, therefore it falls in the class of “near misses”, therefore +1 to the count of dangerous encounters.
There’s another issue: your ‘monkey brain’ can’t really afford to store every encounter as a separate bin. It is compressing.
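One way that compression might look (purely illustrative; the representation is an assumption, not something we know about brains): keep only two running counts, like the parameters of a Beta distribution, and throw away everything else about each encounter, including whatever would distinguish "actually bitten" from "near miss".

```python
# One way the compression might look (purely illustrative, an assumed
# representation): keep two running counts and forget the encounters themselves.
from collections import namedtuple

DogPrior = namedtuple("DogPrior", ["dangerous", "friendly"])

def update(prior: DogPrior, judged_dangerous: bool) -> DogPrior:
    """Fold one encounter into the summary; everything else about it is lost."""
    if judged_dangerous:
        return DogPrior(prior.dangerous + 1, prior.friendly)
    return DogPrior(prior.dangerous, prior.friendly + 1)

prior = DogPrior(dangerous=99, friendly=1)    # the 99-of-100 history above
prior = update(prior, judged_dangerous=True)  # the friendly dog, filed as one more "near miss"
print(prior.dangerous / (prior.dangerous + prior.friendly))  # 100/101
```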
It’s a bit more complex than that and depends on neural architecture details we don’t know yet, but I suspect we can and will accidentally make AI systems with trapped priors.
Doesn’t this model predict that people would be way more stupid than they actually are?
It predicts phobias, partisanship, and stereotypes. It doesn’t predict generalized stupidity.
Maybe you think this model predicts more phobias than we actually see?