Because you weren't actually eaten by the other 99 dangerous dogs; you were just in situations where you concluded you could have been killed or severely injured had things gone differently. A "near miss". So you have 99 near misses. And those near misses share common behaviors: maybe all the man-eating dogs wagged their tails too. So you conclude that this (actually friendly) dog is just a moment away from eating you, therefore it falls into the class of 'near misses', therefore +1 encounters.
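Here's a minimal simulation sketch of that loop. All of the numbers (the 0.2/0.8 evidence-vs-prior blend, the 0.5 classification threshold, the ±0.01 update) are made up purely for illustration; the point is only that once the prior dominates perception, every ambiguous encounter gets filed as a near miss, and each filing entrenches the prior further:

```python
import random

def encounter(prior_fear, true_danger_rate=0.01):
    """One dog encounter. The agent never observes danger directly;
    it classifies the encounter, and the classification is biased
    by its prior fear (weights are invented for illustration)."""
    actually_dangerous = random.random() < true_danger_rate
    # Perceived threat blends weak raw evidence with the strong prior:
    # an ambiguous wagging tail reads as menace when fear is high.
    perceived_threat = 0.2 * actually_dangerous + 0.8 * prior_fear
    return perceived_threat > 0.5  # classified as a "near miss"?

fear = 0.9  # trauma-inflated prior
counts = {"near_miss": 0, "fine": 0}
for _ in range(100):
    if encounter(fear):
        counts["near_miss"] += 1
        fear = min(1.0, fear + 0.01)  # each "near miss" entrenches the prior
    else:
        counts["fine"] += 1
        fear = max(0.0, fear - 0.01)
print(counts, f"final fear={fear:.2f}")
```

With fear starting at 0.9, the prior's contribution alone (0.8 × 0.9 = 0.72) already clears the threshold, so all 100 encounters are classified as near misses and fear saturates at 1.0, even though almost none of the dogs were dangerous. Start it at 0.3 and the loop runs the other way.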
You have another issue: your 'monkey brain' can't really afford to store every encounter as a separate bin. It compresses.
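A toy sketch of that compression, with invented feature names and an invented binning rule: instead of keeping each encounter, keep one count per coarse bin, so anything sharing a salient feature with past near misses gets merged into the same bin and the distinguishing details are lost.

```python
from collections import Counter

bins = Counter()

def compress(features: frozenset) -> str:
    # Any encounter sharing a salient feature with past "near misses"
    # is filed in the same bin; everything else about it is discarded.
    if "wagging_tail" in features:
        return "near_miss_dog"
    return "other"

encounters = [
    frozenset({"wagging_tail", "growling"}),  # genuinely scary
    frozenset({"wagging_tail", "licking"}),   # actually friendly
]
for e in encounters:
    bins[compress(e)] += 1

print(bins)  # both land in 'near_miss_dog': +1 encounter each
```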
It's a bit more complex than that, and it depends on neural-architecture details we don't know yet, but I suspect we can and will accidentally build AI systems with trapped priors.