I feel like I don’t understand how this model explains the biggest mystery here: experiences sometimes having the reverse of the impact on your beliefs that they should.
The more technical version of this same story is that habituation requires a perception of safety, but (like every other perception) this one depends on a combination of raw evidence and context. The raw evidence (the Rottweiler sat calmly wagging its tail) looks promising. But the context is a very strong prior that dogs are terrifying. If the prior is strong enough, it overwhelms the real experience. Result: the Rottweiler was terrifying. Any update you make on the situation will be in favor of dogs being terrifying, not against it!
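To make sure I’m reading the model right, here’s a toy sketch in log-odds. All the numbers and the blending weight are made up, and treating the percept’s log-odds as the likelihood term is my crude stand-in, not your exact math: the percept is a blend of prior and raw evidence that the prior dominates, and the belief then updates on the percept rather than on the raw evidence.

```python
import math

def sigmoid(x: float) -> float:
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Made-up numbers, just to make the story concrete.
prior_logodds = 4.0        # strong prior that dogs are terrifying (~98%)
evidence_logodds = -1.0    # the calm Rottweiler mildly favors "safe"

# Perception as a prior-dominated blend of context and raw evidence
# (the 0.9 weight is an assumption, standing in for "the prior is strong
# enough to overwhelm the real experience").
w_context = 0.9
percept_logodds = w_context * prior_logodds + (1 - w_context) * evidence_logodds

# The belief then updates on the percept, not the raw evidence, so the
# encounter gets filed as one more terrifying dog. Treating the percept's
# log-odds as the log-likelihood ratio is a simplification.
posterior_logodds = prior_logodds + percept_logodds

print(round(sigmoid(prior_logodds), 3))      # 0.982  before the encounter
print(round(sigmoid(percept_logodds), 3))    # 0.971  the encounter still *felt* terrifying
print(round(sigmoid(posterior_logodds), 4))  # 0.9994 after: even more terrified
```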
Shouldn’t your experience still be less terrifying than you expected it to be, because you’re combining your dogs-are-terrifying-at-level-10 prior with the raw evidence (however constricted that channel is)? So your update should still be against dogs being terrifying at level 10 (maybe down to level 9.9?).
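Concretely, that’s the textbook update (same made-up numbers as above, so purely illustrative): the posterior has to land below the prior as long as the raw evidence favors "safe" at all.

```python
import math

def sigmoid(x: float) -> float:
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

# Plain Bayes in log-odds: posterior = prior + log-likelihood ratio of the
# raw evidence. However constricted the channel, if the calm Rottweiler
# favors "safe" even slightly, the update goes *against* terror.
prior_logodds = 4.0      # "dogs are terrifying at level 10" (made-up number)
evidence_logodds = -1.0  # weak but real evidence toward "safe"

posterior_logodds = prior_logodds + evidence_logodds   # 3.0 < 4.0

print(round(sigmoid(prior_logodds), 3))      # 0.982  level 10, so to speak
print(round(sigmoid(posterior_logodds), 3))  # 0.953  level 9.9-ish, never higher
```

So the question, as I see it, is what licenses updating on the percept (as in the first sketch) instead of on the raw evidence.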
Maybe the answer is the thing smountjoy said below in response to your caption: that we don’t have gradations in our beliefs about things (dogs are either terrifying or not), so the encounter just becomes another example of dogs being terrifying to update on. FWIW that sounds unlikely to me; in my experience people do seem to have gradations in how evil Republicans are or how terrifying dogs are. Though maybe that gets disabled in these cases, which would explain it.
It could also be that the brain weights the prior by a factor greater than 1 when combining it with the evidence. That way, we don’t lose the gradation.
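A minimal sketch of that idea (the weight and the numbers are made up): if the prior’s log-odds get multiplied by a factor w > 1 before the evidence is added, the posterior stays a graded quantity but can still come out more extreme than the prior, even when the raw evidence mildly favors "safe".

```python
import math

def sigmoid(x: float) -> float:
    """Convert log-odds to a probability."""
    return 1.0 / (1.0 + math.exp(-x))

prior_logodds = 4.0      # strong "dogs are terrifying" prior (made-up)
evidence_logodds = -1.0  # the calm Rottweiler, mildly favoring "safe"
w_prior = 1.3            # overweighting factor on the prior (assumed)

# Overweighted update: the prior counts for more than it should, but the
# result is still a continuous degree of belief, so gradations survive.
posterior_logodds = w_prior * prior_logodds + evidence_logodds   # 4.2 > 4.0

print(round(sigmoid(prior_logodds), 3))      # 0.982  before
print(round(sigmoid(posterior_logodds), 3))  # 0.985  after: slightly *more* terrified
```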