Maybe I’m missing some nuances here, but couldn’t we just say we’re surprised when “we are presented with a new degree of freedom in a (stochastic or deterministic) model we previously had about the world”?
Take the 52 cards example. We expect to see a number and a suit on the card. If we’re then presented with a card showing an unfamiliar number or suit (or any other gibberish), then our model has been falsified: there was one more possible outcome (degree of freedom) that we were previously unaware of.
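To make that concrete, here’s a toy sketch of my own (not something from the original discussion), treating “surprise” as surprisal, i.e. -log2 of the probability our model assigns to the outcome. An outcome the model gives zero probability to has infinite surprisal, which is just another way of saying the model has been falsified rather than merely updated:

```python
import math

# Our model of a card draw: 13 ranks x 4 suits, uniform over 52 outcomes.
card_model = {(rank, suit): 1 / 52
              for rank in range(1, 14)
              for suit in ("spades", "hearts", "diamonds", "clubs")}

def surprisal_bits(model, outcome):
    """Surprisal of an outcome under the model, in bits."""
    p = model.get(outcome, 0.0)
    return math.inf if p == 0 else -math.log2(p)

print(surprisal_bits(card_model, (7, "hearts")))      # ~5.7 bits: an ordinary draw
print(surprisal_bits(card_model, (0, "wingdings")))   # inf: a degree of freedom the model never had
```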
Then why should we be surprised when a friend wins the lottery?
I think it’s because our minds unconsciously assign substantial weight to hypotheses we’d consciously dismiss as silly, along the lines of “what if my friend wins the lottery this time”, and because our pattern-seeking hardware makes us place too much weight on fixed-coin hypotheses, and so on.
In other words, we’ve subconsciously singled out a small number of outcomes to keep an eye on, despite our conscious belief that these should represent a vanishing fraction of the probability mass. Thus the potential for surprise.
Were we Bayesians instead of Godshatter (and if we somehow had a prior assigning overwhelming probability to this lottery being genuinely fair), then our friend winning might not surprise us.
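A similarly toy sketch of the lottery case (the numbers are made up purely for illustration): even a sliver of subconscious prior on a “rigged in my friend’s favour” hypothesis gets hugely amplified by the win, so a mind that tracks such hypotheses at all finds the outcome crying out for explanation, whereas a prior that puts everything on “fair” leaves nothing to update:

```python
# Hypotheses about the lottery, and what our friend's win does to them.
prior = {"fair": 0.999999, "rigged_for_friend": 0.000001}
likelihood_friend_wins = {"fair": 1e-8, "rigged_for_friend": 0.9}

# Standard Bayes: posterior ∝ prior × likelihood, normalised.
evidence = sum(prior[h] * likelihood_friend_wins[h] for h in prior)
posterior = {h: prior[h] * likelihood_friend_wins[h] / evidence for h in prior}
print(posterior)  # "rigged_for_friend" now dominates (~0.99)

# With prior = {"fair": 1.0}, the posterior stays {"fair": 1.0}:
# the win is just one more equally improbable ticket, and nothing to be surprised by.
```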