I think it’s because our minds unconsciously assign substantial weight to a number of hypotheses we’d consciously conclude are silly, along the lines of “what if my friend won the lottery this time”, along with pattern-seeking hardware that makes us place too much weight on fixed-coin hypotheses, etc.
In other words, we’ve subconsciously singled out a small number of outcomes to keep an eye on, despite our conscious belief that these should represent a vanishing fraction of the probability mass. Thus the potential for surprise.
Were we Bayesians instead of Godshatter (and if we somehow had a prior that placed overwhelming probability on this lottery being genuinely fair), then our friend winning might not surprise us.
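To make this concrete, here's a minimal sketch (with made-up numbers: ten million tickets, a one-in-a-billion prior on a fixed lottery) of the two ideas above. Under the fair-lottery model, every specific winner carries identical surprisal, so an ideal Bayesian has no singled-out outcomes to be startled by. But even a vanishing prior on a "fixed in my friend's favor" hypothesis gets an enormous Bayesian boost when the friend actually wins:

```python
import math

N = 10_000_000          # hypothetical number of ticket holders
p_fair = 1 / N          # chance any *specific* ticket wins a fair lottery

# Surprisal (in bits) of an observed outcome: -log2 P(outcome).
# It's the same for "my friend wins" as for "stranger #4,137,205 wins".
surprisal = -math.log2(p_fair)

# The "fixed-coin" hypothesis: a tiny prior that the lottery is rigged
# in my friend's favor, under which the friend wins with certainty.
prior_rigged = 1e-9
p_win_given_rigged = 1.0

# Posterior via Bayes' rule after observing the friend win.
posterior_rigged = (prior_rigged * p_win_given_rigged) / (
    prior_rigged * p_win_given_rigged + (1 - prior_rigged) * p_fair
)

print(f"surprisal of any specific winner: {surprisal:.1f} bits")
print(f"posterior on 'rigged for friend': {posterior_rigged:.4f}")
```

With these numbers the posterior on the rigged hypothesis jumps from one-in-a-billion to roughly one percent, a seven-order-of-magnitude update from a single observation. That's the sense in which subconsciously tracked hypotheses, however silly, leave room for surprise.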