Something felt off about this example and I think I can put my finger on it now.
My model of the world gives the blue-tentacle event probability ~0. So when you ask me to imagine it, and I do, it feels like I’m coming up with a new model to explain it, one that gives that outcome a higher probability than my current model does. This seems to be the root of the apparent contradiction: it looks like I’m violating the invariant. But I don’t think that’s what’s actually happening. Consider this fictional exchange:
EY: Imagine that you have this particular Gaussian model. Now suppose that you find yourself in a situation that is 50 SDs away from the median. How do you explain it?
Me: Well, my hypothesis is that...
EY: Wrong! That scenario is too unlikely; if the model has something to say about it, then it must be wrong and irrational.
Me: No! You asked me to suppose this incredibly unlikely scenario, which is exactly what I did. I didn’t conclude “EY is asking me to consider something that’s too unlikely; ah, he’s trying to trick me, therefore I’m not going to imagine the scenario on the grounds that it’s impossible,” because that conclusion isn’t available from inside the model.
I have limited resources, so I just don’t bother pre-computing the details of my model for scenarios too unlikely to matter. But if such a scenario actually came up in real life, I would be able to fill in the missing details retroactively. That doesn’t mean my model assumes more than 100% total probability, because I’m already reserving a bit of probability mass for unknown unknowns. And I needn’t worry about such scenarios now, because they’re too unlikely and there are too many similarly unlikely ones; I just can’t be meaningfully concerned about them all.
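Just to put a number on how extreme the 50-SD supposition is, here’s a rough back-of-the-envelope sketch in Python (my own illustration, using the standard Mills-ratio tail approximation; the exact figure doesn’t matter, only its absurd smallness):

```python
import math

# Rough tail probability of a standard normal at x sigma, via the
# Mills-ratio approximation P(Z > x) ~ pdf(x) / x for large x.
x = 50.0
log_pdf = -0.5 * x * x - 0.5 * math.log(2.0 * math.pi)  # log of the normal density at x
log10_tail = (log_pdf - math.log(x)) / math.log(10.0)
print(f"P(Z > {x:g} SD) ~ 10^{log10_tail:.0f}")  # about 10^-545
```

At that scale the model’s explicit machinery has nothing useful to pre-compute; any explanation would have to come from outside the Gaussian part of the model.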
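And here’s a toy sketch (again my own framing, with made-up numbers) of what I mean by reserving probability mass for unknown unknowns: the explicitly articulated hypotheses deliberately sum to less than 1, and filling in details retroactively only ever spends mass from that reserved remainder.

```python
# Explicit hypotheses deliberately don't sum to 1; the remainder is a
# catch-all bucket reserved for scenarios I haven't bothered to articulate.
explicit = {
    "mundane explanation A": 0.90,
    "mundane explanation B": 0.099,
}
catch_all = 1.0 - sum(explicit.values())  # ~0.001 held back for surprises


def articulate(catch_all_mass, new_hypotheses):
    """Retroactively spell out hypotheses, paying for them out of the
    reserved catch-all mass so the total stays at 1."""
    spent = sum(new_hypotheses.values())
    assert spent <= catch_all_mass, "can't spend mass that was never reserved"
    return catch_all_mass - spent, new_hypotheses


# The blue-tentacle morning arrives: fill in the missing details now.
catch_all, articulated = articulate(catch_all, {"blue tentacle scenario": 1e-9})

total = sum(explicit.values()) + sum(articulated.values()) + catch_all
print(f"total probability: {total:.12f}")  # still 1.0
```

The point isn’t the numbers, just that retroactive explanation is bookkeeping inside mass that was already set aside, not a violation of the invariant.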