[Question] What epsilon do you subtract from “certainty” in your own probability estimates?

OK, nobody is actually a strict, or even particularly careful, Bayesian reasoner. Still, what probability do you reserve for “my model doesn’t apply, everything I know is wrong”? If you SEE a coin flip come up heads (and examine the coin and perform whatever tests you like), what’s your posterior probability that the coin actually exists and that the flip wasn’t a false memory or a trick of some kind?
Bayesianism was a mistake.
I don’t.
Every probability estimate I make is implicitly contingent on a whole host of things I can’t fully list or don’t bother to specify because it’s not worth the overhead. This is one of them. Somewhere in my head, implicitly or explicitly, is a world model that includes the rest, including assumptions I’m not aware of. I do not know whether the set of assumptions this implies is even finite. I know false memories and hallucinations and tricks and so on exist, but unless I already have a specific reason to expect one, I reason without keeping track of them. When I say P(Heads), I actually mean it as shorthand for P(Heads | all the model assumptions needed for “Heads” to make sense as a concept or event). When I find a reason to think one of my unstated model assumptions is wrong, I’m changing which set of conditional probabilities I’m even thinking about.
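Spelled out, with M as a stand-in label for that whole bundle of unstated assumptions (the symbol is purely illustrative):

```latex
% P(Heads) as actually used: everything is conditioned on the background model M.
P(\text{Heads}) \equiv P(\text{Heads} \mid M)

% What a fully explicit treatment would track instead,
% by the law of total probability over "M holds" vs. "M is wrong":
P(\text{Heads}) = P(\text{Heads} \mid M)\,P(M) + P(\text{Heads} \mid \neg M)\,P(\neg M)
```

Treating P(M) as effectively 1 and dropping the second term is what makes the shorthand workable in practice.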
Over time, as I improve my world model and my understanding of it, I become better able to explicitly identify my underlying assumptions and, when I deem it worth the effort, to question them or question other things in light of them, getting a little closer to a more general underlying probability distribution.
Not enough to make any practical difference to any decision I am going to make. Only when I see the extraordinary evidence required to support an extraordinary hypothesis will such a hypothesis be raised to my attention.
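As a rough numerical sketch of why a tiny reserved epsilon rarely changes anything (the epsilon, the fallback probability, and the payoffs below are all invented purely for illustration):

```python
# Sketch: mixing a tiny "all my model assumptions are wrong" probability into an
# estimate almost never flips an expected-value decision.

def expected_value(p_win, payoff_win, payoff_lose):
    return p_win * payoff_win + (1 - p_win) * payoff_lose

p_given_model = 0.95       # P(win | my model of the situation holds)
epsilon = 1e-6             # probability mass reserved for "everything I know is wrong"
p_if_model_wrong = 0.5     # an assumed, maximally agnostic fallback

# Estimate without and with the reserved epsilon (law of total probability):
p_naive = p_given_model
p_hedged = p_given_model * (1 - epsilon) + p_if_model_wrong * epsilon

ev_naive = expected_value(p_naive, payoff_win=100, payoff_lose=-100)
ev_hedged = expected_value(p_hedged, payoff_win=100, payoff_lose=-100)

print(p_naive, p_hedged)    # 0.95 vs ~0.94999955: identical for practical purposes
print(ev_naive, ev_hedged)  # ~90 vs ~89.9999: whatever choice the first justifies, so does the second
```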