If “Observing nothing carries no information”, then you should not be able to use it to update a belief. I agree.
Any belief must be updated based on new information; I agree. Observing nothing carries no information, so you don’t use it to update a belief.
I would say observing nothing carries the information that the action (sabotage) which your belief predicted would happen did not happen during the observation interval.
Yes, so if you observe no sabotage, then you do update about the existence of a fifth column that would have, with some probability, sabotaged by now (one of infinitely many possibilities). But you don’t update about the existence of a fifth column that doesn’t sabotage, or wouldn’t have sabotaged YET, which are also infinitely many possibilities. Possibilities aren’t probabilities, and you have no probability for what kind of fifth column you’re dealing with, so you can’t do any Bayesian reasoning.

I guess it’s a general failure of Bayesian reasoning. You can’t update beliefs held with confidence 1, you can’t update beliefs held with confidence 0, and you can’t update undefined beliefs. So, for example, you can’t Bayesian reason about most of the important things in the universe, like whether the sun will rise tomorrow, because you have no idea what causes it depends on. You have a pretty good model of what might cause the sun to rise tomorrow, but no idea, complete uncertainty (not certainty of 0, nor certainty of 1, nor 50/50 uncertainty, just completely undefined certainty) about what would make the sun NOT rise tomorrow, so you can’t (rationally) Bayesian reason about it. You can bet on it, but you can’t rationally hold a belief about it.
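To make the update being conceded here concrete, this is the textbook calculation, with $H$, $\neg S$, and $p$ as illustrative placeholders rather than anything from the original post: let $H$ be “a fifth column exists that would sabotage with probability $p$ during the observation interval” and $\neg S$ be “no sabotage was observed”. Since a nonexistent fifth column never sabotages, $P(\neg S \mid \neg H) = 1$, and Bayes’ rule gives

$$P(H \mid \neg S) \;=\; \frac{(1-p)\,P(H)}{(1-p)\,P(H) + \bigl(1 - P(H)\bigr)}.$$

For $p > 0$ this is strictly less than $P(H)$, a downward update; for the “wouldn’t have sabotaged yet” variety, $p = 0$ over this interval, the posterior equals the prior, and no update happens, which is exactly the asymmetry described above.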
No, you cannot. For things you have no idea about, there is no way to (rationally) estimate their probabilities.
No. There are many, many things that have “priors” of 1, 0, or undefined, and these are undefined. You can’t know anything about their “distribution” because they aren’t distributions: everything is either true or false, 1 or 0. Probabilities only make sense when talking about human (or, more generally, agent) expectations/uncertainties.
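As a quick illustration of why priors of exactly 1 or 0 cannot be updated (this is just the standard update rule, not anything from the thread): if $P(H) = 0$, then for any evidence $E$ with $P(E) > 0$,

$$P(H \mid E) = \frac{P(H \cap E)}{P(E)} \le \frac{P(H)}{P(E)} = 0,$$

and if $P(H) = 1$, the same argument applied to $\neg H$ forces $P(H \mid E) = 1$. An undefined prior gives the update rule nothing to operate on at all.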
That’s not what I mean, and it’s not even what I wrote. I’m not saying “completely”. I said you can’t Bayesian reason about it. I mean you are completely irrational when you even try to Bayesian reason about undefined, 1, or 0 things. What would trying to Bayesian reason about an undefined thing even look like to you?
Do you admit that you have no idea (probability/certainty/confidence-wise) about what might cause the sun not to rise tomorrow? Like, is that a good example to you of a completely undefined thing, for which there is no “prior”? It’s one of the best to me, because the sun rising tomorrow is such a cornerstone example for introducing Bayesian reasoning.

But to me it’s a perfect example of why Bayesianism is utterly insane. You’re not getting more certain that anything will happen again just because something like it happened before. You can never prove a hypothesis/theory/belief right (because you can’t prove a negative); we can only disprove hypotheses/theories/beliefs. So, with the sun, we have no idea what might cause it to not rise tomorrow, so we can’t Bayesian ourselves into any sort of “confidence” or “certainty” or “probability” about it. A Bayesian alive but isolated from things that have died would believe itself immortal. This is not rational. Rationality is just failing to disprove the null hypothesis, not believing ever more strongly in the null just because disconfirming evidence hasn’t yet been encountered.
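The calculation presumably being objected to here is Laplace’s rule of succession: with a uniform prior over the unknown chance of the event and $n$ occurrences observed in a row, the Bayesian assigns

$$P(\text{sunrise on day } n+1 \mid n \text{ sunrises so far}) = \frac{n+1}{n+2},$$

which climbs toward 1 purely because the event has kept happening. Applied to survival, an agent that has lived $n$ days without ever encountering death assigns the same $(n+1)/(n+2)$ to surviving the next day, which is the “believes itself immortal” behavior being criticized.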
Back to the blog post: there are cases in which absence of evidence is evidence of absence, but this isn’t one of them. If you look for something expected by a theory/hypothesis/belief, and you fail to find what it predicts, then that is evidence against it. But “the 5th column exists” doesn’t predict anything different from what “the 5th column doesn’t exist” predicts, so “the 5th column hasn’t attacked (yet)” isn’t evidence against it.
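One way to state the condition this last point relies on (again in illustrative symbols, with $H$ the hypothesis and $E$ the observation it predicts) is the odds form of Bayes’ rule:

$$\frac{P(H \mid \neg E)}{P(\neg H \mid \neg E)} = \frac{P(\neg E \mid H)}{P(\neg E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}.$$

Failing to find $E$ counts against $H$ exactly when $P(\neg E \mid H) < P(\neg E \mid \neg H)$, i.e. when $H$ predicts $E$ more strongly than $\neg H$ does; if both hypotheses assign the same probability to what is observed, the likelihood ratio is 1 and the posterior odds equal the prior odds.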