Suppose nature is showing you true sentences one at a time. Model them as drawn randomly from a fixed distribution μ(S), but enforcing propositional consistency.
Does this mean nature has to in fact be showing me sentences sampled from this fixed distribution, or am I just pretending that that’s what it’s doing when I update my prior?
Does this work when sentences are shown to me in an adversarial order?
You’re pretending that’s what nature is doing when you update your prior. It still works when sentences are shown to you in an adversarial order, but there’s a weird aspect: the prior expects the sentences to go back to being drawn from the fixed distribution afterwards. It never reasons, “ah, I’m seeing a bunch of blue blocks selectively revealed, even though I think there are a bunch of red blocks, so the next block revealed to me will probably be blue.” Instead, it just sticks with its prior on red and blue blocks.
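A minimal sketch of the phenomenon, under an assumed toy setup not taken from the original: ten blocks, each independently blue with prior probability 0.3, where nature’s fixed distribution picks a uniformly random block and states its true color, but an adversary actually reveals only blue blocks. Because block colors are independent, conditioning on the revealed sentences being true moves the belief about the revealed blocks to certainty while leaving every unrevealed block at its prior.

```python
PRIOR_BLUE = 0.3  # prior probability that any given block is blue
N_BLOCKS = 10

def posterior_blue(block, observed_blue):
    """P(block is blue | the revealed sentences are true).

    Colors are independent under the model, so a sentence about
    block i says nothing about block j != i."""
    if block in observed_blue:
        return 1.0  # "block i is blue" was shown, and shown sentences are true
    return PRIOR_BLUE  # untouched by the selectively revealed evidence

def predict_next_says_blue(observed_blue):
    """P(the next sentence sampled from the fixed distribution asserts 'blue'),
    pretending the next sentence is a fresh draw: pick a uniformly random
    block and report its true color."""
    return sum(posterior_blue(b, observed_blue) for b in range(N_BLOCKS)) / N_BLOCKS

# Adversary reveals blocks 0..4, all blue.
observed = set(range(5))

# Belief about an unrevealed block never moves off the prior, no matter
# how many blue blocks are selectively shown:
print(posterior_blue(9, observed))       # 0.3

# The forecast for the next sentence rises only because already-known-blue
# blocks might be re-sampled, not because the model has inferred that the
# revealer is selecting blues:
print(predict_next_says_blue(observed))  # (5*1.0 + 5*0.3)/10 = 0.65
```

An order-aware reasoner would instead notice the selection pattern and predict the next revealed block is blue with high probability; the pretend-IID prior has no mechanism for that inference.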