It’s actually not too hard to demonstrate things about the limit for Abram’s original proposal, unless there’s another one that’s original-er than the one I’m thinking of. It converges to the distribution of outcomes of a certain incomputable random process, which uses a halting oracle to tell when certain sets of statements are contradictory.
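For concreteness, here is a rough sketch of the kind of process I have in mind, under my reading of the proposal: consider candidate sentences in a random order, keep each one only if the consistency oracle says it doesn’t contradict what’s already been accepted, and take the probability of a sentence to be the chance it ends up in the resulting theory. Everything here is a toy stand-in: `is_consistent` plays the role of the halting oracle, and the finite sentence list plus Monte Carlo estimate are simplifications of the real (infinite, incomputable) process.

```python
import random

def sample_theory(sentences, is_consistent, rng=None):
    """Sample one run of the random theory-generation process.

    `sentences` is a finite list of candidate sentences (opaque strings here),
    `is_consistent` is a stand-in for the consistency/halting oracle: it takes
    a set of sentences and reports whether they are jointly consistent.
    Sentences are considered in a random order and kept only if adding them
    keeps the accepted set consistent.
    """
    rng = rng or random.Random()
    order = list(sentences)
    rng.shuffle(order)
    accepted = set()
    for s in order:
        if is_consistent(accepted | {s}):
            accepted.add(s)
    return accepted

def estimate_probability(target, sentences, is_consistent, trials=1000):
    """Estimate P(target) as the fraction of sampled theories containing it."""
    rng = random.Random(42)
    hits = sum(target in sample_theory(sentences, is_consistent, rng)
               for _ in range(trials))
    return hits / trials

# Toy usage: three propositional atoms where "A" and "not A" conflict.
sentences = ["A", "not A", "B"]
def toy_consistent(stmts):
    return not ({"A", "not A"} <= stmts)
print(estimate_probability("A", sentences, toy_consistent))  # roughly 0.5
```

In the actual proposal the sentence stream is infinite and the consistency check is incomputable, so a sketch like this is only runnable on toy propositional examples; the point is just to show where the halting oracle enters.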
You are correct that it doesn’t converge to a limit of assigning 1 to true statements and 0 to false statements. That is of course impossible, so we can’t demand it. But it seems like we should not have to accept divergence: believing something with high probability, then disbelieving it with high probability, then believing it again, and so on. Or perhaps we should?
Yeah, updating probability distributions over models is believed to be good. The problem is that sometimes our probability distributions over models are wrong, as demonstrated by bad behavior when we update on certain kinds of information.
The kind of data that would make you want to zero out non-90% models is when you observe a bunch of random data points and 90% of them are true, but there are no other patterns you can detect.
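As a toy illustration of that kind of update (with made-up candidate models, each saying “every data point is independently true with probability p”), the posterior piles onto the 90% model and pushes the others toward zero:

```python
import math

def posterior_over_models(observations, model_probs, prior=None):
    """Bayesian update over candidate i.i.d. models.

    `observations` is a list of booleans; each entry of `model_probs` is a
    model claiming every data point is true with that probability.
    Returns the posterior distribution over the models.
    """
    if prior is None:
        prior = [1.0 / len(model_probs)] * len(model_probs)
    log_post = []
    for p, pr in zip(model_probs, prior):
        log_lik = sum(math.log(p) if obs else math.log(1 - p)
                      for obs in observations)
        log_post.append(math.log(pr) + log_lik)
    m = max(log_post)                      # subtract max for numerical stability
    unnorm = [math.exp(lp - m) for lp in log_post]
    z = sum(unnorm)
    return [u / z for u in unnorm]

# Toy data: 90% of points are true, with no other structure.
data = ([True] * 9 + [False]) * 20          # 200 points, 90% true
models = [0.5, 0.7, 0.9, 0.99]              # hypothetical candidate models
print(posterior_over_models(data, models))  # mass concentrates on the 0.9 model
```

With 200 such points the 0.9 model already dominates the posterior by dozens of orders of magnitude in likelihood, which is the sense in which this data effectively zeroes out the non-90% models.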
The other problem is that updates can be hard to compute.