Abram Demski’s system does exactly this if you take his probability distribution and update on the statements “3 is odd”, “5 is odd”, etc. in a Bayesian manner. That’s because his distribution assigns a reasonable probability to statements like “all odd numbers are ‘odd’”, so updating on evidence gives you reasonable results.
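For concreteness, here is a tiny sketch of the kind of updating being described. Everything in it is a toy stand-in of mine, not Demski’s actual construction: a handful of named hypotheses, a crude 2^(-length) prior over them, and ordinary Bayesian conditioning on the observed instances.

```python
from fractions import Fraction

# Toy illustration only: a few hypotheses about which odd numbers satisfy
# 'odd', weighted roughly like 2^(-description length), then updated by
# ordinary Bayesian conditioning on the observations "position 1 is 'odd'",
# "position 2 is 'odd'", and so on.

def hypotheses():
    yield "all", lambda k: True                      # every odd number is 'odd'
    for n in (1, 2, 3, 100):                         # 'odd' fails exactly at position n
        yield f"exception@{n}", (lambda m: lambda k: k != m)(n)

def prior_weight(name):
    return Fraction(1, 2 ** len(name))               # crude 2^(-length) prior

def posterior(observed_upto):
    """Condition on: positions 1..observed_upto were all observed to be 'odd'."""
    surviving = {name: prior_weight(name)
                 for name, pred in hypotheses()
                 if all(pred(k) for k in range(1, observed_upto + 1))}
    total = sum(surviving.values())
    return {name: w / total for name, w in surviving.items()}

print(posterior(0))   # just the prior, renormalized
print(posterior(5))   # exceptions at positions 1-3 have been ruled out
```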
His distribution also assigns a “reasonable probability” to statements like “the first 3^^^3 odd numbers are ‘odd’, then one isn’t, then they go back to being ‘odd’.” In the low computing power limit, these are assigned very similar probabilities. Thus, if the first 3^^^3 odd numbers are ‘odd’, it’s kind of a toss-up what the next one will be.
Do you disagree? If so, could you use math to explain why?
What is “the low computing power limit”? If our theories behave badly when we don’t have much computing power, that’s unsurprising. Do you mean “the large computing power limit”?
I think probability ( “the first 3^^^3 odd numbers are ‘odd’, then one isn’t, then they go back to being ‘odd’.” ) / probability (“all odd numbers are ‘odd’”) is approximately 2^(-length of “3^^^3”) in Abram’s system, because that’s supposed to be the ratio of the probabilities of them appearing in the random process. I don’t see anything about the random process that would make the first one more likely to be contradicted before being stated than the second.
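To make that length penalty concrete, here is the arithmetic under a crude stand-in encoding of my own (a statement is emitted with probability proportional to 2^(-number of characters)); the real encoding differs, but the shape of the ratio is the same.

```python
# Crude stand-in encoding: the probability of a statement being emitted is
# taken to be proportional to 2^(-number of characters). The exception
# statement then pays only for the extra characters needed to state the
# exception, not for the size of 3^^^3 itself.

all_odd   = "all odd numbers are 'odd'"
exception = ("the first 3^^^3 odd numbers are 'odd', "
             "then one isn't, then they go back to being 'odd'")

extra = len(exception) - len(all_odd)
print(f"extra length: {extra} characters")
print(f"P(exception) / P(all odd) ~ 2^-{extra} = {2.0 ** -extra:.3g}")
```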
Nope, I mean the low computing power limit. The key point is that as computing power becomes lower, Abram’s process allows more and more inconsistent models.
You say this is supposed to be the ratio of the probabilities of them appearing in the random process. But the probability of a statement appearing first in the model-generating process is not equal to the probability that it’s modeled by the end.
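Here is a toy simulation of the disagreement, with heavy hand-waving: the two sentences, the 2^(-length) weighting, and the “bounded checker” that only notices the contradiction when its budget is large enough are all my own stand-ins, not the actual model-generating process. The point it illustrates is the one above: with a weak consistency check, both sentences can sit in the final model, so the probability of appearing first need not match the probability of being modeled by the end.

```python
import random

# Toy process: repeatedly sample a sentence with probability proportional to
# 2^(-length); keep it only if a *bounded* consistency checker fails to find a
# contradiction with the sentences already kept. The checker only notices that
# "all odd numbers are 'odd'" and "odd number #3^^^3 is not 'odd'" clash when
# its budget is large enough to (pretend to) evaluate 3^^^3.

SENTENCES = {
    "all odd numbers are 'odd'": ("all_odd",),
    "odd number #3^^^3 is not 'odd'": ("exception", "3^^^3"),
}

def weight(text):
    return 2.0 ** (-len(text))                 # crude 2^(-length) sampling weight

def contradicts(a, b, budget):
    if {a[0], b[0]} == {"all_odd", "exception"}:
        return budget >= 10 ** 6               # stand-in for "enough computing power"
    return False

def run_process(budget, steps=500, seed=0):
    rng = random.Random(seed)
    texts = list(SENTENCES)
    weights = [weight(t) for t in texts]
    model = []
    for _ in range(steps):
        claim = SENTENCES[rng.choices(texts, weights)[0]]
        blocked = any(contradicts(claim, kept, budget) for kept in model)
        if not blocked and claim not in model:
            model.append(claim)
    return model

print("high budget:", run_process(budget=10 ** 9))   # only one of the pair survives
print("low budget: ", run_process(budget=10))        # both coexist in the 'model'
```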
So does every process. Allowing more and more inconsistent models as computing power becomes lower isn’t specific to Abram’s.
True. But for two very strong statements that contradict each other, there’s a close relationship between the probability of appearing first and the probability of being modeled by the end.
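One way to make that relationship concrete, under a deliberately crude simplification of mine (not spelled out above): ignore every other sentence and assume each of the two statements, once stated, immediately rules out the other. Then whichever is stated first is the one modeled at the end, so P(A modeled) / P(B modeled) = P(A stated first) / P(B stated first) = p_A / p_B, the appearance ratio. A quick numerical check:

```python
import random

# Simulate the 'race' between two mutually contradictory statements A and B.
# Per draw, A is stated with probability p_a and B with probability p_b;
# whichever is stated first wins, i.e. is the one modeled at the end under the
# simplification described above.

def race(p_a, p_b, trials=20_000, seed=0):
    rng = random.Random(seed)
    a_first = 0
    for _ in range(trials):
        while True:
            r = rng.random()
            if r < p_a:
                a_first += 1
                break
            if r < p_a + p_b:
                break
    return a_first / trials

p_a, p_b = 0.010, 0.00031          # e.g. 2^(-length)-style weights for A and B
print(race(p_a, p_b))              # close to p_a / (p_a + p_b), about 0.97
```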