There is always only one correct answer for which outcome from the sample space is actually realised in this particular iteration of the probability experiment.
This doesn’t screw up our update procedure, because probability updates represent changes in our knowledge state about which iteration of the probability experiment this one could be, not changes in what has actually happened in any particular iteration.
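To make that concrete, here is a minimal sketch (the two-coin setup and the "at least one heads" evidence are my own illustration, not from the thread) of updating as restricting attention to the iterations consistent with our evidence:

```python
import random

random.seed(0)
# Run many iterations of the experiment "toss two fair coins" in parallel.
iterations = [(random.random() < 0.5, random.random() < 0.5)
              for _ in range(100_000)]

# Learning "at least one coin is heads" changes nothing about what happened
# in any single iteration; it only narrows down which iterations could be
# the one we are in.
consistent = [pair for pair in iterations if pair[0] or pair[1]]
p_both_heads = sum(a and b for a, b in consistent) / len(consistent)
print(round(p_both_heads, 3))  # ≈ 1/3, the familiar conditional probability
```

The update is just a restriction of the set of candidate iterations, which is the point being made: the realised outcome in each iteration stays fixed throughout.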
The point is that if you consider all iterations in parallel, you can realize all possible outcomes of the sample space, and assign a probability to each outcome occurring for a Bayesian superintelligence. In a consistent proof system, by contrast, not all possible outcomes/statements can be proved, no matter how many iterations are done; if you could do this, you would have proved the logic/theory inconsistent. That is the problem: for logical uncertainty, there is only 1 possible outcome no matter the amount of iterations spent searching for a proof/disproof of a statement (for consistent logics; if not, the logic can prove everything).
This is what makes logical uncertainty non-Bayesian, and it is why Bayesian reasoning assumes logical omniscience, so that this pathological outcome doesn’t happen. But as a consequence, you have basically trivialized learning/intelligence.
The point is that if you consider all iterations in parallel, you can realize all possible outcomes of the sample space
Likewise if I consider every digit of pi in parallel, some of them are odd and some of them are even.
and assign a probability to each outcome occurring for a Bayesian superintelligence
And likewise I can assign probabilities based on how often a digit of pi unknown to me is even or odd. I’m not sure what a superintelligence has to do with it.
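For instance, a quick sketch of reading off that frequency from the first 50 decimal digits of pi (the digit string is hardcoded here for illustration):

```python
# First 50 decimal digits of pi, hardcoded for illustration.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

# If I don't know which digit I'm looking at, the observed frequency of even
# digits is a perfectly good probability assignment, superintelligence or not.
p_even = sum(int(d) % 2 == 0 for d in PI_DIGITS) / len(PI_DIGITS)
print(p_even)  # 0.4 for these 50 digits
```

Over longer prefixes the frequency is empirically close to 0.5 (and would converge to it if pi is normal, which is conjectured but unproven).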
while in a consistent proof system, not all possible outcomes/statements can be proved
The same applies to a coin toss. I can’t prove both “This particular coin toss is Heads” and “This particular coin toss is Tails”, any more than I can simultaneously prove both “This particular digit of pi is odd” and “This particular digit of pi is even”.
because for logical uncertainty, there is only 1 possible outcome no matter the amount of iterations
You just need to define your probability experiment more broadly, talking not about a particular digit of pi but about a random one, the same way we do for a toss of a coin.
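Here is a sketch of that contrast (the digit string and the “greater than 4” evidence are my own illustrative choices): a particular digit has one fixed answer, while a random digit gives a well-defined experiment that supports ordinary Bayesian conditioning:

```python
# First 50 decimal digits of pi, hardcoded for illustration.
PI_DIGITS = "14159265358979323846264338327950288419716939937510"

# A *particular* digit: the answer is fixed. The 10th decimal digit is 5,
# so "the 10th digit is even" is simply false; no non-trivial probability.
assert int(PI_DIGITS[9]) == 5

# A *random* digit: a genuine probability experiment, like a coin toss.
digits = [int(d) for d in PI_DIGITS]
p_even = sum(d % 2 == 0 for d in digits) / len(digits)

# Conditioning works as usual: update on the evidence "the digit is > 4".
gt4 = [d for d in digits if d > 4]
p_even_given_gt4 = sum(d % 2 == 0 for d in gt4) / len(gt4)
print(p_even, round(p_even_given_gt4, 3))  # 0.4 and 9/26 ≈ 0.346
```

Defined this way, the experiment has a real sample space with multiple realisable outcomes, exactly like the coin toss, even though every individual digit is fixed.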
Basically, because it screws with update procedures, since, formally speaking, only 1 answer is correct. quetzal rainbow pointed this out:
https://www.lesswrong.com/posts/H229aGt8nMFQsxJMq/what-s-the-deal-with-logical-uncertainty#yHC8EuR76FE3tnuk6