Yes, these are certainly good arguments, and valid as far as they go. But I’m not sure they entirely solve the problem.
For concreteness, suppose what the aliens sold us is not a black box, but a sheet of paper with a thousand-bit binary number written on it, claimed to be the first thousand bits of Chaitin’s omega (which they claim to have generated using a halting oracle, built on a hack they discovered for exploiting the infinite computation underlying physics). We quickly verified the first few bits, then over the next couple of generations verified a few more bits at a decreasing rate.
Omega is algorithmically random, so as far as Solomonoff induction is concerned, no matter how many bits are verified, the probability of the next bit being correct is 50%. On the other hand, we humans assign a higher probability than that, based on a nonzero credence that the whole sequence is correct; and this credence becomes higher the more bits are verified.
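To make the human side of this concrete, here is a minimal sketch under a toy assumption of my own (the two-hypothesis model and the function `credence_next_bit_correct` are illustrative, not anything built into SI): treat the sheet as either the genuine omega prefix, with some small prior credence, or as junk whose bits are each independently correct with probability 1/2, and update on the bits verified so far.

```python
def credence_next_bit_correct(p_genuine: float, k_verified: int) -> float:
    """Toy two-hypothesis model (an illustrative assumption, not from SI).

    H1: the sheet really is the omega prefix (prior p_genuine); every bit is correct.
    H2: the sheet is junk (prior 1 - p_genuine); each bit is correct with prob 1/2.

    Given that the first k_verified bits all checked out, return the probability
    that bit k_verified + 1 is also correct.
    """
    # Likelihood of seeing k correct bits under each hypothesis.
    like_h1 = 1.0
    like_h2 = 0.5 ** k_verified

    # Bayes: posterior weight on "the sheet is genuine".
    post_genuine = (p_genuine * like_h1) / (
        p_genuine * like_h1 + (1 - p_genuine) * like_h2
    )

    # The next bit is certainly correct under H1, a coin flip under H2.
    return post_genuine * 1.0 + (1 - post_genuine) * 0.5


if __name__ == "__main__":
    # Even with a skeptical prior, a handful of verified bits pushes the
    # credence for the next bit well above 50%.
    for k in (0, 1, 5, 10, 20):
        print(k, round(credence_next_bit_correct(p_genuine=0.01, k_verified=k), 4))
```

Even a 1% prior on the sheet being genuine puts the credence for the next bit above 95% after ten verified bits, whereas the 50% figure attributed to SI above never moves.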
It is true that computable humans cannot consistently beat SI in the long run. But does that not just mean in this case that we cannot verify indefinitely many bits of omega? For all the other bits written on that page, there is a fact of the matter regarding whether they are right or wrong. Supposing they are indeed right, do we not still end up holding a more accurate belief than SI, even if we have difficulty translating that into winning bets?
I prefer to think that SI doesn’t even have “beliefs” about the external universe, only beliefs about future observations. It just kinda does its own thing, and ends up outperforming humans in some games even though humans may have a richer structure of “beliefs”.
> I prefer to think that SI doesn’t even have “beliefs” about the external universe, only beliefs about future observations.
A program can use logical theories to reason about abstract ideas that are not at all limited to finite, program-like things. In this sense, SI may well favor programs that have beliefs about the world, including arbitrarily abstract beliefs, such as beliefs about black-box halting oracles, and not just beliefs about observations.
Fair enough. It seems to me that SI has what it is most reasonable to call beliefs about the external universe, but perhaps this is just a disagreement about intuition and semantics, not about fact; it doesn’t jump out at me that there is a practical way to turn it into a disagreement about predictions.