I prefer to think that SI doesn’t even have “beliefs” about the external universe, only beliefs about future observations. It just kinda does its own thing, and ends up outperforming humans in some games even though humans may have a richer structure of “beliefs”.
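For concreteness, those "beliefs about future observations" take the standard Solomonoff form (writing $U$ for the reference universal prefix machine and $|p|$ for the length of program $p$):

$$
M(x) \;=\; \sum_{p \,:\, U(p) \text{ outputs a string beginning with } x} 2^{-|p|},
\qquad
M(b \mid x) \;=\; \frac{M(xb)}{M(x)}.
$$

Everything SI outputs is read off from these conditional probabilities over the next observation, which is part of why it seems natural to me to locate its "beliefs" there rather than in any model of the world behind the observations.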
A program can use logical theories that reason about abstract ideas not at all limited to finite program-like things. In this sense, SI may well favor programs that have beliefs about the world, including arbitrarily abstract beliefs, like beliefs about black-box halting oracles, and not just beliefs about observations.
Fair enough. It seems to me that SI has things it is most reasonable to call beliefs about the external universe, but perhaps this is just a disagreement about intuition and semantics, not about fact; it doesn’t jump out at me that there is a practical way to turn it into a disagreement about predictions.