Yeah, I think that’s also a correct way of looking at it. However, I also think “hypotheses as reasoning methods” is a bit more intuitive.
When trying to predict what someone will say, it is hard to think “okay, what are the simplest models of the entire universe that have had decent predictive performance so far, and what do they predict now?”. Easier is “okay, what are the simplest ways to make predictions that have had decent predictive performance so far, and what do they predict now?”. (One such way to reason is with a model of the entire universe, so we don’t lose any generality this way.)
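To make that concrete, here is a toy sketch of that kind of mixture in Python; the two prediction methods and their description lengths are made up purely for illustration, so treat it as a cartoon of the idea rather than actual Solomonoff induction.

```python
# Toy sketch of "hypotheses as reasoning methods": a Solomonoff-style mixture
# over prediction methods, each weighted by a simplicity prior (2^-description
# length) times its predictive track record so far. The two methods and their
# description lengths below are invented for illustration only.

def mixture_predict(methods, history):
    """methods: list of (description_length_in_bits, predict_fn) pairs, where
    predict_fn(history) returns a dict mapping next symbol -> probability."""
    scores = {}
    for length_bits, predict_fn in methods:
        weight = 2.0 ** -length_bits                      # simplicity prior
        for t, observed in enumerate(history):            # track record so far
            weight *= predict_fn(history[:t]).get(observed, 0.0)
        for symbol, prob in predict_fn(history).items():  # weighted vote on the next symbol
            scores[symbol] = scores.get(symbol, 0.0) + weight * prob
    total = sum(scores.values())
    return {symbol: p / total for symbol, p in scores.items()}

# "Fair coin" is simpler; "biased towards 1" is more complex but fits the data better.
fair_coin = (2, lambda h: {0: 0.5, 1: 0.5})
biased_to_1 = (6, lambda h: {0: 0.1, 1: 0.9})

print(mixture_predict([fair_coin, biased_to_1], [1, 1, 1, 1]))
# After a run of 1s, the mixture starts shifting towards the better-performing method.
```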
For example, if someone else is predicting things better than me, I should try to understand why. And you can vaguely understand this process in terms of Solomonoff induction. For example, it gives you a precise way to reason about whether you should copy the reasoning of people who win the lottery.
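For the lottery case, that reasoning can be turned into a small back-of-the-envelope Bayes calculation; every number below is an illustrative assumption, not a real estimate.

```python
# Back-of-the-envelope Bayes for "should I copy the reasoning of a lottery
# winner?". All numbers here are assumptions made up for illustration.

lottery_odds = 1e-7          # assumed: 1-in-10-million chance of winning by luck
prior_method_works = 1e-12   # assumed: prior that their reasoning genuinely picks
                             # winners (tiny, because such a hypothesis is complex)

p_win_given_works = 1.0      # likelihood of the observed win under each hypothesis
p_win_given_luck = lottery_odds

posterior_odds = (prior_method_works * p_win_given_works) / (
    (1 - prior_method_works) * p_win_given_luck
)
print(f"{posterior_odds:.2e}")   # ~1e-05
# One win multiplies the odds by ten million, but that still doesn't overcome
# the prior, so copying the winner's reasoning remains a bad bet.
```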
Paul Christiano speculated that the universal prior is in fact mostly just intelligences doing reasoning. Making an intelligence is simple, after all: set up a simple cellular automaton that tends to develop lifeforms, wait 3^^^^3 years, and then look around. (See What does the universal prior actually look like? or the exposition at The Solomonoff Prior is Malign.)
SIs don’t engage in a wide variety of types of reasoning; it’s all variations on the same theme.
SI is limited compared to humans. It can’t include itself in a model, it can’t contemplate a non-Turing-computable world, and in many ways it’s limited to instrumentalism, to predicting the next observation. A human can state “suppose the world is non-computable”; how can that be expressed as a programme? Humans, despite being finite, can do all of those things. An SI can test an infinite number of (instrumental) hypotheses, but they are all of the same type. It’s important not to confuse “infinite” with “every”: the set of multiples of 23 is infinite, but it does not contain every number.
SI isn’t theoretically useful as a way of understanding human thought. Humans can’t brute-force search every possible hypothesis, and must instead be doing something more sophisticated to come up with good hypotheses.
The same way a human can? GPT-4 can state “suppose the world is non-computable”, for example.
But we are talking about SI. An SI isn’t making English statements. What is true of a GPT is not necessarily true of an SI.
The instructions in a programme executed by an SI have semantics related to programme operations, but not to the outside world, as is the case with all machine code. Machine code instructions do things like “add 1 to register A”. You would have to look at thousands or millions of such low-level instructions to infer what kind of high-level maths (vector spaces, or non-Euclidean geometry) the programme is executing.
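To illustrate what I mean, here is a toy in Python (my own contrivance; the tiny “register machine” and its instruction names are invented for the example, not how an actual SI represents anything): the same computation written with obvious high-level semantics, and then as a flat stream of register-style primitives where the “vector space” meaning has to be reverse-engineered.

```python
# The same computation at two levels. At the high level it is obviously an
# inner product in a vector space; as a stream of primitive register
# operations, that meaning is only recoverable from the pattern of steps.

def dot(u, v):                       # high-level semantics: inner product
    return sum(a * b for a, b in zip(u, v))

def run(trace, mem):
    """Execute a flat list of primitive instructions on a register file."""
    for op, dst, a, b in trace:
        if op == "load": mem[dst] = mem[a][b]        # dst <- a[b]
        if op == "mul":  mem[dst] = mem[a] * mem[b]  # dst <- a * b
        if op == "add":  mem[dst] = mem[a] + mem[b]  # dst <- a + b
    return mem

u, v = [1, 2, 3], [4, 5, 6]
trace = []
for i in range(3):                   # "compile" the dot product to primitives
    trace += [("load", "r1", "u", i), ("load", "r2", "v", i),
              ("mul", "r3", "r1", "r2"), ("add", "acc", "acc", "r3")]

mem = run(trace, {"u": u, "v": v, "acc": 0})
print(dot(u, v), mem["acc"])         # 32 32: same answer, very different "semantics"
```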
And it’s hard to see how you could know with certainty whether an SI is describing an uncomputable or random universe. If it is using limited-precision floating-point calculations, is that an approximate representation of unlimited-precision real-number calculations taking place in the territory? Or should it be taken literally? If it uses pseudo-random number generation, does it believe that there is real indeterminism in the territory? Human scientists are also limited in the kind of maths they can use, but, again, they can communicate verbally what it is supposed to mean, how exact it is, and so on.
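A quick illustration of both ambiguities (again just a toy, using nothing beyond the Python standard library): finite-precision floats and seeded pseudo-randomness are both deterministic, computable behaviours, and nothing in the program itself says whether they stand for exact reals or genuine indeterminism in the territory.

```python
import random

# Limited-precision floats: literal binary fractions, or an approximation of reals?
print(0.1 + 0.2 == 0.3)            # False
print(0.1 + 0.2)                   # 0.30000000000000004

# Pseudo-randomness: looks random, but the same seed replays the same "world".
random.seed(42)
first = [random.random() for _ in range(3)]
random.seed(42)
second = [random.random() for _ in range(3)]
print(first == second)             # True: deterministic underneath
```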