So: Occam’s razor—the foundation of science—is also needed.
I was referring to computational issues, not whether a complexity prior is reasonable or not. It is possible that making inferences about the environment requires you to solve hard computational problems, and that these problems become easier after additional interaction with the environment. I don’t see how Occam’s razor suggests that our world doesn’t look like that (in fact, I currently think that our world does look like that, although my confidence in this is not very high).
It is possible that making inferences about the environment requires you to solve hard computational problems, and that these problems become easier after additional interaction with the environment.
Well, of course—but that’s learning—which Solomonoff induction models just fine (it is a learning theory).
Or maybe you are suggesting that organisms modify their environment to make their problems simpler. That is perfectly possible—but I don’t really see how it is relevant.
You apparently didn’t disagree with Solomonoff induction allowing the Turing test to be passed. So: what exactly is your beef with its centrality and significance?
It’s possible I misunderstood your original comment. Let me play it back to you in my own words to make sure we’re on the same page.
My understanding was that you did not think it would be necessary for an AGI to interact with its environment in order to achieve superhuman intelligence (or perhaps that a limited initial interaction with its environment would be sufficient, after which it could just go off and think). Is that correct, or not?
P.S. I think that I also disagree with the Solomonoff induction → Turing test proposition; but I’d rather delay discussing that point because I think it is contingent on the others.
My understanding was that you did not think it would be necessary for an AGI to interact with its environment in order to achieve superhuman intelligence (or perhaps that a limited initial interaction with its environment would be sufficient, after which it could just go off and think). Is that correct, or not?
Pretty much. Virtual environments are fine: they contain lots of complexity (chaos theory) and offer easy access to plenty of interesting and difficult problems (the game of Go, etc.). Virtual worlds permit the development of intelligent agents just as the “real” world does. A good job too, since we have no terribly good way of telling whether our world exists under simulation or not.
The Solomonoff induction → Turing test proposition is detailed here.
Sorry for the delayed response; it took me a while to get through the article and the corresponding Hutter paper. Do you know of any sources that present the argument for why the Kolmogorov complexity of the universe should be relatively low (i.e. not proportional to the number of atoms), or else why Solomonoff induction would perform well even if that complexity is high? Both claims seem intuitively true to me, but I feel uneasy accepting them as fact without a solid argument.
The Kolmogorov complexity of the universe is a totally unknown quantity—AFAIK. Yudkowsky suggests a figure of 500 bits here—but there’s not much in the way of supporting argument.
Solomonoff induction doesn’t depend on the Kolmogorov complexity of the universe being low. The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.
Instead, consider that Solomonoff induction is a formalisation of Occam’s razor—which is a well-established empirical principle.
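Roughly, in the usual prefix-machine formulation (this is a sketch of the standard definition, nothing specific to this discussion), the prior it assigns to a string x is

$$M(x) \;=\; \sum_{p \,:\, U(p)\ \text{outputs a string beginning with}\ x} 2^{-\ell(p)},$$

where U is a universal prefix Turing machine and \ell(p) is the length of program p in bits. Shorter programs get exponentially more weight, which is the formal sense in which it encodes Occam’s razor.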
I don’t understand. I thought the point of Solomonoff induction is that it’s within an additive constant of being optimal, where the constant depends on the Kolmogorov complexity of the sequence being predicted.
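For reference, the bound I have in mind is (roughly, in the form given by Solomonoff and Hutter, as I understand it): for any computable environment \mu generating the bits,

$$\sum_{t=1}^{\infty} \mathbb{E}_\mu\!\left[\big(M(x_t{=}1 \mid x_{<t}) - \mu(x_t{=}1 \mid x_{<t})\big)^2\right] \;\le\; \frac{\ln 2}{2}\,K(\mu),$$

so the total expected prediction error is finite, with a constant governed by the Kolmogorov complexity of whatever process generates the sequence.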
Are you thinking of applying Solomonoff induction to the whole universe?!?
If so, that would be a very strange thing to try and do.
Normally you apply Solomonoff induction to some kind of sensory input stream (or a preprocessed abstraction from that stream).
Sure, but an AGI will presumably eventually observe a large portion of the universe (or at least our light cone), so the K-complexity of its input stream is on par with the K-complexity of the universe, right?
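To make the “apply it to a sensory input stream” framing above concrete, here is a toy, purely illustrative sketch: a Bayesian mixture over a tiny hand-picked hypothesis class, each hypothesis weighted by two to the minus its (made-up) description length. Real Solomonoff induction mixes over all computable predictors and is incomputable; every name and number below is an assumption chosen for the example.

```python
# Toy stand-in for Solomonoff induction over a binary "sensory input stream".
# The hypothesis class, description lengths, and predictors are all made up
# for illustration; true Solomonoff induction is incomputable.

# Each hypothesis: (name, description length in bits, P(next bit = 1 | history))
HYPOTHESES = [
    ("all zeros",   2, lambda h: 0.01),
    ("all ones",    2, lambda h: 0.99),
    ("alternating", 3, lambda h: 0.5 if not h else (0.99 if h[-1] == 0 else 0.01)),
    ("fair coin",   1, lambda h: 0.5),
]

def predict_next(history):
    """Mixture probability that the next bit is 1, given the observed bit history."""
    weights = []
    for _, length, prob1 in HYPOTHESES:
        w = 2.0 ** (-length)                 # Occam-style prior: shorter description, larger weight
        for i, bit in enumerate(history):    # multiply in the likelihood of the history so far
            p1 = prob1(history[:i])
            w *= p1 if bit == 1 else (1.0 - p1)
        weights.append(w)
    total = sum(weights)
    return sum(w * prob1(history) for w, (_, _, prob1) in zip(weights, HYPOTHESES)) / total

if __name__ == "__main__":
    stream = [0, 1, 0, 1, 0, 1, 0, 1]        # a toy input stream
    for t in range(len(stream) + 1):
        print(f"after {t} bits, P(next bit = 1) = {predict_next(stream[:t]):.3f}")
```

With the alternating stream above, the mixture’s prediction quickly converges on the alternating hypothesis, since it has both a short description and a high likelihood.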
That seems doubtful. In multiverse models, the visible universe is peanuts. Also, the whole universe might be much larger than the visible universe ever gets to be before the universal heat death.
This is all far-future stuff. Why should we worry about it? Aren’t there more pressing issues?
The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.
Wouldn’t it put an upper bound on the complexity of any given piece, as you can describe it with “the universe, plus the location of what I care about”?
Edited to add: Ah, yes, but “the location of what I care about” potentially has a huge amount of complexity to it.
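Spelling that out (a rough sketch, suppressing the usual additive fudge terms): for any object x you can single out inside the universe,

$$K(x) \;\lesssim\; K(\text{universe}) + K(\text{address of } x \mid \text{universe}),$$

so a simple universe does cap the complexity of its parts, but only up to the cost of the address, and that address term can be large on its own: taking the visible universe to contain something like $10^{185}$ Planck volumes (the usual back-of-envelope figure), merely pointing at one such region already costs on the order of $\log_2 10^{185} \approx 600$ bits, and specifying a detailed configuration costs vastly more.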
Wouldn’t it put an upper bound on the complexity of any given piece, as you can describe it with “the universe, plus the location of what I care about”?
As you say, if the multiverse happens to have a small description, the address of an object in the multiverse can still get quite large...
...but yes, things we see might well have a maximum complexity—associated with the size and complexity of the universe.
When dealing with practical approximations to Solomonoff induction this is “angels and pinheads” material, though. We neither know nor care about such things.
Fair enough.