The Kolmogorov complexity of the universe is a totally unknown quantity—AFAIK. Yudkowsky suggests a figure of 500 bits here—but there’s not much in the way of supporting argument.
Solomonoff induction doesn’t depend on the Kolmogorov complexity of the universe being low. The idea that Solomonoff induction has something to do with the Kolmogorov complexity of the universe seems very strange to me.
Instead, consider that Solomonoff induction is a formalisation of Occam’s razor—which is a well-established empirical principle.
I don’t understand. I thought the point of Solomonoff induction is that it’s within an additive constant of being optimal, where the constant depends on the Kolmogorov complexity of the sequence being predicted.
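For reference, one standard form of the bound being alluded to (as in Hutter’s presentations of Solomonoff’s result; treat the exact constant as approximate here) says that for any computable measure $\mu$ generating the sequence, the Solomonoff predictor $M$ satisfies:

```latex
\sum_{t=1}^{\infty} \mathbb{E}_{\mu} \big( M(1 \mid x_{<t}) - \mu(1 \mid x_{<t}) \big)^{2}
\;\le\; \frac{\ln 2}{2}\, K(\mu)
```

So the total expected prediction error is bounded by a multiple of the complexity of whatever process generates the input stream, which is the sense in which the additive constant "depends on the Kolmogorov complexity of the sequence being predicted."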
Are you thinking of applying Solomonoff induction to the whole universe?!?
If so, that would be a very strange thing to try to do.
Normally you apply Solomonoff induction to some kind of sensory input stream (or a preprocessed abstraction from that stream).
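As a toy sketch of what that means in practice (hypothetical code: a Bayesian mixture over a tiny hand-picked hypothesis class, each weighted by 2^-description-length, standing in for the uncomputable sum over all programs that real Solomonoff induction requires):

```python
# Toy Solomonoff-style predictor over a sensory bit stream.
# A Bayesian mixture over a small, hand-picked hypothesis class,
# each hypothesis weighted by 2^-(description length in bits).
# Real Solomonoff induction mixes over ALL programs and is uncomputable;
# this is only an illustrative sketch with made-up hypotheses.

hypotheses = {
    # name: (description_length_bits, predict_fn(history) -> P(next bit is 1))
    "all_ones":  (2, lambda h: 0.99),
    "all_zeros": (2, lambda h: 0.01),
    "alternate": (3, lambda h: 0.99 if (not h or h[-1] == 0) else 0.01),
    "fair_coin": (1, lambda h: 0.5),
}

def predict(history, posterior):
    """Mixture probability that the next bit is 1."""
    total = sum(posterior.values())
    return sum(w * hypotheses[name][1](history)
               for name, w in posterior.items()) / total

def update(history, bit, posterior):
    """Multiply each hypothesis's weight by its likelihood for the observed bit."""
    for name in posterior:
        p1 = hypotheses[name][1](history)
        posterior[name] *= p1 if bit == 1 else 1.0 - p1

# Prior weight 2^-K for each hypothesis, K = its description length.
posterior = {name: 2.0 ** -k for name, (k, _) in hypotheses.items()}

stream = [0, 1] * 6  # an alternating input stream
history = []
for bit in stream:
    p = predict(history, posterior)
    update(history, bit, posterior)
    history.append(bit)

# After a dozen alternating bits, "alternate" dominates the posterior.
best = max(posterior, key=posterior.get)
print(best)  # alternate
```

The point of the sketch is only that the induction runs over the observed input stream; nothing in it requires knowing, or bounding, the complexity of the universe that produced the stream.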
Sure, but an AGI will presumably eventually observe a large portion of the universe (or at least our light cone), so the K-complexity of its input stream is on par with the K-complexity of the universe, right?
It seems doubtful. In multiverse models, the visible universe is peanuts. Also, the universe may grow to be much larger than the currently visible universe before the universal heat death.
This is all far-future stuff. Why should we worry about it? Aren’t there more pressing issues?
Wouldn’t it put an upper bound on the complexity of any given piece, as you can describe it with “the universe, plus the location of what I care about”?
Edited to add: Ah, yes, but “the location of what I care about” potentially has a huge amount of complexity to it.
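Stated loosely (the symbols here are informal shorthand, not from the thread), the upper bound in question is the standard subadditivity property:

```latex
K(x) \;\le\; K(u) + K(x \mid u) + O(1)
```

where $u$ is a full description of the universe and the conditional term $K(x \mid u)$ is exactly the “address” of $x$ within it, which, as noted, can itself be very large.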
As you say, if the multiverse happens to have a small description, the address of an object in the multiverse can still get quite large...
...but yes, things we see might well have a maximum complexity—associated with the size and complexity of the universe.
When dealing with practical approximations to Solomonoff induction this is “angels and pinheads” material, though. We neither know nor care about such things.
Fair enough.