> the M1-simulator may be long, but its length is completely independent of what we’re predicting—thus, the M2-Kolmogorov-complexity of a string is at most the M1-Kolmogorov-complexity plus a constant (where the constant is the length of the M1-simulator program).
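In symbols (just restating the quoted claim, with $K_M(x)$ denoting the Kolmogorov complexity of $x$ relative to machine $M$):

$$K_{M_2}(x) \le K_{M_1}(x) + c,$$

where $c$ is the length of the M1-simulator program and does not depend on $x$.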
I agree with this, but I don’t think it answers the question. (i.e. it’s not a relevant argument^([1]))
> Given the English sentence, the simulated human should then be able to predict anything a physical human could predict given the same English sentence.
There’s a large edge case where the overhead constant is roughly as large as, or larger than, the program itself. In those cases simplicity doesn’t carry over across layers of abstraction.
That edge case means this doesn’t follow:
> Thus, if something has a short English description, then there exists a short (up to a constant) code description
[1]: Edit: it could be relevant, but not the whole story; in that case it’s still missing a sizable chunk.
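To make the edge case above concrete, here is a toy sketch with made-up numbers (the simulator size is purely illustrative, not a real estimate):

```python
# Toy illustration (made-up numbers): when the simulator constant dominates,
# the bound K_M2(x) <= K_M1(x) + c says nothing useful about x being "simple"
# at the M2 level.

SIMULATOR_CONSTANT_BITS = 10**15     # assumed size of a human-simulator program
english_description_bits = 200       # a short English sentence

upper_bound_bits = english_description_bits + SIMULATOR_CONSTANT_BITS
print(upper_bound_bits)  # ~10**15 bits: "short up to a constant" isn't short here
```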
> The solution to the “large overhead” problem is to amortize the cost of the human simulation over a large number of English sentences and predictions. We only need to specify the simulation once, and then we can use it for any number of prediction problems in conjunction with any number of sentences. A short English sentence then adds only a small amount of marginal complexity to the program—i.e. adding one more sentence (and corresponding predictions) only adds a short string to the program.
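As I read it, the accounting being claimed looks something like this (a sketch, with made-up numbers):

```python
# Sketch of the amortization claim (made-up numbers): the simulator is specified
# once, and each additional sentence only adds its own length to the program.

SIMULATOR_CONSTANT_BITS = 10**15   # one-time cost of the human simulator
sentences_bits = [200, 150, 300]   # lengths of the English sentences added so far

def total_program_bits(sentence_lengths):
    return SIMULATOR_CONSTANT_BITS + sum(sentence_lengths)

# The marginal cost of one more sentence is just that sentence's length:
marginal = total_program_bits(sentences_bits + [250]) - total_program_bits(sentences_bits)
print(marginal)  # 250 bits, regardless of how large the simulator is
```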
> The solution to the “large overhead” problem is to amortize the cost of the human simulation over a large number of English sentences and predictions.
That seems a fair approach in general (i.e. how to use the program efficiently/profitably), but I don’t think it answers the question in the OP. I think it actually implies the opposite effect: as you go through more layers of abstraction you get more and more complexity (i.e. simplicity doesn’t hold across layers of abstraction). That’s why the strategy you mention needs to cover ever larger problem spaces to make sense.
So this would still mean most of our reasoning about Occam’s Razor wouldn’t apply to SI (Solomonoff induction).
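One way to put this in symbols: chaining the quoted bound across $n$ layers of abstraction just stacks the constants,

$$K_{M_n}(x) \le K_{M_1}(x) + c_1 + c_2 + \dots + c_{n-1},$$

where each $c_i$ is the size of the program that simulates layer $i$ on layer $i+1$. The bound only gets looser with every added layer.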
> A short English sentence then adds only a small amount of marginal complexity to the program—i.e. adding one more sentence (and corresponding predictions) only adds a short string to the program.
I’m not sure we (humanity) know enough to claim only a short string needs to be added. I think GPT-3 hints at a counter-example b/c GPT models have been growing geometrically.
Moreover, I don’t think we have any programs or ideas for programs that are anywhere near sophisticated enough to answer meaningful Qs—unless they just regurgitate an answer. So we don’t have a good reason to claim to know what we’ll need to add to extend your solution to handle more and more cases (especially increasingly technical/sophisticated cases).
Intuitively I think there is (physically) a way to do something like what you describe efficiently, because humans are an example of this—we have no known limit on understanding new ideas. However, it’s not okay to use that as the hypothetical SI program, b/c such a program does other stuff we don’t know how to do with SI programs (like taking into account itself, other actors, and the universe broadly).
If the hypothetical program does stuff we don’t understand and we also don’t understand its data encoding methods, then I don’t think we can make claims about how much data we’d need to add.
I think it’s reasonable that there would be no upper limit on the amount of data we’d need to add to such a program as we input increasingly sophisticated questions; intuitively, that holds for both people and the hypothetical programs you mention.