I’m talking about Levin-Kolmogorov complexity. The LK complexity of x is defined to be the minimum over all programs p producing x of log(T(p)) + L(p) where T(p) is the execution time of p and L(p) is the length of p.
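To make the definition concrete, here is a toy numeric illustration with made-up figures (the 200-bit, 2^150-step program is purely hypothetical):

```python
import math

# One hypothetical program p producing x: 200 bits long, running for 2^150 steps.
length_bits = 200        # L(p)
time_steps = 2**150      # T(p)

# This program's contribution to the minimization: L(p) + log2(T(p)) = 200 + 150.
lk_cost = length_bits + math.log2(time_steps)
```

The LK complexity of x is the minimum of this cost over all programs producing x; note that doubling a program's running time adds only a single bit to its cost.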
Sorry for getting that one wrong (I can only say that it’s an unfortunately confusing name).
Your claim is that AGI programs have large min-length-plus-log-of-running-time complexity.
I think you need more justification for this being a useful analogy for how AGI is hard. It would also help to clarify the distinction between problems getting harder as their instances grow (for any program) and a single program taking a lot of space to specify, so the two notions don’t get mixed together.
Unless we’re dealing with things like the Ackermann function or Ramsey numbers, the log-of-running-time component of LK complexity is going to be negligible compared to the space component.
Even in the case of search problems, this holds. Sure, it takes 2^100 years to solve a huge 3-SAT instance, but that contribution of ~160 time-bits pales in comparison to the several kilobytes of space-bits needed to encode the instance into the program.
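The arithmetic behind that figure, assuming (hypothetically) a machine doing about 10^9 steps per second:

```python
import math

# 2^100 years of computation, at an assumed 1e9 steps per second.
seconds = 2**100 * 365.25 * 24 * 3600
steps = seconds * 1e9

# log2 of the total step count: the entire "time" contribution to LK cost.
time_bits = math.log2(steps)   # roughly 155 bits, the same ballpark as ~160

# A few kilobytes of input encoding is already tens of thousands of bits.
space_bits = 4 * 1024 * 8      # e.g. a 4 KB 3-SAT instance = 32768 bits
```

So even an astronomically long search contributes only a couple hundred bits, two orders of magnitude less than the instance encoding.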
Or suppose we’re looking at the complexity of programs that find an AGI program. Presumably high, right? Except that the finder can bypass the time cost by pushing the search into the returned AGI’s bootstrap code. Basically, you replace “run this” with “return this” at the start of the program, and suddenly AGI-finding’s LK complexity is just its K complexity. (see also: the P=NP algorithm that brute force finds programs that work, and so only “works” if P=NP)
I think what I’m getting at is: just use length plus running time, without the free logarithm. That will correctly capture the difficulty of search, instead of making it negligible compared to specifying the input.
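A sketch of how the two cost functions rank the same pair of hypothetical candidates (numbers invented for illustration): under length plus log-time the short brute-force searcher wins, while under length plus time the longer but fast program wins, which is the behavior being argued for here.

```python
import math

# Hypothetical (length_bits, time_steps) pairs for programs with the same output:
direct = (1000, 10)        # long program that just computes the answer
searcher = (200, 2**150)   # short program that brute-force searches

def levin_cost(l, t):
    return l + math.log2(t)   # length + log of running time

def linear_cost(l, t):
    return l + t              # length + running time, no free logarithm

# Levin cost: searcher wins (350 vs ~1003), so search looks cheap.
# Linear cost: direct wins (1010 vs ~2^150), so search is properly charged.
```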
Plus, once you move to time complexity without the logarithm, you can more appropriately appeal to things like the No Free Lunch theorem and NP-completeness as weak justification for expecting AGI to be hard.
The complexity I’m referring to is the complexity of the AGI code, i.e. the length plus log-running-time of a hypothetical program producing the AGI code.
Or suppose we’re looking at the complexity of programs that find an AGI program. Presumably high, right? Except that the finder can bypass the time cost by pushing the search into the returned AGI’s bootstrap code.
This wouldn’t quite work, since the “AGI” would then be penalized for its very long bootstrap. Because the measure of intelligence takes resource constraints into account, such an “AGI” wouldn’t be very intelligent: it wouldn’t count as an actual AGI.
Thx for commenting!