The second claim was actually my main goal with this post. It is a claim I have heard honest arguments against, and even argued against myself, back in the day. A simple but not-particularly-useful version of the argument would be something like “the shortest program which describes biological behavior may be very long”, i.e. high Kolmogorov complexity. If that program were too long to fit in a human brain, then it would be impossible for humans to “understand” the system, in some sense. We could fit the program in a long book, maybe, but since the program itself would be incompressible it would just look like thousands of pages of random noise—indeed, it would be random noise, in a very rigorous sense.
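To spell out that "rigorous sense" (a standard algorithmic-information sketch, not anything specific to the original argument): fixing a universal Turing machine $U$, the Kolmogorov complexity of a string $x$ is

$$K(x) = \min\{\, |p| : U(p) = x \,\},$$

and $x$ is called incompressible, i.e. algorithmically random, when $K(x) \ge |x| - c$ for some small constant $c$ depending only on $U$. If the shortest program describing the biological system were incompressible in this sense, then no substantially shorter summary of those thousands of pages would exist even in principle; the text of the book would pass every effective test for randomness.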
That said, while I don’t think either Alon or I were making claims about what particular structure we’re likely to find here, I do think there is a particular kind of structure here. I do not currently know what that structure is, but I think answering that question (or any of several equivalent questions, e.g. formalizing abstraction) is the main problem required to solve embedded agency and AGI in general.
Also see my response to Ofer, which discusses the same issues from a different starting point.