It seems like there are two claims here:

1. Biological systems are not random, in the sense that they have purpose
2. Biological systems are human-understandable with enough effort
The first one seems to be expected even under the “everything is a mess” model—even though evolution is just randomly trying things, the only things that stick around are the ones that are useful, so you’d expect that most things that appear at first glance to be useless actually do have some purpose.
The second one is the claim I’m most interested in.
Some of your summaries seem to be more about the first claim. For Chapters 7-8:
> For our purposes, the main takeaway from these two chapters is that, just because the system looks wasteful/arbitrary, does not mean it is. Once we know what to look for, it becomes clear that the structure of biological systems is not nearly so arbitrary as it looks.
This seems to be entirely the first claim.
The other chapters seem like they do mostly address the second claim, but it’s a bit hard to tell. I’m curious if, now knowing about these two distinct claims, you still think the book is strong evidence for the second claim? What about chapters 7-8 in particular?
I intended the claim to be entirely the second. The first is relevant mainly as a precondition for the second, and as a possible path to human understanding.
Chapters 7-8 are very much aimed at human understanding of the systems used for robust recognition and signal-passing, and I consider both those chapters and the book to be strong evidence that human understanding is tractable.
Regarding the “not random” claim, I’m guessing you’re looking at many of the statistical claims? E.g. things like “only a handful of specific designs work”. Those are obviously evidence of biological systems not being random, but more importantly, they’re evidence that humans can see and understand ways in which biological systems are not random. Not only is there structure, that structure is human-understandable—i.e. repeated biochemical circuit designs.
This second claim sounds a bit trivial to me. Perhaps it is my reverse-engineering background, but I have always taken it for granted that approximately any mechanism is understandable by a clever human given enough effort.
This book [and your review] explains a number of particular pieces of understanding of biological systems in detail, which is super interesting; but the mere point that these things can be understood with sufficient study almost feels axiomatic. Ignorance is in the map, not the territory; there are no confusing phenomena, only minds confused by phenomena; etc. Even when I knew nothing about this biological machinery, I never imagined for a second that no understanding was attainable in principle. I only saw *systems that are not optimized for ease of understanding*, and therefore presumably more challenging to understand than systems designed by human engineers which *are* optimized for ease of understanding.
But I get the impression that the real point you are shooting for (and possibly, the point the book is shooting for) is a stronger point than this. Not so much “there is understanding to be had here, if you look deeply enough”, but rather a claim about what *particular type of structure* we are likely to find, and how this may or may not conform to the type of structure that humans are trained to look for.
Is this true? If it is, could you expand on this distinction?
The second claim was actually my main goal with this post. It is a claim I have heard honest arguments against, and even argued against myself, back in the day. A simple but not-particularly-useful version of the argument would be something like “the shortest program which describes biological behavior may be very long”, i.e. high Kolmogorov complexity. If that program were too long to fit in a human brain, then it would be impossible for humans to “understand” the system, in some sense. We could fit the program in a long book, maybe, but since the program itself would be incompressible it would just look like thousands of pages of random noise—indeed, it would be random noise, in a very rigorous sense.
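A minimal way to make that worry precise, using the standard definition of Kolmogorov complexity (this formalization is mine, not a quote from the book or the exchange):

$$K(x) \;=\; \min\{\,|p| \;:\; U(p) = x\,\}$$

where $U$ is a fixed universal machine and $x$ is a full description of the system’s behavior. If $K(x) \approx |x|$, then $x$ is incompressible: there is no summary shorter than the program itself, and an incompressible string is exactly what algorithmic information theory counts as random. That is the rigorous sense in which the thousands of pages would be noise.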
That said, while I don’t think either Alon or I were making claims about what particular structure we’re likely to find here, I do think there is a particular kind of structure here. I do not currently know what that structure is, but I think answering that question (or any of several equivalent questions, e.g. formalizing abstraction) is the main problem required to solve embedded agency and AGI in general.
Also see my response to Ofer, which discusses the same issues from a different starting point.
Thanks!