From looking at Conway’s Game of Life, my intuition is that if a universe can support non-ontologically-fundamental Turing machines (I’m invoking anthropic reasoning), then it’s likely to have phenomena analyzable at multiple hierarchical levels (beyond the looser requirement of being simple/compressible).
Basically, if a universe allows any reductionistic understanding at all (that’s what I mean by calling the Turing machine “non-ontologically-fundamental”), then the reductionist structure is probably a multi-layered one. Either zero reduction layers or lots, but not exactly one layer.
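To make the "multiple hierarchical levels" point concrete, here is a minimal sketch in Python (my own illustration, not from the original post). The Game of Life is fully specified by a local cell-update rule, yet a glider behaves as a persistent higher-level object with its own simple law of motion: translate one cell diagonally every four steps.

```python
# Minimal Game of Life on a toroidal grid, plus a check that the higher-level
# "glider" abstraction holds even though the low-level rule never mentions it.
import numpy as np

def step(grid: np.ndarray) -> np.ndarray:
    """One update: a dead cell with 3 neighbors is born; a live cell with 2 or 3 survives."""
    neighbors = sum(
        np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
        if (dy, dx) != (0, 0)
    )
    return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

glider = np.array([[0, 1, 0],
                   [0, 0, 1],
                   [1, 1, 1]])
grid = np.zeros((8, 8), dtype=int)
grid[0:3, 0:3] = glider

for _ in range(4):
    grid = step(grid)

# Higher-level law: after four low-level steps, the glider has moved one cell
# down and one cell right, otherwise unchanged.
assert np.array_equal(grid[1:4, 1:4], glider)
```

The low-level description makes no reference to gliders; "glider" is a higher reduction layer that happens to be supported by the dynamics, which is exactly the multi-layered structure the quote is pointing at.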
Our universe has a simplifying structure: it abstracts well, implying a particular kind of modularity.
Goal-oriented systems in our universe tend to evolve a modular structure which reflects the structure of the universe.
One major corollary of these two ideas is that goal-oriented systems will tend to evolve similar modular structures, reflecting the relevant parts of their environment. Systems to which this applies include organisms, machine learning algorithms, and the learning performed by the human brain. In particular, this suggests that biological systems and trained deep learning systems are likely to have modular, human-interpretable internal structure. (At least, interpretable by humans familiar with the environment in which the organism/ML system evolved.)
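To make the earlier "abstracts well" idea concrete, here is a toy illustration (my own construction, with invented variables, in the spirit of the post's framing): many low-level variables influence a "far away" variable only through a low-dimensional summary, so conditioning on the summary screens off all the low-level detail.

```python
# Toy abstraction: Y sees 50 low-level variables only through their mean.
import numpy as np

rng = np.random.default_rng(0)
n = 20_000
X = rng.normal(size=(n, 50))             # low-level variables
summary = X.mean(axis=1)                 # the abstraction: one summary statistic
Y = summary + 0.1 * rng.normal(size=n)   # "far away" variable sees only the summary

def residualize(a, b):
    """Residual of a after linearly regressing out b."""
    design = np.column_stack([np.ones_like(b), b])
    coef, *_ = np.linalg.lstsq(design, a, rcond=None)
    return a - design @ coef

# Each low-level variable correlates with Y on its own...
print(np.corrcoef(X[:, 0], Y)[0, 1])     # small but clearly nonzero (~0.1)
# ...but given the summary, it carries essentially no further information.
print(np.corrcoef(residualize(X[:, 0], summary), residualize(Y, summary))[0, 1])  # ~0
```

That screening-off property is the "particular kind of modularity": to predict far-away behavior, a system only needs to track the summary, not the details behind it.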
This post discusses some of the evidence behind this model: biological systems are indeed quite modular, and simulated evolution experiments find that circuits evolve modular structure reflecting the modular structure of environmental variations. The companion post reviews the rest of the book, which makes the case that the internals of biological systems are quite interpretable.
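For a sense of what those simulated evolution experiments look like, here is a heavily simplified sketch of the "modularly varying goals" setup (in the spirit of Kashtan & Alon's 2005 experiments, but my own toy construction, not the published one): a genetic algorithm evolves small NAND circuits while the fitness function alternates between two goals that share the same subproblems. In the published experiments, this kind of switching is what drives circuits toward modular structure; the sketch reproduces only the setup, not that analysis.

```python
# Toy "modularly varying goals" run: fitness alternates between
# (x XOR y) AND (z XOR w) and (x XOR y) OR (z XOR w).
import itertools
import random

INPUTS = list(itertools.product((0, 1), repeat=4))

def goal(x, y, z, w, use_and):
    left, right = x ^ y, z ^ w               # shared subproblems across both goals
    return (left and right) if use_and else (left or right)

def circuit_output(genome, inputs):
    """Evaluate a genome as a feedforward NAND circuit.
    genome: list of (i, j) pairs; gate k NANDs wires i and j, where wires
    0-3 are the inputs, wire 4+k is gate k's output, and the last gate is the circuit output."""
    wires = list(inputs)
    for i, j in genome:
        wires.append(1 - (wires[i] & wires[j]))
    return wires[-1]

def fitness(genome, use_and):
    return sum(circuit_output(genome, inp) == goal(*inp, use_and) for inp in INPUTS) / len(INPUTS)

def mutate(genome):
    g = [list(gate) for gate in genome]
    k = random.randrange(len(g))
    g[k][random.randrange(2)] = random.randrange(4 + k)   # rewire one gate input
    return [tuple(gate) for gate in g]

random.seed(0)
N_GATES, POP, GENS, SWITCH_EVERY = 10, 100, 400, 20
population = [[tuple(random.randrange(4 + k) for _ in range(2)) for k in range(N_GATES)]
              for _ in range(POP)]
for gen in range(GENS):
    use_and = (gen // SWITCH_EVERY) % 2 == 0   # the goal itself keeps changing
    scored = sorted(population, key=lambda g: fitness(g, use_and), reverse=True)
    parents = scored[: POP // 5]               # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(POP - len(parents))]
    if gen % SWITCH_EVERY == 0:
        print(gen, "AND" if use_and else "OR", fitness(scored[0], use_and))
```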
Going forward, this view needs a more formal and general model, ideally one that would both let us empirically test key predictions (e.g., checking the extent to which different systems learn similar features, or whether learned features in neural nets satisfy the expected abstraction conditions) and tell us how to look for environment-reflecting structures in evolved/trained systems.
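As one hypothetical way to cash out the "similar features" test: linear centered kernel alignment (CKA; Kornblith et al. 2019) scores how similar two systems' representations of the same inputs are, invariant to rotation and isotropic scaling of the feature space. The sketch below is my own minimal implementation on synthetic data, not a concrete proposal from the post.

```python
# Linear CKA between two representations of the same n examples.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """X: (n, p1) activations from system A; Y: (n, p2) from system B.
    Returns a similarity in [0, 1]."""
    X = X - X.mean(axis=0)   # center each feature
    Y = Y - Y.mean(axis=0)
    cross = np.linalg.norm(Y.T @ X, "fro") ** 2
    return cross / (np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro"))

# Sanity check: a rotated, rescaled copy of a representation scores ~1,
# while an unrelated random representation scores near 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 64))
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))   # random orthogonal transform
print(linear_cka(X, 3.0 * X @ Q))                # ~1.0
print(linear_cka(X, rng.normal(size=(500, 64)))) # near 0
```

The model above would then predict that independently trained systems facing the same environment should score high on measures like this for corresponding features.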
While re-reading things for the 2019 Review, I noticed that this is followed up in johnswentworth’s more recent self-review of Evolution of Modularity: