Evolution of Modularity
This post is based on chapter 15 of Uri Alon’s book An Introduction to Systems Biology: Design Principles of Biological Circuits. See the book for more details and citations; see here for a review of most of the rest of the book.
Fun fact: biological systems are highly modular, at multiple different scales. This can be quantified and verified statistically, e.g. by mapping out protein networks and algorithmically partitioning them into parts, then comparing the connectivity of the parts. It can also be seen more qualitatively in everyday biological work: proteins have subunits which retain their function when fused to other proteins, receptor circuits can be swapped out to make bacteria follow different chemical gradients, manipulating specific genes can turn a fly’s antennae into legs, organs perform specific functions, etc, etc.
On the other hand, systems designed by genetic algorithms (aka simulated evolution) are decidedly not modular. This can also be quantified and verified statistically. Qualitatively, examining the outputs of genetic algorithms confirms the statistics: they’re a mess.
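For a concrete sense of how this kind of quantification works, here is a minimal sketch in Python using the networkx library. The built-in karate-club graph is a toy stand-in for a real mapped-out protein network; the partitioning algorithm and threshold-free modularity score are just one common choice, not necessarily the one used in the studies above:

```python
# Sketch: partition a network into parts, then score how much denser
# connections are within parts than between them.
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities, modularity

# Toy stand-in for a mapped-out interaction network.
G = nx.karate_club_graph()

# Algorithmically partition the network into parts...
parts = greedy_modularity_communities(G)

# ...then compute Newman's modularity Q for that partition.
# Q near 0 means no modular structure; higher Q means more modular.
Q = modularity(G, parts)
print(f"{len(parts)} modules, modularity Q = {Q:.2f}")
```

Running the same measurement on a network evolved by a genetic algorithm versus a biological network is the sort of comparison behind the statistical claims above.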
So: what is the difference between real-world biological evolution and typical genetic algorithms that leads one to produce modular designs and the other to produce non-modular designs?
Kashtan & Alon tackle the problem by evolving logic circuits under various conditions. They confirm that simply optimizing the circuit to compute a particular function, with random inputs used for selection, results in highly non-modular circuits. However, they are able to obtain modular circuits using “modularly varying goals” (MVG).
The idea is to change the reward function every so often (the authors switch it out every 20 generations). Of course, if we just use completely random reward functions, then evolution doesn’t learn anything. Instead, we use “modularly varying” goal functions: we only swap one or two little pieces in the (modular) objective function. In the example from the book, the two goals are G1 = (X XOR Y) AND (W XOR Z) and G2 = (X XOR Y) OR (W XOR Z): only the final gate differs, while the two XOR subproblems are shared.
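To make the setup concrete, here is a minimal sketch of a genetic algorithm with modularly varying goals, using the two goal functions above. The circuit encoding (a fixed chain of NAND gates), population size, selection scheme, and mutation rate are simplified stand-ins chosen for brevity, not the paper’s actual parameters:

```python
# Sketch of MVG: evolve NAND-gate circuits, swapping the goal every
# EPOCH generations between G1 and G2 (which share XOR subproblems).
import random
from itertools import product

N_INPUTS, N_GATES = 4, 12   # genome: N_GATES feed-forward NAND gates
EPOCH = 20                  # generations between goal swaps

G1 = lambda x, y, w, z: (x ^ y) & (w ^ z)
G2 = lambda x, y, w, z: (x ^ y) | (w ^ z)

def evaluate(genome, inputs):
    """Feed-forward NAND circuit: gate i reads any earlier wire."""
    wires = list(inputs)
    for a, b in genome:
        wires.append(1 - (wires[a] & wires[b]))  # NAND
    return wires[-1]

def fitness(genome, goal):
    """Fraction of the 16 input combinations computed correctly."""
    cases = list(product([0, 1], repeat=N_INPUTS))
    return sum(evaluate(genome, c) == goal(*c) for c in cases) / len(cases)

def random_genome():
    # Gate i may read primary inputs or outputs of gates 0..i-1.
    return [(random.randrange(N_INPUTS + i), random.randrange(N_INPUTS + i))
            for i in range(N_GATES)]

def mutate(genome):
    # Rewire one randomly chosen gate.
    g = list(genome)
    i = random.randrange(N_GATES)
    g[i] = (random.randrange(N_INPUTS + i), random.randrange(N_INPUTS + i))
    return g

pop = [random_genome() for _ in range(100)]
for gen in range(400):
    goal = G1 if (gen // EPOCH) % 2 == 0 else G2   # modularly varying goal
    scored = sorted(pop, key=lambda g: fitness(g, goal), reverse=True)
    elite = scored[:20]                            # truncation selection
    pop = elite + [mutate(random.choice(elite)) for _ in range(80)]
    if gen % EPOCH == 0:
        print(f"gen {gen:3d}  goal {'G1' if goal is G1 else 'G2'}  "
              f"best fitness {fitness(scored[0], goal):.2f}")
```

The only MVG-specific ingredient is the goal swap every EPOCH generations; everything else is a vanilla genetic algorithm. Under a fixed goal, the same loop would simply keep `goal = G1` throughout.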
The upshot is that our different goal functions generally use similar sub-functions—suggesting that they share sub-goals for evolution to learn. Sure enough, circuits evolved using MVG have modular structure, reflecting the modular structure of the goals.
(Interestingly, MVG also dramatically accelerates evolution—circuits reach a given performance level much faster under MVG than under a fixed goal, despite needing to change behavior every 20 generations. See either the book or the paper for more on that.)
How realistic is MVG as a model for biological evolution? I haven’t seen quantitative evidence, but qualitative evidence is easy to spot. MVG as a theory of biological modularity predicts that highly variable subgoals will result in modular structure, whereas static subgoals will result in a non-modular mess. Alon’s book gives several examples:
- Chemotaxis: different bacteria need to pursue/avoid different chemicals, with different computational needs and different speed/energy trade-offs, in various combinations. The result is modularity: separate components for sensing, processing, and motion.
- Animals need to breathe, eat, move, and reproduce. A new environment might have different food or require different motions, independent of respiration or reproduction, or vice versa. Since these requirements vary more-or-less independently in the environment, animals evolve modular systems to deal with them: digestive tract, lungs, etc.
- Ribosomes, as an anti-example: the functional requirements of ribosomes hardly vary at all, so they end up non-modular. They have pieces, but most pieces do not have an obvious distinct function.
To sum it up: modularity in the system evolves to match modularity in the environment.