Alright, first I’ll summarize what I think are Dennett’s core points, then give some reactions. Dennett’s core points:
There’s a general idea that intelligent systems can be built out of non-intelligent parts, by non-intelligent processes. “Competence without comprehension”—evolution is a non-intelligent process which built intelligent systems, Turing machines are intelligent systems built from non-intelligent parts.
Among these reductively-intelligent systems, there’s a divide between things-like-evolution and things-like-Turing-machines. One operates by heterogeneous, bottom-up competitive pressure; the other operates by top-down direction and standardized parts.
People who instinctively react against reductive ideas about human minds (the first bullet) are often really pointing to the distinction in the second bullet—i.e. human minds are more like evolution than like Turing machines.
In particular, the things-which-evolve in the human mind are memes.
The first point is basically pointing at embedded agency. The last point I’ll ignore; there are ways that human minds could be more like evolution than like Turing machines even without dragging memes into the picture, and I find the more general idea more interesting.
So, summarizing Dennett very roughly… we have two models of reductive intelligence: things-like-evolution and things-like-Turing-machines. Human minds are more like evolution.
My reaction:
Evolution and Turing machines are both low-level primitive computational models which tend to instrumentally converge on much more similar high-level models.
In particular, I’ll talk about modularity, although I don’t think that’s the only dimension along which we see convergence—just a particularly intuitive and dramatic dimension.
Biological evolution (and many things-like-evolution) has a lot more structure to it than it first seems—it’s surprisingly modular. On the other side of the equation, Turing machines are a particularly terrible way to represent computation—there’s no modularity unless the programmer adds it. That’s why we usually work with more modular, higher-level abstractions—and those high-level abstractions much more closely resemble the modularity of evolution. In practice, both evolution and Turing machines end up leveraging modularity a lot, and modular-systems-built-on-evolution actually look pretty similar to modular-systems-built-on-Turing-machines.
In more detail...
Here’s how I think most people picture evolution: small random changes occur, and a change is kept if-and-only-if it helps, so over time things tend to drift up their incentive gradient. It’s a biased random walk. I’ve heard this model credited to Fisher (the now-infamous frequentist statistician), and it was how most biologists pictured evolution for a very long time.
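To make the picture concrete, here’s a minimal sketch of that biased-random-walk model. The fitness function, step size, and seed are all my own toy choices for illustration, not anything from the post:

```python
import random

random.seed(0)  # fixed seed so the toy run is reproducible

def fitness(x):
    # Toy fitness landscape: a single smooth peak at x = 3.
    return -(x - 3.0) ** 2

def biased_random_walk(x, steps=1000, step_size=0.05):
    """The 'Fisherian' picture: small random perturbations,
    kept if-and-only-if they improve fitness."""
    for _ in range(steps):
        candidate = x + random.uniform(-step_size, step_size)
        if fitness(candidate) > fitness(x):  # keep iff it helps
            x = candidate
    return x

x_final = biased_random_walk(x=0.0)
```

Note that every individual step is tiny; the walk can only drift up the local gradient, which is exactly the intuition the next paragraph pushes back on.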
But with the benefit of modern sequencing and genetic engineering, that picture looks a lot less accurate. A particularly dramatic example: a single mutation can turn a fly’s antennae into legs. Nothing about that suggests small changes or drift along a gradient. Rather, it suggests a very modular architecture, where entire chunks can be copy-pasted into different configurations. If you take a class on evolutionary developmental biology (evo-devo), you’ll see all sorts of stuff like this. (Also, see evolution of modularity for more on why modularity evolves in the first place.)
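Here’s a contrasting sketch of the modular picture: the genome as a table of references to whole developmental modules, so a single mutation re-points one segment at a different module. All the names here (segments, module names) are hypothetical stand-ins, loosely inspired by the antennae-to-legs example:

```python
# Each module stands in for an entire developmental program,
# not a pile of independently-tuned parameters.
MODULES = {"leg": "grow_leg_program", "antenna": "grow_antenna_program"}

# Genome as a mapping from body segment to the module it expresses.
genome = {"head_appendage": "antenna", "thorax_appendage": "leg"}

def homeotic_mutation(genome, segment, new_module):
    """A single mutation re-points one segment at a different module,
    effectively copy-pasting a whole chunk of developmental machinery."""
    mutated = dict(genome)
    mutated[segment] = new_module
    return mutated

# One change flips an entire structure: antennae become legs.
mutant = homeotic_mutation(genome, "head_appendage", "leg")
```

The point of the contrast: in this representation, one mutation moves a whole module into a new configuration, rather than nudging the organism a small step along a gradient.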
This also sounds a lot more like how humans “intelligently” design things. We build abstract, modular subsystems which can be composed in a variety of configurations. Large chunks can be moved around or reused without worrying about their internal structure. That’s arguably the single most universal idea in software engineering: modularity is good. That’s why we build more modular, higher-level abstractions on top of our low-level random-access Turing machines.
So even though evolution and Turing machines are very different low-level computational models, we end up with surprisingly similar systems built on top of them.
+1. If someone gives me a gadget I’ve never seen before, and I try to figure out what it is and how to use it, that process feels like I’m building a generative model mainly by gluing together a bunch of pre-existing pieces, not like I’m doing gradient descent. And yeah, it’s occurred to me that this (the human brain approach) is a little bit like evolution. :)
Johnswentworth, I’m interested in your reaction / thoughts on the above. Feels related to a lot of things you’ve been talking about.