Hebbian Learning Is More Common Than You Think
Epistemic status: locating the hypothesis. I have my private confidence, but you shouldn’t take my word for it.
I originally got the idea from this video interview with Professor Richard A. Watson, in which he explains how learning networks could arise naturally and be an important factor in evolution. The meat starts around 15 minutes in.
First, an intuition pump: if you had a suspended network of non-ideal springs, loading the network would slightly change the resting lengths of the springs, carrying information into the next iteration. In effect, even a spring network has (admittedly limited) learning potential.
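To make the intuition concrete, here is a minimal sketch (my own toy model, not from the interview) of a single non-ideal spring whose rest length creeps toward whatever length it is held at. The `creep` parameter is a hypothetical plasticity rate; repeated loading leaves a lasting trace, which is the "memory" in the spring network.

```python
# Toy model of a plastically deforming spring: when loaded, its rest
# length drifts a little toward the stretched length, so the spring
# retains a trace of past loads.
def relax(rest_len, loaded_len, creep=0.05):
    """Return the new rest length after one loading cycle."""
    return rest_len + creep * (loaded_len - rest_len)

rest = 1.0
for _ in range(50):          # repeatedly stretch to length 2.0
    rest = relax(rest, 2.0)
# After many cycles, rest has moved most of the way toward 2.0.
```

A network of such springs would store a distributed record of its loading history in the pattern of shifted rest lengths, which is the limited learning potential the paragraph above gestures at.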
Dr. Watson focuses on evolution and makes his strongest case in that domain. In short, ecological stressors impose evolutionary pressures on individuals that cause the ecological relationships between species to change over evolutionary time in a manner consistent with Hebb’s rule. In other words, individual selection powers Hebbian learning on the ecosystem level.
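For readers who haven’t seen it written out, Hebb’s rule in its textbook form strengthens the connection between units that are active together. The mapping below, where "units" are species and "weights" are interaction strengths, is my reading of the claim, not code from Watson’s work:

```python
# Minimal sketch of Hebb's rule: delta_w[i][j] = eta * x[i] * x[j].
# Units that co-vary become more strongly coupled; units that
# anti-vary become negatively coupled.
def hebbian_step(W, x, eta=0.1):
    n = len(x)
    return [[W[i][j] + eta * x[i] * x[j] for j in range(n)]
            for i in range(n)]

x = [1.0, 1.0, -1.0]                 # co-activity pattern of three "units"
W = [[0.0] * 3 for _ in range(3)]    # coupling strengths, initially zero
for _ in range(10):
    W = hebbian_step(W, x)
# Units 0 and 1 co-vary, so W[0][1] grows positive; unit 2 anti-varies
# with them, so W[0][2] grows negative.
```

The claim in the paragraph above is that selection acting on individuals adjusts between-species relationships in this same correlational direction over evolutionary time.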
In the interview, he mentions an associate setting up an ecological simulation isomorphic to the rules of Sudoku and getting it to perform at what would be considered a very high skill level for a human. I believe this to be the relevant paper. I haven’t read more than the abstract; here’s the most relevant quote:
We demonstrate the capabilities of this process in the ecological model by showing how the action of individual natural selection can enable communities to i) form a distributed ecological memory of multiple past states; ii) improve their ability to resolve conflicting constraints among species leading to higher community biomass; and iii) learn to solve complex resource-allocation problems equivalent to difficult computational puzzles like Sudoku.
Based on the above, I think learning networks arise frequently and spontaneously in contexts involving biological life, including multicellular organisms, ecosystems, and human networks such as economies, societies, and civilizations. As of now, I don’t have an easy way to evaluate the power of these networks, but the potential implications seem large.
Clearly the connectivity of neurons is important in brains, and it’s quite likely that changes in connectivity between nearby nodes affect the operation of those nodes, and therefore the whole. I haven’t watched the video. Is anyone claiming that the network results are different from the sum of behaviors of the components (including the context and relationships that they have with each other)?
The network results are no different from the sum of behaviors of the components (in the same sense that this holds for the brain). I was surprised to realize just how simple and general the principle was.
ETA: On closer reading, I may have answered somewhat past your question. Yes, changes in connectivity between nearby nodes affect the operation of those nodes, and therefore the whole. This is equally true in both cases, since the abstract network dynamic is the same.