I forgive the ambiguity in definitions because:
1. they’re dealing with frontier scientific problems and are thus still trying to home in on what the right questions and methods even are for studying a set of intuitively similar phenomena
2. it’s more productive to focus on how much optimization is going into advancing the field (money, minds, time, etc.) and where the field as a whole intends to go: understanding systems at least as difficult to model as minds, in a way that’s general enough to apply to cities, the immune system, etc.
I’d be surprised if they didn’t run into some of the same theoretical problems involved in solving alignment. (I wouldn’t be very surprised if complexity scientists make more progress on alignment than existing alignment researchers. It’s hard to bet against institutions of interdisciplinary scientists in close communication with one another, already doing empirical work and picking up where the information theorists and physicists of the previous century left off in studying life and minds.)
That being said, John Holland (one of the pioneers of complexity science and the inventor of genetic algorithms) wrote several books on the subject and attempted to lay some groundwork for how the field should be studied.
I think he’d probably say a complex adaptive system is an updating interaction network of ‘agents’ with world models trying to lower some kind of cost or fitness function (a huge oversimplification, of course). He’d probably also emphasize the combinatorial nature of adaptation: structures (schemata ~ abstractions ~ innovations) can be found via mutation-like processes, then assembled, tiled, and disassembled.
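Holland’s picture can be made concrete with a toy genetic algorithm (my own minimal sketch, not Holland’s formalism; the one-max fitness target is made up for illustration): schemata-like building blocks are discovered by mutation, recombined by crossover, and filtered by selection pressure:

```python
import random

random.seed(0)

TARGET = [1] * 20  # hypothetical fitness landscape: all-ones is optimal

def fitness(genome):
    # count of bits matching the target (higher is better)
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # flip each bit with small probability: the 'discovery' operator
    return [1 - g if random.random() < rate else g for g in genome]

def crossover(a, b):
    # one-point crossover: recombine building blocks (schemata)
    point = random.randrange(1, len(a))
    return a[:point] + b[point:]

def evolve(pop_size=50, generations=100):
    pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]  # truncation selection keeps the top half
        children = [
            mutate(crossover(random.choice(parents), random.choice(parents)))
            for _ in range(pop_size - len(parents))
        ]
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(fitness(best))
```

Selection keeps the top half unmutated, so good schemata, once found, are never lost; on this toy landscape the population converges to the target quickly.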
So we’ve got the textbook you mentioned which talks about co-evolving multilayer networks, Holland’s adapting network of agents, and Krakauer’s ‘teleonomic matter’ description. People might toss in other properties like diversity of entities, flows and cycles of some kind of resource (money, energy, etc), ‘emergence’, self-organization, etc.
I think those seem fine. I’d probably say that complex systems science is something like the academic child of frontier information theory and physics, one which focuses on the counterfactual evolutions of non-stationary information flows (‘infodynamics’) when the environment contains sources and sinks of information, and when it contains memory/compression systems. Once you introduce memory and compression into the universe, information about the past and the future is allowed to interact, and so are counterfactuals. Downstream of that, I suspect, are theory of mind, acausal decision theories, embedded agency, etc. Memory systems are also ‘lags’ in the flow of information, which changes what the geodesic looks like for bits.
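One existing handle on “information flowing between processes with memory” is transfer entropy; here’s a minimal plug-in estimator for symbol streams with history length 1 (my own illustration of the flavor of tool, not a claim about what infodynamics would look like):

```python
import random
from collections import Counter
from math import log2

def transfer_entropy(source, target):
    """T(source -> target), history length 1: how much knowing the
    source's past reduces uncertainty about the target's next symbol,
    beyond what the target's own past already gives."""
    # (y_next, y_past, x_past) triples from the two aligned streams
    triples = list(zip(target[1:], target[:-1], source[:-1]))
    n = len(triples)
    p_joint = Counter(triples)
    p_ypast_xpast = Counter((y0, x0) for _, y0, x0 in triples)
    p_ynext_ypast = Counter((y1, y0) for y1, y0, _ in triples)
    p_ypast = Counter(y0 for _, y0, _ in triples)
    te = 0.0
    for (y1, y0, x0), c in p_joint.items():
        cond_full = c / p_ypast_xpast[(y0, x0)]            # p(y1 | y0, x0)
        cond_self = p_ynext_ypast[(y1, y0)] / p_ypast[y0]  # p(y1 | y0)
        te += (c / n) * log2(cond_full / cond_self)
    return te

random.seed(1)
x = [random.randint(0, 1) for _ in range(5000)]
y = [0] + x[:-1]  # y is a one-step-lagged copy of x, so x 'drives' y
print(transfer_entropy(x, y))  # close to 1 bit
print(transfer_entropy(y, x))  # close to 0 bits
```

The asymmetry is the point: the measure sees the directed, lagged flow from x into y but finds nothing in the reverse direction.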
I suspect a science of complexity will involve a lot of concepts from physics (work, energy, entropy, etc.), but in more generalized, parameterized forms which need to be fit to empirical data from the particular complex system you want to study.
I think alignment might get a lot easier once we understand “infodynamics” far better, and the above drawing is of what I see as my current three-step, high-level plan:
(1) find ways to track/measure the heat signatures (really ‘information’ signatures) of optimization processes/intelligences
(2) develop (open, driven-dissipative) environments that allow intelligences of different scales to move information around and interact with one another to get empirical data on how the ‘information-spacetime’ changes
(3) extract distilled models of the phenomena/laws going on so that rapid modeling can occur to explore the space of minds with these abstract models
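As a toy version of what an “information signature” in step (1) could mean, one could track rolling Shannon entropy over a process’s output stream (everything here, including the noisy-then-converging scenario, is a hypothetical illustration):

```python
import random
from collections import Counter, deque
from math import log2

def rolling_entropy(stream, window=200):
    """Crude 'information signature': Shannon entropy of symbol
    frequencies in a sliding window over a process's output stream."""
    buf = deque(maxlen=window)
    signature = []
    for symbol in stream:
        buf.append(symbol)
        n = len(buf)
        # plug-in entropy estimate over the current window
        signature.append(
            -sum(c / n * log2(c / n) for c in Counter(buf).values())
        )
    return signature

random.seed(0)
# a 'searching' process: output is noisy early on, then concentrates on a
# few symbols as the (hypothetical) optimizer converges
noisy = [random.randrange(16) for _ in range(1000)]
converged = [random.choice([3, 7]) for _ in range(1000)]
sig = rolling_entropy(noisy + converged)
print(sig[900], sig[-1])  # the signature drops as the process converges
```

The entropy trace falls from about four bits (sixteen roughly uniform symbols) toward one bit (two symbols) as the process settles, which is the kind of macroscopic trace of optimization step (1) is after.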
Here are some resources:
1. The journal Entropy (this specifically links to a paper co-authored by D. Wolpert, the guy who helped come up with the No Free Lunch theorems)
2. John Holland’s books or papers (though probably outdated and he’s just one of the first people looking into complexity as a science—you can always start at the origin and let your tastes guide you from there)
3. Introduction to the Theory of Complex Systems and Applying the Free-Energy Principle to Complex Adaptive Systems (one of the sections talks about something an awful lot like embedded agency in a lot more detail)
4. The Energetics of Computing in Life and Machines
And I’m guessing non-stationary information theory, statistical field theory, active inference/free energy principle, constructor theory (or something like it), random matrix theory, information geometry, tropical geometry, and optimal transport are all also good to look into, as well as adjacent fields based on your instinct. That’s not intended to be covering the space elegantly, just a battery of things in the associative web near what might be good to look into. Combinatorics, topology, fractals and fields are where it’s at.
I have more resources/thoughts on this but I’ll leave it at that for now unless someone’s interested. The best resource is the will to understand and the audacity to think you can, of course
You seem to be knowledgeable in this area, what would you recommend someone read to get a good picture of things you find interesting in complex systems theory?
I didn’t personally go about it in the most principled way, but:
1. locate the smartest minds in the field or tangential to it (surely you know of Friston and Levin, and you mentioned Krakauer—there’s a handful more. I just had a sticky note of people I collected)
2. locate a few of the seminal papers in the field, and the journals (e.g. Entropy)
3. based on your tastes, skim podcasts like Santa Fe’s or Sean Carroll’s
4. textbooks (e.g. that theory of CAS book you mentioned (chapter 6, on info theory for CAS, seemed like the most important if I had to pick one), multilayer network theory, statistical field theory (for neural networks, etc.)) - I personally also save books which seem a bit distant from alignment (e.g. theoretical ecology/metabolic theory of ecology) just out of curiosity, to see how they think and what questions they wind up asking
Of these, I think getting a feel for the recurring unsolved problems/vocabulary/concepts that show up in journals is important, and pretty much anything related to “what does it take to unify information and physics, extend information theory to open systems, and get there asap” seems good because of how foundational all of that is.
How do you intend to do those 3 things? In particular, 1 seems pretty cool if you can pull it off.
I’m not expecting to pull off all three, exactly—I’m hoping that as I go on, it becomes legible enough for ‘nature to take care of itself’ (other people start exploring the questions as well because it’s become more tractable (meta note: wanting to learn how to have nature take care of itself is a very complexity scientist thing to want)) or that I find a better question to answer.
For the first one, I’m currently making a suite of long-running games/tasks to generate streams of data from LLMs (and some other kinds of algorithms too, like basic RL and genetic algorithms eventually) and am running some techniques borrowed from financial analysis and signal processing (etc) on them because of some intuitions built from experience with models as well as what other nearby fields do
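As one illustration of the kind of signal-processing probe I mean (a toy sketch, not my actual pipeline): sample autocorrelation is a cheap first test for memory in a stream of measurements, and it cleanly separates a memoryless stream from one with persistent dynamics:

```python
import random

def autocorr(series, lag):
    """Sample autocorrelation at a given lag: a cheap probe for
    memory/persistence in a stream of measurements."""
    n = len(series)
    mean = sum(series) / n
    var = sum((x - mean) ** 2 for x in series) / n
    cov = sum(
        (series[t] - mean) * (series[t + lag] - mean) for t in range(n - lag)
    ) / n
    return cov / var

random.seed(2)
# memoryless stream: i.i.d. Gaussian noise
memoryless = [random.gauss(0, 1) for _ in range(5000)]
# stream with memory: AR(1) process, each value leans on the previous one
ar1 = [0.0]
for _ in range(4999):
    ar1.append(0.9 * ar1[-1] + random.gauss(0, 1))

print(autocorr(memoryless, 1))  # near zero
print(autocorr(ar1, 1))         # near 0.9
```

Running a battery of probes like this (autocorrelation at many lags, spectral density, volatility clustering, etc.) over long-running model outputs is the basic shape of the measurement step.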
Maybe too idealistic, but I’m hoping to find signs of critical dynamics in models during certain kinds of tasks, and I’d also like to observe models with more memory dominate models with less (in terms of which model diverges more from its start state toward the other model’s), etc. Anthropic’s power laws for scaling are sort of unsurprising, in a certain sense, if you know how ubiquitous some kinds of relationships are given some kinds of underlying dynamics (e.g. cost-minimizing dynamics)
Also unsurprising from the comp-mech point of view I’m told.
I’m curious about the technical details here, if you’re willing to provide them (privately is fine too).
Yeah, I’d be happy to.
I’m working on a post about it as well, and I hope to make it so others can try experiments of their own, but I can DM you.