Do you know this textbook? I’d say it’s a good overview of the “complex systems modelling toolbox”.
I will note that I mostly bounced off the mentioned textbook when I was trying to understand what complex systems theory is. Habryka and I may just have different cruxes, because he seems very concerned here about the methodology of the field, and the book definitely is a collection of concepts and math that complex systems theorists apparently found useful, but it wasn’t big-picture enough for me when I was just very confused about what the actual goal of the field was.
I decided to listen to the podcast, and found it a far better pitch for the field than the textbook. Previously, when I tried to figure out what complex systems theorists were doing, I was never able to get an explanation that took a stand on what wasn’t subject to complex systems theory, other than, of course, any simplification whatsoever of the object you’re trying to study.
For example, the textbook has the following definition:
which, as far as I can tell, you can express just about anything you want in terms of.
In contrast, David gives this definition on the podcast (bolding my own):
0:06:45.9 DK: Yeah, so the important point is to recognize that we need a fundamentally new set of ideas where the world we’re studying is a world with endogenous ideas. We have to theorize about theorizers and that makes all the difference. And so notions of agency or reflexivity, these kinds of words we use to denote self-awareness or what does a mathematical theory look like when that’s an unavoidable component of the theory. Feynman and Murray both made that point. Imagine how hard physics would be if particles could think. That is essentially the essence of complexity. And whether it’s individual minds or collectives or societies, it doesn’t really matter. And we’ll get into why it doesn’t matter, but for me at least, that’s what complexity is. The study of teleonomic matter. That’s the ontological domain. And of course that has implications for the methods we use. And we can use arithmetic but we can also use agent-based models, right? In other words, I’m not particularly restrictive in my ideas about epistemology, but there’s no doubt that we need new epistemology for theorizers. I think that’s quite clear.
And he right away gives the example of a hurricane as a complex chaotic process that he would claim is not a complex system, in the sense he’s using the term.
No, I don’t think [a hurricane] counts. I think it’s not useful. There was in the early days at SFI this desire to distinguish between complex systems and complex adaptive systems. And I think that’s just become sort of irrelevant. And in order for the field to stand on its own, I think we have to recognize that there is a shared very particular characteristic of all complex systems. And that is they internally encode the world in which they live. And whether that’s a computer or a genome in a microbe, or neurons in a brain, that’s the coherent common denominator, not self-organizing patterns that you might find for example, in a hurricane or a vortex or, those are very important elements, but they’re not sufficient.
In this framing, it becomes very clear why one would think biology, economics, evolution, neuroscience, and AI would be connected enough to form a field out of studying the intersection, and why many agent foundations people would be gravitating towards it.
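To make that distinction concrete for myself, here is a toy Python sketch, entirely my own illustration (hypothetical names throughout, not anything from the podcast or textbook): a “hurricane-like” reactive process that just gets pushed around by its inputs, next to an agent that maintains an internal encoding of a hidden feature of its environment and acts on that encoding.

```python
# Toy illustration (my own, hypothetical): a reactive process vs. a system that
# internally encodes its environment, in roughly Krakauer's sense.
import random

def reactive_step(state, forcing):
    """A 'hurricane-like' process: the next state is a fixed function of the
    current state and external forcing; nothing inside it models the world."""
    return 0.9 * state + forcing

class EncodingAgent:
    """An agent that keeps an internal estimate of a hidden environmental bias
    and acts on that estimate, i.e. it carries a (crude) model of its world."""
    def __init__(self):
        self.estimate = 0.0  # internal encoding of the environment

    def observe(self, signal):
        # Nudge the internal encoding toward what was observed.
        self.estimate += 0.1 * (signal - self.estimate)

    def act(self):
        # Behaviour depends on the internal model, not just the latest input.
        return 1.0 if self.estimate > 0 else -1.0

# Drive both with the same noisy environment that has a hidden bias.
bias = 0.5
state, agent = 0.0, EncodingAgent()
for _ in range(200):
    signal = bias + random.gauss(0, 1)
    state = reactive_step(state, signal)
    agent.observe(signal)

print("reactive state:", round(state, 2))                      # just pushed around by noise
print("agent's internal estimate:", round(agent.estimate, 2))  # roughly tracks the hidden bias
print("agent's action:", agent.act())
```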
It’s true that the way some people define and talk about complex systems can be frustratingly vague and non-informative, and I agree that Krakauer’s way of talking about it gives a big picture idea that, in my view, has some appeal/informativeness.
What I’d really like to see a lot more of is explicit model comparison, where problems are seen through the complex systems lens vs. other lenses. Yes, there are individual examples where complexity economics is contrasted with traditional economics, but I’m thinking about something far more comprehensive and systematic: taking a problem that can be approached with various methodologies, implementing them all within the same computational environment, and investigating what answers each of them gives, ideally with a clear idea of what constitutes a “better” answer compared to another. This would probably also be quite a research (software) engineering task.
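To gesture at the shape of what I have in mind, here is a minimal sketch (the toy “market” and all names are hypothetical, my own illustration, not taken from any of the works discussed above): one shared data-generating process, two lenses implemented side by side in the same computational environment, and an explicit scoring rule so that “better” is pinned down in advance.

```python
# Hypothetical sketch of a "same problem, several lenses" comparison harness.
import random
import statistics

def simulate_market(n_agents=200, n_steps=50, seed=0):
    """Toy ground-truth process: agents adjust prices via imitation plus noise."""
    rng = random.Random(seed)
    prices = [rng.uniform(0.5, 1.5) for _ in range(n_agents)]
    for _ in range(n_steps):
        mean_price = statistics.mean(prices)
        prices = [p + 0.1 * (mean_price - p) + rng.gauss(0, 0.05) for p in prices]
    return prices

# The shared question: how dispersed are prices at the end of the process?
def equilibrium_lens():
    """'Traditional' style answer: prices converge to one equilibrium, so dispersion ~ 0."""
    return 0.0

def agent_based_lens(seed=1):
    """'Complex systems' style answer: simulate heterogeneous agents and read the
    dispersion off the resulting distribution."""
    return statistics.stdev(simulate_market(seed=seed))

def score(prediction, truth):
    """One explicit notion of a 'better' answer: absolute error on dispersion."""
    return abs(prediction - truth)

truth = statistics.stdev(simulate_market(seed=42))  # held-out "world"
for name, lens in [("equilibrium", equilibrium_lens), ("agent-based", agent_based_lens)]:
    prediction = lens()
    print(f"{name} lens predicts dispersion {prediction:.3f}, error {score(prediction, truth):.3f}")
```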
Yeah, would be pretty keen to see more work trying to do this for AI risk/safety questions specifically: contrasting what different lenses “see” and emphasize, and what productive critiques they have to offer each other.
Over the last couple of years, valuable progress has been made towards stating the (more classical) AI risk/safety arguments more clearly, and I think that’s very productive for leading to better discourse (including critiques of those ideas). I think we’re a bit behind on developing clear articulations of the complex systems/emergent risk/multi-multi/“messy transitions” angle on AI risk/safety, and also that progress on this would be productive on many fronts.
If I’m not mistaken there is some work on this in progress from CAIF (?), but I think more is needed.
This is a shorter 30-min intro to complexity science by David Krakauer that I really liked: https://www.youtube.com/watch?v=FBkFu1g5PlE&t=358s