This is a shorter 30-min intro to complexity science by David Krakauer that I really liked: https://www.youtube.com/watch?v=FBkFu1g5PlE&t=358s
It’s true that the way some people define and talk about complex systems can be frustratingly vague and uninformative, and I agree that Krakauer’s way of talking about it gives a big-picture idea that, in my view, is genuinely appealing and informative.
What I’d really like to see a lot more of is explicit model comparison, where problems are viewed through the complex systems lens vs. other lenses. Yes, there are individual examples where complexity economics is contrasted with traditional economics, but I’m thinking of something far more comprehensive and systematic: taking a problem that can be approached with various methodologies, implementing them all within the same computational environment, and investigating what answers each of them gives, ideally with a clear idea of what makes one answer “better” than another. This would probably also be quite a research (software) engineering task.
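To make that a bit more concrete, here’s a minimal sketch of the kind of side-by-side setup I have in mind. Everything in it (the toy supply/demand problem, the parameters, the seller update rule) is made up purely for illustration: a classical equilibrium calculation and an agent-based simulation of the same market, living in one environment so their answers can be compared directly.

```python
import random

# Toy linear market: demand q_d = a - b*p, supply q_s = c + d*p.
# All parameters are illustrative, not taken from any real study.
a, b, c, d = 100.0, 2.0, 10.0, 1.0

# Lens 1 (classical): solve for the market-clearing price analytically.
p_equilibrium = (a - c) / (b + d)  # = 30.0 for these parameters

# Lens 2 (complex systems): boundedly rational sellers adjust their
# individual prices in response to excess demand; we observe where the
# population of prices settles, and how dispersed it remains.
def agent_based_price(steps=5000, n_sellers=50, learning_rate=0.01, seed=0):
    rng = random.Random(seed)
    prices = [rng.uniform(10.0, 60.0) for _ in range(n_sellers)]
    for _ in range(steps):
        i = rng.randrange(n_sellers)
        p = prices[i]
        excess_demand = (a - b * p) - (c + d * p)
        prices[i] = max(0.0, p + learning_rate * excess_demand)
    return sum(prices) / n_sellers

p_simulated = agent_based_price()

# The payoff is the comparison itself: where the lenses agree, where they
# diverge, and what each divergence (e.g. persistent price dispersion,
# sensitivity to the seed) reveals about the problem.
print(f"equilibrium price:      {p_equilibrium:.2f}")
print(f"agent-based mean price: {p_simulated:.2f}")
```

Obviously trivial as stated; the hard (and interesting) part is doing this for non-toy problems, with each methodology’s assumptions made explicit and the criterion for a “better” answer pinned down in advance.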
Yeah, I’d be pretty keen to see more work trying to do this for AI risk/safety questions specifically: contrasting what different lenses “see” and emphasize, and what productive critiques they have to offer each other.
Over the last couple of years, valuable progress has been made towards stating the (more classical) AI risk/safety arguments more clearly, and I think that’s very productive for improving the discourse (including critiques of those ideas). I think we’re a bit behind on developing similarly clear articulations of the complex systems/emergent risk/multi-multi/“messy transitions” angle on AI risk/safety, and that progress on this would be productive on many fronts.
If I’m not mistaken, there is some work on this in progress from CAIF (?), but I think more is needed.