I agree that there often seems to be something very shallow about the methods, but I don’t necessarily hold that against them. Many very useful results come from very shallow claims, and my impression is that complexity science is often pretty useful at its best; it’s a mistake to judge the field by its average performers or by the current and past hype.
Compared to the work done by the alignment-style agent-foundationists, you’ll probably be disappointed by the “deepness”, but I do think you’ll be impressed by the applicability.
The test of a good idea is how often you find yourself coming back to it despite not thinking it was all that useful when you first learned it. This has happened to me many times with methods from complex systems theory.
Deep learning/AI was historically bottlenecked by things like
(1) anti-hype (when single-layer perceptrons couldn’t do XOR and ~everyone just sort of gave up collectively)
(2) lack of huge amounts of data/ability to scale
I think complexity science is in an analogous position. In its case, the ‘anti-hype’ probably came from a few people (physicists, I’d guess) saying that emergence or the edge of chaos is woo and everyone rolling with it, leaving the field inert. Likewise, its version of ‘lack of data’ is that techniques like agent-based modeling were studied with tools like NetLogo, which are extremely simple. But we have deep learning now, and that bottleneck is lifted. It’s maybe a matter of time before more people realize you can revisit the phase space of techniques like that with new tools.
As a quick example: John Holland made a distilled model of complex systems called “Echo”, which he describes in Hidden Order, and if you download NetLogo you can run it yourself. It’s a bit cute, actually.
But anyway, the point is that this was the best people could do at the time: there was an acknowledgement that some systems can’t be understood analytically in tractable ways, and that predicting or controlling them would benefit from algorithmic approaches, so an attempt was made by hard-coding rules for agents. Now that we have deep learning, we can build beefed-up artificial ecologies of our own to empirically study the external dynamics of systems. It still demands theoretical advances, though (like figuring out parameterized, generalized forms of physics equations and fitting them to empirical data, so that the system can then be modeled with deep-learning ecologies).
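To make the contrast concrete, here’s a minimal, hypothetical sketch (the rules, features, and weights are made up for illustration, not Holland’s actual Echo mechanics): the same agent decision written once as a hard-coded NetLogo-style rule, and once as a tiny parameterized policy that could in principle be fit to data or evolved instead of hand-tuned.

```python
import random

# Style 1: hard-coded behaviour, NetLogo/Echo-style (toy rules, not Echo's real ones).
def rule_based_action(energy, nearby_food):
    # Fixed, hand-written logic: forage when hungry, reproduce when flush.
    if energy < 5 and nearby_food > 0:
        return "forage"
    elif energy > 10:
        return "reproduce"
    return "wander"

# Style 2: the same decision as a tiny parameterized policy.
# The weights below are placeholders; in the "deep learning ecology" framing
# they would be learned from data rather than written down by hand.
def policy_action(energy, nearby_food, weights):
    scores = {action: w[0] * energy + w[1] * nearby_food + w[2]
              for action, w in weights.items()}
    return max(scores, key=scores.get)

weights = {
    "forage":    (-0.8,  0.6,  4.0),
    "reproduce": ( 0.7, -0.1, -6.0),
    "wander":    ( 0.0,  0.0,  0.0),
}

# Roll both agents through the same crude environment and compare their choices.
for step in range(5):
    energy, food = random.uniform(0, 15), random.randint(0, 3)
    print(step, rule_based_action(energy, food), policy_action(energy, food, weights))
```

The point of the sketch is only the shape of the upgrade: once the behaviour lives in parameters rather than in if-statements, you can scale the agents and fit them to empirical data.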
If people find that the methods/ideas are lacking, they might be exploring the field in a suboptimal way or not investigating thoroughly. One thing I do is make an initial attempt to define things myself as best I can (e.g. try to formalize/locate the sharp left turn, the edge of chaos, emergence, etc., and what it’d take to understand them), which adjusts my attention mechanism when looking for research and lets me find niche content that’s surprisingly relevant (in my opinion). If you come at the field with far too much skepticism, you’re almost making yourself unnecessarily blind.
The difference between doing this and not might be substantial: if you don’t, you might look at the surface level of the field and just see hand-wavey attempts at connecting a bunch of disparate things; if you do, you might notice that ‘robust delegation’, ‘modularity’, ‘abstractions’, ‘embedded agency’, etc. have already been investigated from a number of angles, and find public alignment research almost boring, not-even-wrong, or framed and approached in unprincipled ways by comparison (which I don’t blame anyone for; there’s something very anti-memetic about studying the hard parts of alignment theory, and it leaves the field impoverished in size and diversity).
Any favourite examples?
I have been served well in the past by trying to re-frame problems in terms of networks, as an example.
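To illustrate (with a made-up toy example, not something from the exchange above): a question like “which service is riskiest to change?” becomes mechanical once you re-frame it as reachability on a dependency network.

```python
from collections import deque

# Hypothetical service-dependency edges: "db" -> "auth" means auth depends on db,
# so breaking db can ripple forward to auth.
edges = {
    "db":       ["auth", "billing"],
    "auth":     ["api"],
    "billing":  ["api"],
    "cache":    ["api"],
    "api":      ["frontend"],
    "frontend": [],
}

def downstream(graph, start):
    """Everything reachable from `start`, i.e. its blast radius."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

# Rank services by how much of the system a failure in them could reach.
for service in edges:
    reach = downstream(edges, service)
    print(service, len(reach), sorted(reach))
```

The network framing itself does the work here: once the problem is a graph, off-the-shelf ideas like reachability, centrality, and bottlenecks apply directly.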