Yeah, would be pretty keen to see more work trying to do this for AI risk/safety questions specifically: contrasting what different lenses “see” and emphasize, and what productive critiques they have to offer to each other.
Over the last couple of years, valuable progress has been made towards stating the (more classical) AI risk/safety arguments more clearly, and I think that’s very productive for improving the discourse (including critiques of those ideas). I think we’re a bit behind on developing similarly clear articulations of the complex systems/emergent risk/multi-multi/“messy transitions” angle on AI risk/safety, and progress on that would be productive on many fronts.
If I’m not mistaken there is some work on this in progress from CAIF (?), but I think more is needed.