https://intelligence.org/2017/10/13/fire-alarm/
How is that relevant? It’s about whether AI risk will be mainstream. I’m thinking about governance interventions by this community, which doesn’t require the rest of the world to appreciate AI risk.
I assumed, evidently incorrectly, that the point was to prompt government planners and policymakers with clear ideas now, and say that they will be relevant once X happens—and I don’t think that there is an X such that they will be convinced, short of actual catastrophe.
It now sounds like you're looking to do conditional planning for future governance interventions. I'm not sure that makes sense: it seems pretty clear that groundwork and planning on governance splits between near-term / fast takeoff and later / slow takeoff, and we've been getting clear indications that we're nearer to the former than the latter. But we aren't going to develop the interventions materially differently based on specific metrics, since the worlds where almost any of the interventions are effective aren't going to be sensitive to that level of detail.
Interesting, thanks.
(I agree in part, but (1) planning for far/slow worlds is still useful, and (2) I meant more that metrics or model evaluations would be part of an intervention, e.g. incorporated into safety standards, than that metrics would inform what we try to do.)