I think most comments regarding the covid analogy miss the point made in the post. Leopold's case is that there will be a societal moment of realization, not that the specific measures taken in response to covid were good and should give us hope.
> Right now talking about AI risk is like yelling about covid in Feb 2020.
I agree with this & with there likely being a wake-up moment. This seems important to realize!
I think that unless one both has an extremely fast takeoff model and doesn't expect many more increasingly capable misaligned AI models to be released before takeoff, one should expect at least one, plausibly several, wake-up moments like the one we had with covid. In one way or another, AIs are going to do something their designers or operators didn't want, without it yet leading to an existential catastrophe. I'm not clear on what exactly will happen or how society will respond, but it seems likely that many people working on AI safety should prepare for this moment, especially those with a platform. This is when people will start to listen.
As for the arguments that specific responses won't be helpful: I'm skeptical of both positive and negative takes made with much confidence, since analogies to specific measures taken in response to covid or climate change don't seem well grounded to me.