I don’t think the response to Covid should give us reason to be optimistic about our effectiveness at dealing with the threat from AI. Quite the opposite. Many of the measures taken were known to be useless from the start, like masks, while others were ineffective or harmful, like shutting down schools or giving vaccines to young people who were not at risk of dying from Covid.
Everything can be explained by the incentives our politicians face: they want to be seen to take important questions seriously while not upsetting their donors in the pharma industry.
I can easily imagine something similar happening if voters become concerned about AI: some ineffective legislation dictated by Big Tech.
I think most comments regarding the covid analogy miss the point made in the post. Leopold makes the case that there will be a societal moment of realization, not that the specific measures taken in response to covid were good and that this should give us hope.
Right now, talking about AI risk is like yelling about covid in Feb 2020.
I agree with this, and with there likely being a wake-up moment. This seems important to realize!
I think unless one has both an extremely fast takeoff model and doesn’t expect many more misaligned AI models to be released as capabilities increase before takeoff, one should expect at least one, and plausibly several, wake-up moments like we had with covid. In one way or another in this scenario, AIs are going to do something that their designers or operators didn’t want, without it yet leading to an existential catastrophe. I’m not clear on what will happen or how society will respond, but it seems likely that many people working on AI safety should prepare for this moment, especially those with a platform. This is when people will start to listen.
As for the arguments that specific responses won’t be helpful: I’m skeptical of both positive and negative takes made with any confidence, since the analogies to specific measures in response to covid or climate change don’t seem well grounded to me.
This. IIRC, by ~April 2020 some researchers and experts were asking why none of the prevention measures focused on improving ventilation in public spaces. By ~June the same was happening with the fairly clear evidence that covid was airborne rather than transmitted via surfaces (parks near me were closing off their outdoor picnic tables as a covid measure!).
And of course, we can talk about “warp speed” vaccine development all we like, but if we had had better public policy over the last 30 years, Moderna would likely have already been focusing on infectious disease instead of cancer and would have had multiple well-respected, well-tested, well-trusted mRNA vaccines on the market, so that the needed regulatory and physical infrastructure could have been truly ready to go in January when they designed their covid vaccine. We haven’t learned these lessons even after the fact. We haven’t improved our institutions for next time. We haven’t educated the public or our leaders. We seem to have decided to pretend covid was close to a worst-case scenario for a pandemic, instead of realizing that there can be far more deadly and far more rapidly spreading diseases.
AI seems...about the same to me in how the public is reacting so far? Lots of concerns about job losses or naughty words, so that’s what companies and legislators are incentivized to (be seen as trying to) fix, and most people either treat very bad outcomes as too outlandish to discuss, or treat not-so-very-small probabilities as too unlikely to worry about regardless of how bad they’d be if they happened.