The reaction seems consistent if people (in government) believe no warning shot was fired. AFAIK the official reading is that we experienced a zoonosis, so banning gain of function research would go against that narrative. It seems true to me that this should be seen as a warning shot, but smallpox and Ebola could have prompted this discussion as well and also failed to be seen as warning shots.
My guess is that in the case of AI warning shots there will also be alternative explanations, like “Oh, the problem was just that this company’s CEO was evil, nothing more general about AI systems”.
I agree; that seems to be a significant risk. If we are lucky enough to get AI warning shots, it seems prudent to think about how to ensure they are recognized for what they are. This is a problem that I haven’t given much thought to before.
But I find it encouraging that we can use warning shots in other fields to understand the dynamics of how such events are interpreted. As of now, I don’t think AI warning shots would change much, but I would add this potential for learning as a counter-argument. This seems analogous to the argument “EAs will get better at influencing the government over time” from another comment.
The reaction seems consistent if people (in government) believe no warning shot was fired. AFAIK the official reading is that we experienced a zoonosis, so banning gain of function research would go against that narrative.
Governments are also largely neglecting vaccine tech/pipeline investments, which protect against zoonotic viruses, not just engineered ones.
But also, the conceptual gap between ‘a virus that was maybe a lab leak, maybe not’ and ‘a virus that was a lab leak’ is much smaller than the gap between the sort of AI systems we’re likely to get a ‘warning shot’ from (if the warning shot is early enough to matter) and misaligned superintelligent squiggle maximizers. So if the government can’t make the conceptual leap in the easy case, it’s even less likely to make it in the hard case.
It seems true to me that this should be seen as a warning shot, but smallpox and Ebola could have prompted this discussion as well and also failed to be seen as warning shots.
If there were other warning shots in addition to this one, that’s even worse! We’re already playing in Easy Mode here.
If there were other warning shots in addition to this one, that’s even worse! We’re already playing in Easy Mode here.
Playing devil’s advocate: if the government isn’t aware that the game is on, it doesn’t matter that it’s on Easy Mode; its performance is likely to be poor regardless of the game’s difficulty.
I agree with the post’s sentiment that warning shots would currently not do much good. But I am, as of now, still somewhat hopeful that the bottleneck is getting the government to see and target a problem, not the government’s ability to act on an identified issue.