My guess is that in the case of AI warning shots there will also be alternative explanations like “Oh, the problem was just that this company’s CEO was evil, nothing more general about AI systems”.
I agree; that seems to be a significant risk. If we are lucky enough to get AI warning shots, it seems prudent to think about how to ensure they are recognized for what they are. This is a problem I haven’t given much thought to before.
But I find it encouraging that we can study warning shots in other fields to understand the dynamics of how such events are interpreted. As of now, I don’t think AI warning shots would change much, but I would add this potential for learning as a counter-argument. This seems analogous to the argument “EAs will get better at influencing the government over time” from another comment.