There’s also the case of harmful warning shots: for example, it might turn out that, upon seeing an AI do something scary but impressive, enough people/orgs/states react with “woah, AI is powerful, I should build one!” or “I guess we’re doomed anyway, might as well stop thinking about safety and just enjoy making a profit with AI while we’re still alive” to offset the positive effect. This is exactly the kind of thing that could happen in our civilization.
I agree, that’s an important point. I probably worry more about your first possibility, since we are already seeing this effect today, and less about the second, which would require a level of resignation I’ve rarely seen. Responsible entities would likely try to do something about it, but the “we’re doomed, let’s profit” reaction might still take hold when:
The warning shot comes from a small player, and a bigger player feels urgency or feels threatened in a situation where it has little control.
There is no clear locus of responsibility, and the many entities at the frontier each think the others are responsible and that there’s no way to stop them.
Another case of a harmful warning shot is if the lesson learnt from it is “we need stronger AI systems to prevent this”. This probably goes hand in hand with poor credit assignment.