The thing that made AI risk “real” for me was a report of an event that turned out not to have happened (seemingly just a miscommunication). My brain was already very concerned, but my gut had not caught up until then. That said, I do not think this should be taken as a norm, for three reasons:
1. Creating hoaxes in support of a cause is a good way to turn a lot of people against that cause.
2. In general, if you feel a need to fake evidence for your position, that is itself weak evidence against your position.
3. I don’t like dishonesty.
If AI capabilities continue to progress and if AI x-risk is a real problem (which I think it is, credence ~95%), then I hope we get a warning shot. But I think a false flag “warning shot” has negative utility.