I really dislike the term “warning shot,” and I’m trying to get it out of my vocabulary. I understand how it came to be a term people use. But if we think it might actually be something that happens, and when it happens it plausibly and tragically results in the deaths of many folks, isn’t the right term “mass casualty event”?
I think many mass casualty events would be warning shots, but not all warning shots would be mass casualty events. I think an agentic AI system getting most of the way towards escaping containment or a major fraud being perpetrated by an AI system would both be meaningful warning shots, but wouldn’t involve mass casualties.
I do agree with what I think you are pointing at, which is that there is something Orwellian about the “warning shot” language. Like, in many of these scenarios we are talking about large negative consequences, and it seems good to have a word that owns that (in particular inasmuch as people are thinking about making warning shots more likely before an irrecoverable catastrophe occurs).
I totally think it’s true that there are warning shots that would be non-mass-casualty events, to be clear, and I agree that the scenarios you note could maybe be those.
(I was trying to use “plausibly” to gesture at a wide range of scenarios, but I totally agree the comment as written isn’t clearly meaning that).
I don’t think folks intended anything Orwellian, just sort of something we stumbled into, and heck, if we can both be less Orwellian and be more compelling policy advocates at the same time, why not, I figure.
I think a lot of people losing their jobs would probably do the trick, politics-wise. For most people the crux is “will AIs be more capable than humans”, not “might AIs more capable than humans be dangerous”.
You know, you’re not the first person to make that argument to me recently. I admit that I find it more persuasive than I used to.
Put another way: “will AI take all the jobs” is another way of saying* “will I suddenly lose the ability to feed and protect those I love.” It’s an apocalypse in microcosm, and it’s one that doesn’t require a lot of theory to grasp.
*Yes, yes, you could imagine universal basic income or whatever. Do you think the average person is Actually Expecting to Get That?