I think you make a fair point: one common trope when thinking about how AI can go wrong is military drones and weapons. But in this story, the drones are arguably a less important development than the propaganda manipulation enabled by the multimodal “GPT-4”.
Which is not to say that the drones are entirely without causal blame: they enable genocide at an unprecedented scale and remove the possibility for humans to disobey orders.
Propaganda existed long before social media and was instrumental in the worst atrocities of the 20th century; in this regard I fail to see what’s so importantly different about using AI to do it. In particular, I do not expect advanced language models to be especially effective at it, at least not for long. Instead, if scaling laws hold and GPT-n actually turns out to be technically impressive, I expect GPT-spamming to lead to a rapid decline in social media usage and anonymous communication in general (which would paradoxically be a good thing imo). You can’t dump a million tons of gold on the market and expect the price to hold, even if it’s really, really authentic gold.
I find the point about drones enabling genocide at unprecedented scale to be the much more important one. Unfortunately, I think the story fails to capture it, since China is just about the worst setting for the story: a country that A) is already a big global player even without drones/AI and B) already has the capacity to tightly control its populace and carry out atrocities through human operators.
I think a story that better demonstrates the game-changing effect of drones is one in which a previously unremarkable group or organization suddenly acquires unexpectedly large influence over the world through violent means, bypassing the traditional requirement of social maneuvering to get a large number of human actors to do your bidding. The recent real-world conflict in Nagorno-Karabakh comes closer as an example.
I’d suggest a closer reading of historical narratives before talking about “genocide at unprecedented scale”. The scary thing about AI and truly autonomous systems is not that they are more efficient or larger in scale at horrors than humans are, but that they can cause unexpected and unstoppable genocide (or slavery, torture, etc.).
Thank you for reading it!
The drones seem to be doing it retail. In a former time, it was done by the trainload, and “a transport was dealt with in two hours”.