Upvoted for being an important idea, but I actually disagree with the advice. The relationship of ideas to action is exceedingly complex, and I strongly doubt (but do not know how to test) that the idea simply hadn’t occurred to someone who wanted attention through harm.
I find it much more likely that there’s large uncertainty in the effectiveness (in terms of attention to be had) of uncommon attacks, and when an attack isn’t already in the public eye, it’s known but not considered a reasonable mechanism. Much as cryonics is weird and uncertain even for people who would like to be revived, poisoning medicine (dangerous idea: why only medicine, not other foods?) was weird and uncertain until it had been shown to work.
I suspect the dangerous information is that it has succeeded at least once, and gotten a lot of press attention. This information is much harder (and less desirable) to suppress.
In the software world, ideas are rampant and cheap. Execution of the correct idea is the path to success. I expect it’s similar as a terrorist, except there are way fewer people to help you choose, refine, and change your ideas, so you only get one shot (as it were).
I also note a similarity to the disclosure debate about computer vulnerabilities—there’s a tension between publishing so that potential victims can protect themselves or watch for attacks, vs. keeping quiet so vendors can fix the underlying bugs before many attackers know of them. There are a LOT of factors that go into these decisions; it’s not as simple as “don’t spread harmful information”.
Another example, which I don’t know whether it supports my position or yours: Tom Clancy published Debt of Honor in 1994, which included a near-decapitation of the US government by a pilot-turned-terrorist flying his 747 into the Capitol building. Only 7 years later, real-life terrorists did something very similar. We immediately instituted systems to prevent repeats (and a bunch of systems that added irritation and did not protect anything), and there have been no copycats for 17 years.
It seems that implementing systems to prevent hijacking of planes is easier, given how airports and air travel work, than making the changes that would be needed to stop vehicles being used in attacks. This seems similar to the debate over whether the Slaughterbots video and the campaign to stop autonomous weapons will succeed. Supporters point to nuclear weapons policy as the success story, but it may not be the most useful comparison, because nuclear weapons are a much easier technology to restrict.
It is worth considering that information is easier to move now, and that there are groups dedicated to finding and implementing new strategies for attacks. I think it is more likely that we are in a ‘loose lips sink ships’ regime now than we were then.
From an infosec point of view, you tend to rely on responsible disclosure. That is, you tell the people who will be most affected, or who can solve the problem for others; they create countermeasures, and then you release those countermeasures to everyone else (which gives away the vulnerability as well), who should be in a position to quickly update/patch.
Otherwise you are relying on security through obscurity: people may be vulnerable and not know it.
There doesn’t seem to be a similar pipeline for non-computer security threats.
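The disclosure process described above can be sketched as a tiny timeline model. This is a purely illustrative sketch: the 90-day embargo window and the function names are my assumptions, loosely modeled on common industry practice, not any particular program’s actual policy.

```python
from datetime import date, timedelta
from typing import Optional

# Illustrative assumption: a fixed 90-day vendor-notification window,
# similar in spirit to common coordinated-disclosure policies.
EMBARGO = timedelta(days=90)

def publish_date(reported: date, patched: Optional[date]) -> date:
    """Publish when a patch ships, or when the embargo expires,
    whichever comes first."""
    deadline = reported + EMBARGO
    if patched is not None and patched < deadline:
        return patched
    return deadline

# A cooperative vendor that patches quickly shortens the embargo:
print(publish_date(date(2024, 1, 1), date(2024, 2, 1)))  # 2024-02-01
# An uncooperative vendor gets full disclosure at the deadline anyway:
print(publish_date(date(2024, 1, 1), None))              # 2024-03-31
```

The point of the deadline branch is exactly the tension discussed above: disclosure happens eventually whether or not a fix exists, which is what keeps the scheme from collapsing into security through obscurity.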
Even for responsible infosec disclosure, the embargo is always time-limited, and there are plenty of cases of publishing before a fix, whether because the vendors are not cooperating or because the exploit gains attention through other channels. And even when it works, it’s mostly limited to fairly concrete, proven vulnerabilities—there’s no embargo on wild, unproven ideas.
There doesn’t seem to be a similar pipeline for non-computer security threats.
Nor is there anyone likely to be able to help during the period of limited disclosure, and most of the ideas aren’t concrete and actionable enough to expect any good from publishing to a limited audience before full disclosure.
The non-computer analog for bug fixes is product recalls. I point out that recalling defective hardware is hideously expensive; so much so that even after widespread public outcry, it often requires lawsuits or government intervention to motivate action.
As for the reporting channel, my guess is warranty claims? Physical things come with guarantees that they will not fail in unexpected ways. Though I notice there isn’t much of a parallel to bug hunting at the physical level.
If I were Tom Clancy I hope that I would not have published Debt of Honor. I don’t know whether terrorists were inspired by it, but at least for me it’s pretty clearly in the “not worth the risk” category.
In some respects the 9/11 attacks can be considered similar to the Tylenol incident (though obviously much more devastating): an attack took place using a method that had been theoretically viable for a long time, prompting immediate corrective action.
One of the reasons those attacks were so successful is that air hijacks were relatively common, but most led “only” to hostage scenarios, demands for the release of political prisoners, etc—in point of fact the standard protocol was to cooperate with hijackers, and as Wikipedia says “often, during the epidemic of skyjackings in the late 1960s and early 1970s, the end result was an inconvenient but otherwise harmless trip to Cuba for the passengers.” Post-9/11, hijacks began being taken much more seriously.
(There were actually many terrorist attempts against airplanes in the time shortly after 9/11, though most were not hijack attempts—the infamous “shoe bomber” who attempted to destroy an aircraft in flight a few months later, only to be beaten and captured by other passengers, was maybe the most well known.)
I hope that I would not have published Debt of Honor.
There have been an enormous number of books, movies, etc with various forms of realistic plots. Are you saying this genre shouldn’t exist, that authors should make sure their plots are not realistic, or that there’s something unusual about this plot in particular that should have kept Clancy from publishing?
If I were Tom Clancy I hope that I would not have published Debt of Honor. I don’t know whether terrorists were inspired by it, but at least for me it’s pretty clearly in the “not worth the risk” category.
I get the argument, but then I wonder where it stops. Don’t direct A Clockwork Orange because there’s a high likelihood of copycat murders? Stop production on everything where someone might copy something harmful?
I think I would have published. A potentially productive question is “With 7 years’ warning, why did bad guys try it before good guys prevented it?” Was it a question of misaligned incentives (where the good guys effectively let it happen because the public punishes for inconvenience), or of different estimates of success (the good guys thought it’d never happen, although it was, in retrospect, extremely effective)?
Keeping ideas/information obscure is unlikely to work—the more motivated side is going to get it first, and it’s likely to be more effective the first time it’s used than if many people anticipated it (or at least understood the vulnerability).
“With 7 years warning, why did bad guys try it before good guys prevented it?”
This came up around 9/11. Good guys have too many things to prevent to focus on any one random hypothetical more than any other. Gwern has some writing on terrorism not being about terror. I leave it to the reader to find the link.