The problem with this argument is that there are costs to causing things to happen by spreading misinformation: you’re essentially biasing other people’s expected-utility evaluations by feeding them inaccurate data. Conclusions drawn from inaccurate data have side effects beyond the one you intended; in this example, some people would avoid flying and suffer additional costs. People are also likely to keep supporting the goals the conspiracy theory pushes towards past the point at which those goals actually have the greater expected utility, because the theory has distorted their probability estimates, causing bad decisions later.
It’s possible that, after factoring all this in, it could be worthwhile in some cases. But given the costs involved, I think that, prior to any deeper study of the situation, it would more likely be harmful than beneficial in this specific example.
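To make the biasing cost concrete, here is a minimal sketch in Python. All the numbers are illustrative assumptions (the value of a statistical life, the per-flight fatality risk, and the net benefit of flying are made up for the example, not real statistics); the point is only that an inflated probability estimate flips the expected-utility decision.

```python
# Toy expected-utility comparison: fly vs. stay home.
# Every number below is an assumption chosen for illustration.

VSL = 10_000_000         # assumed "value of a statistical life", in dollars
benefit_of_flying = 200  # assumed net benefit of taking the flight

def eu_of_flying(crash_prob):
    """Expected utility of flying, given a believed crash probability."""
    return benefit_of_flying - crash_prob * VSL

true_p = 1e-7    # assumed true per-flight fatality risk
biased_p = 1e-4  # inflated estimate after absorbing the conspiracy theory

eu_of_flying(true_p)    # 199.0: flying is clearly worth it
eu_of_flying(biased_p)  # -800.0: the biased believer stays home
```

With the true probability the flight is worth taking; with the conspiracy-inflated one it isn’t, so the believer pays the avoidance cost for nothing.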
I actually agree with most of your argument and, probably, with the conclusion. I just wanted to show the shades of gray omitted in the original post.
Actually, I can restate the argument I quoted so that it is technically true. Or I can restate it as a full-blown conspiracy theory. Either way, the restatements can be kept quite close in terms of what actually happens. I think the lightest reframings are net-positive perspective changes (though somewhat risky), by the way.
Scenario A. Aircraft manufacturers know full well what is needed to prevent most accidents: both the ones now classified as technical failures caused by bad maintenance and the ones claimed to be human error that are sometimes actually technical malfunctions (successfully covered up). They don’t implement many of the safety features known to them because of the cost, and sometimes deliberately omit cheap safety features to speed up renewal of the aircraft fleet. They reduce robustness slowly over time in the hope that the public will get fed up with disasters and demand that “something be done”. They already know how to implement whatever new statutes would require, but implementing state-mandated safety features will be a good excuse to raise prices a lot, with a big increase in profit margins.
Scenario B. Technically, the very ability of a plane to be turned onto a collision course with a well-known, big, non-moving object (be it a mountain, the WTC, or anything else) is a failure of safety measures and navigation. It should be possible to deliver such protection, and if it is not yet possible, it should be the top priority, way above “Internet on board” and the like. If considering 9/11 a navigation failure makes you not want to fly, well, there are many causes that ultimately lead to risky manoeuvres. If those can still lead to a disaster in the twenty-first century, shifting blame doesn’t help: either you accept the risk or you don’t.
Scenario C. http://en.wikipedia.org/wiki/2002_%C3%9Cberlingen_Mid-Air_Collision illustrates this better than 9/11 does. There is a coordination problem: safety protocols that would work fine on their own sometimes lead to a disaster when mixed. The collision involved a mistake by the air traffic controller; the Tu-154 crew knew that the automatic collision-avoidance system and the controller’s commands contradicted each other, but Russian rules give precedence to the controller while European rules give precedence to the automatic system. Also, the transportation market is such that replacing a 1-in-N chance of death with 1-in-2N isn’t easy to prove, and it doesn’t easily lead people to pay $50 more for a flight.
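The precedence conflict can be sketched in a few lines of hypothetical Python. The rule names and manoeuvres are a deliberate simplification of the actual Überlingen sequence, kept only to show the structure of the coordination failure: each precedence rule is safe when everyone uses it, but mixing them sends both aircraft into the same manoeuvre.

```python
# Toy model of the rule-precedence conflict: the collision-avoidance system
# issues coordinated, opposite advisories, but crews resolve a contradiction
# between it and the controller differently depending on their rulebook.

def maneuver(tcas_advisory, controller_order, precedence):
    """What the crew actually flies under its national precedence rule."""
    if controller_order is not None and precedence == "controller":
        return controller_order
    return tcas_advisory

# Überlingen-like situation: the automatic system tells one plane to climb
# and the other to descend; the controller, seeing the conflict late,
# orders the first plane to descend.
a = maneuver("climb", "descend", precedence="controller")  # controller-first rules
b = maneuver("descend", None, precedence="tcas")           # automatics-first rules

# Mixed rulebooks: both crews descend, straight into each other.
# Had both followed the automatic system, one would have climbed.
```

Either convention alone keeps the planes apart; only the mixture puts them on the same trajectory, which is exactly the "safe on their own, disastrous when mixed" property described above.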