How much is the “existential risk” movement being used as a marketing scam—whose primary purpose is to move power and funds from the paranoid to the fear-mongers?
I think Eliezer once pointed out that if cryonics were a scam, it would have much better marketing and be much more popular. A similar principle applies here: if organizations like SIAI and FHI were “marketing scam[s]” taking advantage of the profitable nature of predicting apocalypses, a lot more people would know about them (and there would not be such a surprising concentration of smart people supporting them). An organization interested in exploiting gullible people’s doomsday biases would not look like SIAI or FHI. Hell, even if some group wanted to make big money off of predicting AI doom in particular, they could do it a lot better than SIAI does: people have all these anthropomorphic intuitions about “evil robots” and there are all these scary pop-culture memes like Skynet and the Matrix, and SIAI foolishly goes around dispelling these instead of using them to their lucrative advantage!
(Also, if I may paraphrase Great Leader one more time: this is a literary criticism, not a scientific one. There’s no law that says the world can’t end, so if someone says that it might actually end at some point for reasons x, y, and z, you have to address reasons x, y, and z; pointing out stylistic/thematic but non-technical similarities to previous failed predictions is not a valid counterargument.)
Hell, even if some group wanted to make big money off of predicting AI
doom in particular, they could do it a lot better than SIAI does [...]
People have tried much the same plan before, you know. Hugo de Garis was using much the same fear-mongering marketing strategy to draw attention to himself before the Singularity Institute came along.
Hugo de Garis predicts a future war between AI supporters and AI opponents that will cause billions of deaths. That is a highly inflammatory prediction, because it fits neatly with human instincts about ideological conflicts and science-fiction-style technology.
The prediction that AIs will be dangerously indifferent to our existence unless we take great care to make them otherwise is not an appeal to human intuitions about conflict or important causes. Eliezer could talk about uFAI as if it were approximately like Skynet and draw substantially more (useless) attention, while still advocating for his preferred course of research. That he has not done so is evidence that he is more concerned with representing his beliefs accurately than with attracting media attention.
People have tried that too. In 2004 Kevin Warwick published “March of the Machines”. It presented an apocalyptic view of what the future holds for mankind, with superior machines out-competing obsolete humans and crushing them like ants.
Obviously some DOOM mongers will want their vision of DOOM to be as convincing and realistic as possible. The more obviously fake the visions of DOOM are, the fewer people believe them, and the weaker the associated marketing. Making DOOM seem as plausible as possible is a fundamental part of the DOOM monger’s trade.
The Skynet niche, the Matrix niche, the 2012 niche, the “earth fries” niche, the “alien invasion” niche, the “asteroid impact” niche, the “nuclear apocalypse” niche, and the “deadly plague” niche are all already being exploited by other DOOM mongers—in their own way. Humans just love a good disaster, you see.
I think Eliezer once pointed out that if cryonics were a scam, it would have much better marketing and be much more popular. A similar principle applies here [...]
Presumably that was a joke. That is an illogical argument with holes in it big enough to drive a truck through.