There’s a lot of talk about “existential risk” on this site—perhaps because the site was started by a group who are hoping to SAVE HUMANITY from the END OF THE WORLD!
DOOM is an ancient viral phenomenon, which has been the subject of many a movie, documentary and sociological evaluation—e.g. The End of the World Cult and http://www.2012movie.org/
For more details see:
http://en.wikipedia.org/wiki/Doomsday_cult
http://en.wikipedia.org/wiki/Apocalypticism
The END OF THE WORLD is also one of the most often repeated inaccurate predictions of all time. The “millennium and end-of-the-world-as-we-know-it prophecies” site lists the failed predictions in a big table.
However, rarely does the analysis on this site touch on what seem to me to be fairly major underlying issues:
How much is the “existential risk” movement being used as a marketing scam—whose primary purpose is to move power and funds from the paranoid to the fear-mongers?
What would the overall effect of widespread fear of the END OF THE WORLD be? Does it make problems more likely—or less likely—if people actually think that it is plausible that there may be NO TOMORROW? Do they fight the end? Fight each other? Get depressed? Rape and pillage? Get drunk? What is actually most likely to happen?
If the END OF THE WORLD turns out to indeed be mostly an unpleasant infectious meme that spreads through exploiting people’s fear and paranoia using a superstimulus, helped along by those who profit financially from the phenomenon—then what would be the best way to disinfect the planet?
Here are a couple of recent posts from Bob Mottram on the topic:
http://streebgreebling.blogspot.com/2009/08/doom-as-psychological-phenomena.html
http://streebgreebling.blogspot.com/2010/08/doomerster-status.html
Scientific American goes so far as to blame “vanity” for the phenomenon:
“Imagining the end of the world is nigh makes us feel special.”
I think Eliezer once pointed out that if cryonics were a scam, it would have much better marketing and be much more popular. A similar principle applies here: if organizations like SIAI and FHI were “marketing scam[s]” taking advantage of the profitable nature of predicting apocalypses, a lot more people would know about them (and there would be less of a surprising concentration of smart people supporting them). An organization interested in exploiting gullible people’s doomsday biases would not look like SIAI or FHI. Hell, even if some group wanted to make big money off of predicting AI doom in particular, they could do it a lot better than SIAI does: people have all these anthropomorphic intuitions about “evil robots” and there are all these scary pop-culture memes like Skynet and the Matrix, and SIAI foolishly goes around dispelling these instead of using them to their lucrative advantage!
(Also, if I may paraphrase Great Leader one more time: this is a literary criticism, not a scientific one. There’s no law that says the world can’t end, so if someone says that it might actually end at some point for reasons x, y, and z, you have to address reasons x, y, and z; pointing out stylistic/thematic but non-technical similarities to previous failed predictions is not a valid counterargument.)
Presumably that was a joke. It is an illogical argument with holes big enough to drive a truck through.
Hell, even if some group wanted to make big money off of predicting AI doom in particular, they could do it a lot better than SIAI does [...]

People have tried much the same plan before, you know. Hugo de Garis was using much the same fear-mongering marketing strategy to draw attention to himself before the Singularity Institute came along.
Hugo de Garis predicts a future war between AI supporters and AI opponents that will cause billions of deaths. That is a highly inflammatory prediction, because it fits neatly with human instincts about ideological conflicts and science-fiction-style technology.
The prediction that AIs will be dangerously indifferent to our existence unless we take great care to make them otherwise is not an appeal to human intuitions about conflict or important causes. Eliezer could talk about uFAI as if it were approximately like Skynet and draw substantially more (useless) attention, while still advocating for his preferred course of research. That he has not done so is evidence that he is more concerned with representing his beliefs accurately than with attracting media attention.
People have tried that too. In 2004 Kevin Warwick published “March of the Machines”. It was an apocalyptic view of what the future holds for mankind, with the superior machines out-competing the obsolete humans and crushing them like ants.
Obviously some DOOM mongers will want their vision of DOOM to be as convincing and realistic as possible. The more obviously fake a vision of DOOM is, the fewer people believe it—and the poorer the associated marketing. Making DOOM seem as plausible as possible is a fundamental part of the DOOM monger’s trade.
The Skynet niche, the Matrix niche, the 2012 niche, the “earth fries” niche, the “alien invasion” niche, the “asteroid impact” niche, the “nuclear apocalypse” niche, and the “deadly plague” niche are all already being exploited by other DOOM mongers—in their own way. Humans just love a good disaster, you see.
It’s true that the idea that the world might end is a meme with an interesting history and interesting properties. I’m not sure those interesting properties shed much light on whether the meme is true or not.
If you replace DOOM with GOD the memetic analysis seems quite illuminating to me.
Those who argue against GOD frequently mention the memetic analysis—e.g. see The God Delusion and Breaking The Spell—whereas the GOD SQUAD rarely do. It seems pretty obvious that that is because the memetic analysis hinders the propagation of their message.
You see the same thing here. Nobody is interested in discussing the possibility that their brains have been hijacked by the DOOM virus. That may well be because their brains have been hijacked by the DOOM virus—and recognition of that fact might hinder the propagation of the DOOM message.