Ben Goertzel also says “If one fully accepts SIAI’s Scary Idea, then one should not work on practical AGI projects...” Here is another recent quote that is relevant:
What I find a continuing source of amazement is that there is a subculture of people half of whom believe that AI will lead to the solving of all mankind’s problems (which we might call Kurzweilian S^) and the other half of which is more or less certain (75% certain) that it will lead to annihilation. Let’s call the latter the SIAI S^.
Yet you SIAI S^ invite these proponents of global suicide by AI, K-type S^, to your conferences and give them standing ovations.
And instead of waging desperate politico-military struggle to stop all this suicidal AI research you cheerlead for it, and focus your risk-mitigation efforts on discussions of how a friendly god-like AI could save us from annihilation.
You are a deeply schizophrenic little culture, which for a sociologist like me is just fascinating.
But as someone deeply concerned about these issues I find the irrationality of the S^ approach to a-life and AI threats deeply troubling. -- James J. Hughes (existential.ieet.org mailing list, 2010-07-11)
Also reminds me of this:
It is impossible for a rational person to both believe in an imminent rise of sea levels and purchase ocean-front property.
It is reported that former Vice President Al Gore just purchased a villa in Montecito, California for $8.875 million. The exact address is not revealed, but Montecito is a relatively narrow strip bordering the Pacific Ocean. So its minimum elevation above sea level is 0 feet, while its overall elevation is variously reported at 50ft and 180ft. At the same time, Mr. Gore prominently sponsors a campaign and award-winning movie that warns that, due to Global Warming, we can expect to see nearby ocean-front locations, such as San Francisco, largely under water. The elevation of San Francisco is variously reported at 52ft up to a high of 925ft.
I’ve highlighted the same idea before, by the way:
Ask yourself, wouldn’t you fly a plane into a tower if that was the only way to disable Skynet? The difference between religion and the risk of uFAI makes it even more dangerous. This crowd is actually highly intelligent, and their incentives are based on more than fairy tales told by goatherders. And if dumb people are already able to commit large-scale atrocities based on such nonsense, what are a bunch of highly intelligent and devoted geeks who see a tangible danger able and willing to do? All the more so as, in this case, the very same people who believe it are the ones who think they must act themselves, because their God doesn’t even exist yet.
The Al Gore hypocrisy claim is misleading. Global warming changes the equilibrium sea level, but it takes many centuries to reach that equilibrium (glaciers can’t melt instantly, etc). So climate change activists like to say that there will be sea level rises of hundreds of feet given certain emissions pathways, but neglect to mention that this won’t happen in the 21st century. So there’s no contradiction between buying oceanfront property only slightly above sea level and claiming that there will be large eventual sea level increases from global warming.
The thing to critique would be the misleading rhetoric that gives the impression (by mentioning that the carbon emissions by such and such a date will be enough to trigger sea level rises, but not mentioning the much longer lag until those rises fully occur) that the sea level rises will happen mostly this century.
Regarding Hughes’ point, even if one thinks that an activity has harmful effects, that doesn’t mean that a campaign to ban it won’t do more harm than good. Such a campaign would essentially make bitter enemies of several of the groups (AI academia and industry) with the greatest potential to reduce risk, and discredit the whole idea of safety measures. Far better to develop better knowledge and academic analysis around the issues, or to mobilize resources towards positive safety measures.
Regarding your quoted comment, it seems crazy. The Unabomber attacked innocent people in a way that did not slow down technology advancement and brought ill repute to his cause. The Luddites accomplished nothing. Some criminal nutcase hurting people in the name of preventing AI risks would just stigmatize his ideas, and bring about impenetrable security for AI development in the future without actually improving the odds of a good outcome (when X can make AGI, others will be able to do so then, or soon after).
“Ticking time bomb cases” are offered to justify legalizing torture, but they essentially never happen: there is always vastly more uncertainty and lower expected benefits. It’s dangerous to use such hypotheticals as a way to justify legalization of abuse in realistic cases. No one can expect an act of violence to “disable Skynet” (if such a thing was known to exist, it would be too late anyway), and if a system could be shown to be quite likely dangerous, one would call the police, regulators, and politicians.
Back in July I wrote this as a response to Hughes’ comment:
Keep your friends close... Maybe they just want to keep the AI crowd as close together as possible. Making enemies wouldn’t be a smart idea either, as the ‘K-type S^’ subgroup would likely retreat from further information disclosure. Making friends with them might be the best idea.
An explanation of the rather calm stance regarding a potential giga-death or living hell event would be to keep a low profile until acquiring more power.
I’m aware of that argument, and of the other things you mentioned, and I don’t think they are reasonable. I’ve written about it before, but deleted my comments as they might be very damaging to the SIAI. I’ll just say that there is no argument against active measures if you seriously believe that certain people or companies pose existential risks. Hughes’ comment just highlights an important observation; that doesn’t mean I support the details.
Regarding Al Gore: what it highlights is how what the SIAI says and does is as misleading as what Al Gore does. It doesn’t mean that it is irrational, but that people draw conclusions like the one Hughes did based on this superficially contradictory behavior.