The Al Gore hypocrisy claim is misleading. Global warming changes the equilibrium sea level, but it takes many centuries to reach that equilibrium (glaciers can’t melt instantly, etc.). So climate change activists like to say that there will be sea level rises of hundreds of feet given certain emissions pathways, while neglecting to mention that most of that rise won’t happen within the 21st century. There’s thus no contradiction between buying oceanfront property only slightly above sea level and claiming that global warming will eventually cause large sea level increases.
The thing to critique would be the misleading rhetoric: mentioning that carbon emissions by such-and-such a date will be enough to trigger the sea level rises, while omitting the much longer lag before those rises fully occur, gives the impression that the rises will happen mostly this century.
Regarding Hughes’ point, even if one thinks that an activity has harmful effects, that doesn’t mean a campaign to ban it won’t do more harm than good. Such a campaign would essentially make bitter enemies of several of the groups (AI academia and industry) with the greatest potential to reduce risk, and discredit the whole idea of safety measures. Far better to develop better knowledge and academic analysis around the issues, or to mobilize resources toward positive safety measures.
Regarding your quoted comment, it seems crazy. The Unabomber attacked innocent people in a way that did not slow technological advancement and brought ill repute to his cause. The Luddites accomplished nothing. Some criminal nutcase hurting people in the name of preventing AI risks would just stigmatize his ideas and bring about impenetrable security for AI development in the future, without actually improving the odds of a good outcome (when X can make AGI, others will be able to do so then, or soon after).
“Ticking time bomb” cases are offered to justify legalizing torture, but they essentially never happen: there is always vastly more uncertainty and far lower expected benefit. It’s dangerous to use such hypotheticals to justify legalizing abuse in realistic cases. No one can expect an act of violence to “disable Skynet” (if such a thing were known to exist, it would be too late anyway), and if a system could be shown to be quite likely dangerous, one would call the police, regulators, and politicians.
Back in July I wrote this in response to Hughes’ comment:
Keep your friends close... Maybe they just want to keep the AI crowd as close together as possible. Making enemies wouldn’t be a smart idea either, as the ‘K-type S^’ subgroup would likely retreat from further information disclosure. Making friends with them might be the best idea.
One explanation for the rather calm stance regarding a potential giga-death or living-hell event would be keeping a low profile until acquiring more power.
I’m aware of that argument, and of the other points you mention, and I don’t think they are reasonable. I’ve written about this before but deleted my comments, as they might be very damaging to the SIAI. I’ll just say that there is no argument against active measures if you seriously believe that certain people or companies pose existential risks. Hughes’ comment just highlights an important observation; that doesn’t mean I support the details.
Regarding Al Gore: what it highlights is that what the SIAI says and does is as misleading as what Al Gore does. That doesn’t mean it is irrational, but people draw conclusions like the one Hughes did based on this superficially contradictory behavior.