By the time AGI becomes remotely viable, a candidate supporting a ban on AGI will have nearly universal support.
It is already “remotely viable” in the sense that when I thought hard about assigning probabilities to AGI timelines, I had to put a few percent on it happening in the next decade.
Your ideas about the interaction of contemporary political processes and AGI seem wrong to me. You might want to go back to basics and think about how politics, public opinion and the media operate; for example, they had little to say about the hugely important probabilistic revolution in AI over the last 15 years, but spilled loads of ink over stem cells.
“You might want to go back to basics and think about how politics, public opinion and the media operate; for example, they had little to say about the hugely important probabilistic revolution in AI over the last 15 years, but spilled loads of ink over stem cells.”
And why is that?
Yuck factor for stem cells but not for probabilistic AI.
That’s one possible reason. Another possible reason is that AI is not a threat worth caring about, yet. AI may not induce a gut reaction, but what explains the lack of concern about AI among mainstream scientists?
But stem cell research is much more prominent in that it is producing notable direct applications, or is very close to it. It also isn’t just a yuck factor (although that’s certainly one part): in many different moral systems, stem cell research produces serious moral qualms. AI may very well trigger some similar issues if it becomes more viable.
Probabilistic AI has more apps than stem cells do right now. For example, Google. But the point I am making is that an application of a technology is a logical factor, whereas people actually respond to emotional factors, like whether it breaks taboos that go back to the Stone Age. For example, anything that involves sex, flesh, blood, overtones of bestiality, overtones of harm to children, trading a sacred good for an unsacred one, etc.
The ideal technology for people to want to ban would involve harvesting a foetus that was purchased from a hooker, then hybridizing it with a pig foetus, then injecting the resultant cells into the gonads of little kids. That technology would get nuked by the public.
The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand, has never happened before, involves only nonphysical hazards like information, and has nothing to do with flesh, sex or anything disgusting, or with fire, sharp objects or other natural disasters.
“The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand”
I don’t think The Terminator was hard to understand. The second you get some credible people saying that AI is a threat, the media reaction is going to be excessive, as it always is.
It’s already happened—didn’t you see the media about Stephen Hawking saying AI could be dangerous? And Bill Joy?
The general point I am trying to make is that the general public are not rational in terms of collective epistemology. They don’t respond to complex logical and quantitative analyses. Yes, Joy and Hawking did say that AI is a risk, but there are many risks, including the risk that vaccinations cause autism and the risk that foreign workers will take all our jobs. The public does not understand the difference between these risks.
Thanks; I was mistaken. Would you say, then, that mainstream scientists are similarly irrational? (The main comparison I have in mind throughout this section, by the way, is global warming.)
I would say that poor social epistemology, poor social axiology and mediocre individual rationality are the big culprits that prevent many scientists from taking AI risk seriously.
By “social axiology” I mean that our society is just not consequentialist enough. We don’t solve problems that way, and even the debate about global warming is not really dealing well with the problem of how to quantify risks under uncertainty. We don’t try to improve the world in a systematic, rational way; rather, it is done piecemeal.
There may be an issue here about what we define as AI. For example, I would not see what Google does as AI but rather as harvesting human intelligence. The lines here may be blurry and hard to define.
You make a good point about older taboos.
Could someone explain why this comment got modded down? I don’t see any errors in reasoning or other issues. (Was the content level too low for the desired signal/noise ratio?)
Google uses exactly the techniques from the probabilistic revolution, namely machine learning, which is the relevant fact. Whether you call it AI is not relevant to the point at issue as far as I can see.
Do you have a citation for Google using machine learning at any substantial scale? The most basic of the Google algorithms is PageRank, which isn’t a machine learning algorithm by most definitions of that term.
AdWords uses more core ML techniques.
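To make the PageRank point above concrete, here is a minimal sketch of PageRank as a power iteration. It is an illustrative toy with a made-up three-page link graph, not Google’s actual implementation: the scores fall out of a fixed-point computation on the link structure, with no training data and no learned parameters, which is why PageRank is usually not counted as machine learning.

```python
# Illustrative toy on an assumed tiny link graph; not Google's code,
# just the textbook PageRank recurrence computed by power iteration.
import numpy as np

def pagerank(adj, damping=0.85, tol=1e-10, max_iter=100):
    """adj[i][j] = 1 if page i links to page j; returns a score per page."""
    adj = np.asarray(adj, dtype=float)
    n = adj.shape[0]
    out_degree = adj.sum(axis=1)
    # Row-normalise the link matrix; pages with no outlinks spread rank uniformly.
    transition = np.where(out_degree[:, None] > 0,
                          adj / np.maximum(out_degree, 1)[:, None],
                          1.0 / n)
    rank = np.full(n, 1.0 / n)
    for _ in range(max_iter):
        # Fixed-point iteration: no labelled data, no parameters being fit.
        new_rank = (1 - damping) / n + damping * (transition.T @ rank)
        if np.abs(new_rank - rank).sum() < tol:
            return new_rank
        rank = new_rank
    return rank

# Page 0 links to pages 1 and 2; pages 1 and 2 link only back to page 0.
print(pagerank([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]]))
```

By contrast, systems such as ad-click prediction are fit to labelled examples; that is the “machine learning” sense of the probabilistic revolution being discussed above.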
“The ideal dangerous technology for people to not give a shit about banning would involve a theoretical threat which is hard to understand, has never happened before, involves only nonphysical hazards like information, and has nothing to do with flesh, sex or anything disgusting, or with fire, sharp objects or other natural disasters.”
Yes, but these are precisely the dangers humans should certainly not worry about to begin with.