If we want the support of the AGI community, it seems we’ll have to improve our communication.
Yes, this does seem to be an issue. When people in academia write something like “The ‘friendly AI’ approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it.”, the communication must be at an all-time low.
Well, of course. Imagine Eliezer had founded SI to deal with physical singularities as a result of high-energy physics experiments. Would anything that he has written convince the physics community to listen to him? No, because he simply hasn’t written enough about physics to either convince them that he knows what he is talking about or to make his claims concrete enough to be criticized in the first place.
Yet he has been more specific when it comes to physics than to AI. So why would the AGI community listen to him?
I wouldn’t be as worried if they took it upon themselves to study AI risk independently, but rather than “not listen to Eliezer”, the actual event seems to be “not pay attention to AI risks” as a whole.
Think about it this way. There are a handful of people like Jürgen Schmidhuber who share SI’s conception of AGI and its potential. But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of value drift. What reaction do you anticipate?
One neuroscientist thought about it for a while, then said “yes, you’re probably right”. Then he co-authored with me a paper touching upon that topic. :-)
(Okay, probably not a very typical case.)
Awesome reply. Which of your papers around this subject is the one with the co-author? (i.e. not so much ‘citation needed’ as ‘citation would have really driven home the point there!’)
Edited the citations into the original comment.
But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
To rephrase into a positive belief statement: most AI researchers, including Pei Wang, believe that AGIs are safely controllable.
Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of value drift. What reaction do you anticipate?
“Really? Awesome! Let’s get right on that.” (ref. early Eliezer)
Alternatively: “Hmm? Yes, that’s interesting… it doesn’t apply to my current grant/paper, so…”
I didn’t expect that you would anticipate that. What I anticipate is outright ridicule of such ideas outside of science fiction novels. At least for most neuroscientists.
Sure, that too.
But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
Well, that happening doesn’t seem terribly likely. That might be what happens if civilization is daydreaming during the process, but there is probably going to be a “throttle”, and it will probably be carefully monitored, precisely in order to prevent anything untoward from happening.
Hey Tim, you can create another AI safety nonprofit to make sure things happen that way!
;-)
Seriously, I will donate!
Imagine Eliezer had founded SI to deal with physical singularities as a result of high-energy physics experiments.
Poor analogy. Physicists considered this possibility carefully and came up with a superfluity of totally airtight reasons to dismiss the concern.