I was dismayed that Pei has such a poor opinion of the Singularity Institute’s arguments, and that he thinks we are not making a constructive contribution. If we want the support of the AGI community, it seems we’ll have to improve our communication.
It might be more worthwhile to try to persuade graduate students and undergraduates who might be considering careers in AI research, since the personal cost of deciding that AI research is dangerous is lower for them, and so there is less motivated cognition.
“It is difficult to get a man to understand something, when his salary depends upon his not understanding it”—Upton Sinclair
Good point!
Correct me if I’m wrong, but isn’t it the case that you wish to decelerate AI research? In that case, you are in fact making a destructive contribution—from the point of view of someone like Wang, who is interested in AI research. I see nothing odd about that.
To decelerate AI capability research and accelerate AI goal management research. A shift of emphasis, not a decrease; if anything, an increase would be in order.
It sounds as though you mean decelerating the bits that he is interested in and accelerating the bits that the SI is interested in. Rather as though the SI is after a bigger slice of the pie.
If you slow down capability research, then someone else is likely to become capable before you—in which case, your “goal management research” may not be so useful. How confident are you that this is a good idea?
Yes, this does seem to be an issue. When people in academia write something like “The ‘friendly AI’ approach advocated by Eliezer Yudkowsky has several serious conceptual and theoretical problems, and is not accepted by most AGI researchers. The AGI community has ignored it, not because it is indisputable, but because people have not bothered to criticize it,” the communication must be at an all-time low.
Well, of course. Imagine Eliezer had founded SI to deal with physical singularities resulting from high-energy physics experiments. Would anything that he has written convince the physics community to listen to him? No, because he simply hasn’t written enough about physics either to convince them that he knows what he is talking about or to make his claims concrete enough to be criticized in the first place.
Yet he has been more specific when it comes to physics than AI. So why would the AGI community listen to him?
I wouldn’t be as worried if they took it upon themselves to study AI risk independently, but rather than “not listen to Eliezer”, the actual event seems to be “not pay attention to AI risks” as a whole.
Think about it this way. There are a handful of people like Jürgen Schmidhuber who share SI’s conception of AGI and its potential. But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads that will copy themselves millions of times and pose a risk due to the possibility of value drift. What reaction do you anticipate?
One neuroscientist thought about it for a while, then said “yes, you’re probably right”. Then he co-authored with me a paper touching upon that topic. :-)
(Okay, probably not a very typical case.)
Awesome reply. Which of your papers around this subject is the one with the co-author? (I.e., not so much ‘citation needed’ as ‘citation would have really driven home the point there!’)
Edited citations to the original comment.
To rephrase into a positive belief statement: most AI researchers, including Pei Wang, believe that AGIs are safely controllable.
“Really? Awesome! Let’s get right on that.” (ref. early Eliezer)
Alternatively: “Hmm? Yes, that’s interesting… it doesn’t apply to my current grant/paper, so…”
I didn’t expect that you would anticipate that. What I anticipate is outright ridicule of such ideas outside of science fiction novels. At least for most neuroscientists.
Sure, that too.
Well, that happening doesn’t seem terribly likely. That might be what happens if civilization is daydreaming during the process, but there will probably be a “throttle”, and it will probably be carefully monitored, precisely in order to prevent anything untoward from happening.
Hey Tim, you can create another AI safety nonprofit to make sure things happen that way!
;-)
Seriously, I will donate!
Poor analogy. Physicists considered this possibility carefully and came up with a superfluity of totally airtight reasons to dismiss the concern.
I think you must first consider the simpler possibility that SIAI actually has a very bad argument, and isn’t making any positive contribution to saving mankind from anything. When you have very good reasons to think that isn’t so (high IQ test scores don’t suffice), reasons that are very well verified given all the biases, then you can consider the possibility that it is miscommunication.