I wouldn’t be as worried if they took it upon themselves to study AI risk independently, but rather than “not listening to Eliezer”, the actual behavior seems to be “not paying attention to AI risk” as a whole.
Think about it this way. There are a handful of people, like Jürgen Schmidhuber, who share SI’s conception of AGI and its potential. But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of value drift. What reaction do you anticipate?
One neuroscientist thought about it for a while, then said “yes, you’re probably right”. Then he co-authored with me a paper touching upon that topic. :-)
(Okay, probably not a very typical case.)
Awesome reply. Which of your papers around this subject is the one with the co-author? (i.e. not so much ‘citation needed’ as ‘a citation would have really driven the point home there!’)
Edited citations to the original comment.
But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
To rephrase as a positive belief statement: most AI researchers, including Pei Wang, believe that AGIs are safely controllable.
Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of value drift. What reaction do you anticipate?
“Really? Awesome! Let’s get right on that.” (ref. early Eliezer)
Alternatively: “Hmm? Yes, that’s interesting… it doesn’t apply to my current grant/paper, so…”
I didn’t expect that you would anticipate that. What I anticipate is outright ridicule of such ideas outside of science fiction novels. At least for most neuroscientists.
Sure, that too.
But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
Well, that happening doesn’t seem terribly likely. That might be what happens if civilization is daydreaming during the process—but there’s probably going to be a “throttle”—and it will probably be carefully monitored—precisely in order to prevent anything untoward from happening.
Hey Tim, you can create another AI safety nonprofit to make sure things happen that way!
;-)
Seriously, I will donate!