But most AI researchers, including Pei Wang, do not buy the idea of AGIs that can quickly and vastly self-improve to the point of getting out of control.
To rephrase into a positive belief statement: most AI researchers, including Pei Wang, believe that AGIs are safely controllable.
Telling most people in the AI community about AI risks is similar to telling neuroscientists that their work might lead to the creation of a society of uploads which will copy themselves millions of times and pose a risk due to the possibility of value drift. What reaction do you anticipate?
“Really? Awesome! Let’s get right on that.” (ref. early Eliezer)
Alternatively: “Hmm? Yes, that’s interesting… it doesn’t apply to my current grant / paper, so…”
I didn’t expect that you would anticipate that. What I anticipate is outright ridicule of such ideas outside of science fiction novels. At least from most neuroscientists.
Sure, that too.