Is there any uptake of MIRI ideas in the AI community? Of HPMOR?
I wouldn’t presume to know what the field as a whole thinks, as I think views vary a lot from place to place and I’ve only spent serious time at a few universities. However, I can speculate based on the data I do have.
I think a sizable number (25%?) of the AI graduate students I know are aware of LessWrong’s existence. A sizable (though probably smaller) number have also read at least a few chapters of HPMOR; for the latter I’m mostly going off of demographics, since not many have told me directly that they read it.
There is very little actual discussion of MIRI or LessWrong. From what I gather, most people silently disagree with MIRI, and a few probably silently agree. I would guess almost no one knows what MIRI is, although more would have heard of the Singularity Institute (but might confuse it with Singularity University). People do occasionally wonder whether we’re going to end up killing everyone, although not for too long.
To address your comment in the grandchild, I certainly don’t speak for Norvig, but I would guess that “Norvig takes these [MIRI] ideas seriously” is probably false. He does talk at the Singularity Summit, but when I attended his talk the tone was more like “Hey, you guys just said a bunch of stuff; based on what people in AI actually do, here are the parts that seem true and here are the parts that seem false.” It’s also important to note that the singularity is much more widespread as a concept than MIRI is as an organization. “Norvig takes the singularity seriously” seems much more likely to be true to me, though again, I’m far from being in a position to make informed statements about his views.
Thanks. I was basing my comments about Norvig on what he says in his AI textbook, which does address UFAI risk.
What’s the quote? You may very well have better knowledge of Norvig’s opinions in particular than I do. I’ve only talked to him in person twice briefly, neither time about AGI, and I haven’t read his book.
Russell and Norvig, Artificial Intelligence: A Modern Approach, 3rd edition, 2010, pp. 1037–1040.
I think the key quote here is:
Hm... I personally find it hard to divine much about Norvig’s personal views from this. It reads as a relatively straightforward factual statement about the state of the field (possibly somewhat hedged, in that I’d say the arguments in favor of strong AI being possible are relatively conclusive, i.e. >90% in favor of possibility).
When I spoke to Norvig at the 2012 Summit, he seemed to think getting good outcomes from AGI could indeed be pretty hard, but also that AGI was probably a few centuries away. IIRC.
Interesting, thanks.
Like Mark, I’m not sure I was able to parse your question; can you please clarify?
Right, there was a typo; I’ve fixed it now. I’m just wondering if MIRI-like ideas are spreading among AI researchers. We see that Norvig takes these ideas seriously.
And separately, I wonder if HPMOR is a fad in elite AI circles. I have heard that it’s popular in top physics departments.
What does that question mean?
Sorry, typo now fixed. See my response to jsteinhardt below.