I’m a PhD student in artificial intelligence, and co-creator of the SPARC summer program. AMA.
What do you feel are the most pressing unsolved problems in AGI?
Do you believe AGI can “FOOM” (you may have to qualify what you interpret FOOM as)?
How viable is the scenario of someone creating an AGI in their basement, thereby changing the course of history in unpredictable ways?
In AGI? If you mean “what problems in AI do we need to solve before we can get to the human level”, then I would say:
Ability to solve currently intractable statistical inference problems (probably not just by scaling up computational resources, since many of these problems have exponentially large search spaces; see the quick illustration after this answer).
Ways to cope with domain adaptation and model mis-specification.
Robust and modular statistical procedures that can be fruitfully fit together.
Large amounts of data, in formats helpful for learning (potentially including provisions for high-throughput interaction, perhaps with a virtual environment).
To some extent this reflects my own biases, and I don’t mean to say “if we solve these problems then we’ll basically have AI”, but I do think it will either get us much closer or else expose new challenges that are not currently apparent.
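To make the first point above concrete, here is a quick back-of-the-envelope illustration (an editor's sketch with illustrative numbers, not part of the original answer): for a model over n discrete variables with k values each, exact inference has to account for a joint space of k^n configurations, which outgrows any plausible amount of hardware almost immediately.

```latex
% Illustrative numbers only (editor's sketch, not from the original answer):
% size of the joint search space for n variables with k values each.
\[
  |S| \;=\; k^{n},
  \qquad
  2^{100} \;\approx\; 1.3 \times 10^{30}
  \quad \text{(for } n = 100 \text{ binary variables)}.
\]
% Even at 10^12 configurations examined per second, exhaustive enumeration
% would take on the order of 10^10 years, which is why scaling up hardware
% alone is unlikely to close the gap.
```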
I think it is possible that a human-level AI would very quickly acquire a lot of resources / power. I am more skeptical that an AI would become qualitatively more intelligent than a human, but even if it were no more intelligent than a human, the ability to easily copy and transmit itself would already make it powerful enough to be a serious threat (note that it is also quite possible that it would have many more cycles of computation per second than a biological brain).
In general I think this is one of many possible scenarios; e.g., it’s also possible that sub-human AI would already have control of much of the world’s resources and that we would have built systems in place to deal with this fact. So I think it can be useful to imagine such a scenario, but I wouldn’t stake my decisions on the assumption that something like it will occur. I think this report does a decent job of elucidating the role of such narratives (not necessarily AI-related) in making projections about the future.
Not viable.
Do you have a handle on the size of the field? E.g. how many people, counting from PhD students and upwards, are working on AGI in the entire world? More like 100 or more like 10,000 or what’s your estimate?
I don’t personally work on AGI, and I don’t think the majority of “AGI progress” comes from people who label themselves as working on AGI. I think much of the progress comes from improved tools arising from research and usage in machine learning and statistics. There are also, of course, people in these fields who are more concerned with pushing in the direction of human-level capabilities. And progress everywhere is so interwoven that I don’t even know if thinking in terms of “number of AI researchers” is the right framing. That said, I’ll try to answer your question.
I’m worried that I may just be anchoring off of your two numbers, but I think 10^3 is a decent estimate. There are upwards of a thousand people at NIPS and ICML (two of the main machine learning conferences); only a fraction of those are necessarily interested in the “human-level” AI vision, but there are also many people in the field who don’t attend these conferences in any given year. Many people in natural language processing and computer vision may also be interested in these problems, and I recently found out that the program analysis community cares about at least some questions that 40 years ago would have been classified under AI. So the number is hard to estimate, but 10^3 might be the right order of magnitude. I expect to find more communities in the future that I either wasn’t aware of or didn’t think of as being AI-relevant, and that turn out to be working on problems that are important to me.
How did you come up with the course content for SPARC?
We brainstormed things that we know now and wish we had known in high school. During the first year, we just made courses out of those (also borrowing from CFAR workshops) and rolled with that, because we didn’t really know what we were doing and just wanted to get something off the ground.
Over time we’ve asked ourselves what the common thread is in our various courses, in an attempt to develop a more coherent curriculum. Three major themes are statistics, programming, and life skills. What these have in common is that they are some of the key skills that extremely sharp quantitative minds need in order to apply themselves to a qualitative world. Of course, it will always be the case that most of the value of SPARC comes from informal discussions rather than formal lectures, and I think one of the best things about SPARC is the amount of time that we don’t spend teaching.
Could you talk about your graduate work in AI? Also, out of curiosity, did you weight possible contribution towards a positive singularity heavily in choosing your subfield/projects?
(I am trying to figure out whether it would be productive for me to become familiar with AI in mainstream academia and/or apply for PhD programs eventually.)
I work on computationally bounded statistical inference. Most theoretical paradigms don’t have a clean way of handling computational constraints, and I think it’s important to address this since the computational complexity of exact statistical inference scales extremely rapidly with model complexity. I have also recently started working on applications in program analysis, both because I think it provides a good source of computationally challenging problems, and because it seems like a domain that will force us into using models with high complexity.
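As a purely illustrative sketch of where that computational blow-up comes from (this is an editor's toy example with made-up names, not the poster's research code or methods), here is brute-force exact marginal inference in a small discrete model; the exponential enumeration it performs is the cost that motivates work on computationally bounded inference.

```python
# Editor's toy sketch, not the poster's research code: brute-force exact
# marginal inference in a small discrete model, showing how the cost of
# exact inference blows up with the number of variables.
import itertools
import math

def logsumexp(xs):
    """Numerically stable log(sum(exp(x) for x in xs))."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

def brute_force_marginal(log_potential, n_vars, query_var):
    """Return P(x[query_var] = 1) by enumerating all 2**n_vars assignments.

    log_potential maps a full 0/1 assignment (a tuple) to an unnormalized
    log-probability. Runtime is O(2**n_vars): this exponential scan is the
    blow-up that exact inference runs into as models grow.
    """
    all_terms, query_terms = [], []
    for assignment in itertools.product((0, 1), repeat=n_vars):
        lp = log_potential(assignment)
        all_terms.append(lp)
        if assignment[query_var] == 1:
            query_terms.append(lp)
    return math.exp(logsumexp(query_terms) - logsumexp(all_terms))

# Toy chain-structured model: adjacent variables prefer to agree.
COUPLING = 0.5
def chain_log_potential(assignment):
    return sum(COUPLING if a == b else -COUPLING
               for a, b in zip(assignment, assignment[1:]))

if __name__ == "__main__":
    # Fine for 12 variables (4096 assignments); hopeless for 100.
    print(brute_force_marginal(chain_log_potential, n_vars=12, query_var=0))
```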
Singularity considerations were a factor when choosing to work on AI, although I went into the field because AI seems like a robustly game-changing technology across a wide variety of scenarios, whether or not a singularity occurs. I certainly think that software safety is an important issue more broadly, and this partially influences my choice of problems, although I am more guided by the problems that seem technically important (and indeed, I think this is mostly the right strategy even if you care about safety to a fair degree).
Learning more about mainstream AI has greatly shaped my beliefs regarding AGI, so it’s something that I would certainly recommend. Going to grad school shaped my beliefs even further, even though I had already read many AI papers prior to arriving at Stanford.
Is there any uptake of MIRI ideas in the AI community? Of HPMOR?
I wouldn’t presume to know what the field as a whole thinks, as I think views vary a lot from place to place and I’ve only spent serious time at a few universities. However, I can speculate based on the data I do have.
I think a sizable number (25%?) of the AI graduate students I know are aware of LessWrong’s existence. A sizable (though probably smaller) number have read at least a few chapters of HPMOR; for the latter I’m mostly going off of demographics, since not many have actually told me that they read it.
There is very little actual discussion of MIRI or LessWrong. From what I can gather, most people silently disagree with MIRI, and a few probably silently agree. I would guess almost no one knows what MIRI is, although more would have heard of the Singularity Institute (but might confuse it with Singularity University). People do occasionally wonder whether we’re going to end up killing everyone, although not for too long.
To address your comment in the grandchild, I certainly don’t speak for Norvig, but I would guess that “Norvig takes these [MIRI] ideas seriously” is probably false. He does talk at the Singularity Summit, but the tone when I attended his talk sounded more like “Hey, you guys just said a bunch of stuff; based on what people in AI actually do, here are the parts that seem true and here are the parts that seem false.” It’s also important to note that the singularity is much more widespread as a concept than MIRI in particular. “Norvig takes the singularity seriously” seems much more likely to be true to me, though again, I’m far from being in a position to make informed statements about his views.
Thanks. I was basing my comments about Norvig on what he says in the intro to his AI textbook, which does address UFAI risk.
What’s the quote? You may very well have better knowledge of Norvig’s opinions in particular than I do. I’ve only talked to him in person twice briefly, neither time about AGI, and I haven’t read his book.
Russell and Norvig, Artificial Intelligence: A Modern Approach. Third Edition, 2010, pp. 1037–1040. Available here.
I think the key quote here is:
Hm...I personally find it hard to divine much about Norvig’s personal views from this. It seems like a relatively straightforward factual statement about the state of the field (possibly hedging to the extent that I think the arguments in favor of strong AI being possible are relatively conclusive, i.e. >90% in favor of possibility).
When I spoke to Norvig at the 2012 Summit, he seemed to think getting good outcomes from AGI could indeed be pretty hard, but also that AGI was probably a few centuries away. IIRC.
Interesting, thanks.
Like Mark, I’m not sure I was able to parse your question, can you please clarify?
Right, there was a typo. I’ve fixed it now. I’m just wondering if MIRI-like ideas are spreading among AI researchers. We see that Norvig takes these ideas seriously.
And separately, I wonder if HPMOR is a fad in elite AI circles. I have heard that it’s popular in top physics departments.
What does that question mean?
Sorry, typo now fixed. See my response to jsteinhardt below.