I actually agree that the “last key insight” is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day. Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.
To make up a number, I’d put it at < 5%, conditional on the way it goes down being what I would classify under the final-insight model.
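Spelled out as a conditional probability (the event wording is my paraphrase of the comments above, not anything stated in the thread), that estimate is roughly:

$$P(\text{an independent person has the last key insight} \mid \text{AGI arrives via a final key insight}) < 0.05$$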
Hmmm. I agree about “independent person”—I don’t think a lot of “independent persons” are working on AGI, or that they (collectively) have a high chance of success (with all due respect to the John Carmacks of the world!).
I guess the question is what category you put grad students, postdocs, researchers, and others in small research groups, especially at universities. They’re not necessarily “paid a ton of money” (I sure wasn’t!), but they do “work on this stuff all day”. If you look at the list of institutions submitting NeurIPS 2019 papers, there’s a very long tail of people at small research groups, who seem to collectively comprise the vast majority of submissions, as far as I can tell. (I haven’t done the numbers. I guess it depends on where we draw the line between “small research groups” and “big”… Also there are a lot of university-industry collaborations, which complicates the calculation.)
(Admittedly, not all papers are equally insightful, and maybe OpenAI & DeepMind’s papers are more insightful than average, but I don’t think that’s a strong enough effect to make them account for “most” AI insights.)
See also: long discussion thread on groundbreaking PhD dissertations through history, ULMFiT, the Wright Brothers, Grigori Perelman, Einstein, etc.
I meant “independent person” as in, someone not part of the biggest labs.
(Admittedly, not all papers are equally insightful, and maybe OpenAI & DeepMind’s papers are more insightful than average, but I don’t think that’s a strong enough effect to make them account for “most” AI insights.)
Since most researchers are outside of big labs, they’re going to publish more papers. I’m not convinced that means much of anything. I could see usefulness varying by a factor of well over 100. Some papers might even have negative utility. I think all of the impressive AIs we’ve seen, without any real exception, have come out of big research labs.
Also, I believe you’re assuming that research will continue to be open. I think it’s more likely that it won’t be, although I wouldn’t put that at 95%.
But ultimately I’m out of my depth on this discussion.
I actually agree that the “last key insight” is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day.
If that were true, start-ups wouldn’t be a thing, we’d all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright, then.
Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.
But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It certainly would be unwise to.
First off, let me say that I could easily be wrong. My belief is both fairly low confidence and not particularly high information.
If that were true, start-ups wouldn’t be a thing, we’d all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright, then.
I don’t think any of that follows. Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.
But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It certainly would be unwise to.
It doesn’t matter that much, but it makes it a bit harder—it implies that someone outside of the top research labs not only has the insight first, but that the labs then go without it for some amount of time.
Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.
But we’re not comparing the probability of “a successful start-up will be created” vs. the probability of “an AGI will be created” in the next x years, we’re comparing the probability of “an AGI will be created by a large organization” vs. the probability of “an AGI will be created by a single person on his laptop” given that an AGI will be created.
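Put in explicit conditional-probability terms (my notation, just restating the comparison above), the question is

$$P(\text{built by a large organization} \mid \text{AGI is built}) \quad \text{vs.} \quad P(\text{built by a single person on a laptop} \mid \text{AGI is built}),$$

not the unconditional chance that any particular start-up or research project succeeds.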
Without the benefit of hindsight, are PageRank and reusable rockets any more obvious than the hypothesized AGI key insight? If someone who had no previous experience working in aeronautical engineering—a highly technical field—can out-innovate established organizations like Lockheed Martin, why wouldn’t the same hold true for AGI? If anything, the theoretical foundations of AGI are less well established and the entry barrier is lower by comparison.