I think it might be useful to have stories like these, and it’s well written; however:
Plausible A.I. Takeoff Scenario Short Story
,
I am running on a regular laptop computer, in a residential area in Wellington, New Zealand.
These two things are in contradiction. It’s not a plausible scenario if the AGI begins on a laptop. It’s far more likely to begin on the best computer in the world owned by OpenAI or something. Absent a disclaimer, this would be a reason for me not to share this.
Also, typo:
If AGI entails a gradual accumulation of lots of little ideas and best practices, then the story is doubly implausible in that (1) the best AGI would probably be at a big institution (as you mention), and (2) the world would already be flooded with slightly-sub-AGIs that have picked low-hanging fruit like Mechanical Turk. (And there wouldn’t be a sharp line between “slightly-sub-AGI” and “AGI” anyway.)
But I don’t think we should rule out the scenario where AGI entails a small number of new insights, or one insight which is “the last piece of the puzzle”, and where a bright and lucky grad student in Wellington could be the one who puts it all together, and where a laptop is sufficient to bootstrap onto better hardware as discussed in the story. In fact I see this “new key insight” story as fairly plausible, based on my belief that human intelligence doesn’t entail that many interacting pieces (further discussion here), and some (vague) thinking about what the pieces are and how well the system would work when one of those pieces is removed.
I don’t make a strong claim that it will definitely be like the “new key insight” story and not the “gradual accumulation of best practices” story. I just think neither scenario can be ruled out, or at least that’s my current thinking.
I actually agree that the “last key insight” is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day. Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.
To make up a number, I’d put it at < 5%, conditional on the way it goes down being something I would classify under the final-insight model.
Hmmm. I agree about “independent person”—I don’t think a lot of “independent persons” are working on AGI, or that they (collectively) have a high chance of success (with all due respect to the John Carmacks of the world!).
I guess the question is what category you put grad students, postdocs, researchers, and others in small research groups, especially at universities. They’re not necessarily “paid a ton of money” (I sure wasn’t!), but they do “work on this stuff all day”. If you look at the list of institutions submitting NeurIPS 2019 papers, there’s a very long tail of people at small research groups, who seem to collectively comprise the vast majority of submissions, as far as I can tell. (I haven’t done the numbers. I guess it depends on where we draw the line between “small research groups” and “big”… Also there are a lot of university-industry collaborations, which complicates the calculation.)
(Admittedly, not all papers are equally insightful, and maybe OpenAI & DeepMind’s papers are more insightful than average, but I don’t think that’s a strong enough effect to make them account for “most” AI insights.)
See also: long discussion thread on groundbreaking PhD dissertations through history, ULMFiT, the Wright Brothers, Grigori Perelman, Einstein, etc.
I meant “independent person” as in someone who is not part of the biggest labs.
(Admittedly, not all papers are equally insightful, and maybe OpenAI & DeepMind’s papers are more insightful than average, but I don’t think that’s a strong enough effect to make them account for “most” AI insights.)
Since most researchers are outside of big labs, they’re going to publish more papers. I’m not convinced that means much of anything. I could see usefulness vary by factors of well over 100. Some papers might even have negative utility. I think all of the impressive AIs we’ve seen, without any real exception, have come out of big research labs.
Also, I believe you’re assuming that research will continue to be open. I think it’s more likely that it won’t be, although not 95%.
But ultimately I’m out of my depth on this discussion.
I actually agree that the “last key insight” is somewhat plausible, but I think even if we assume that, it remains quite unlikely that an independent person has this insight rather than the people who are being paid a ton of money to work on this stuff all day.
If that were true, start-ups wouldn’t be a thing, we’d all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright then.
Especially because even in the insight-model, there could still be some amount of details that need to be figured out after the insight, which might only take a couple of weeks for OpenAI but probably longer for a single person.
But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It certainly would be unwise to.
First off, let me say that I could easily be wrong. My belief is both fairly low confidence and not particularly high information.
If that were true, start-ups wouldn’t be a thing, we’d all be using Yahoo Search, and Lockheed Martin would be developing the first commercially successful reusable rocket. Hell, it might even make sense to switch to a planned economy outright then.
I don’t think any of that follows. Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.
But why does it matter? Would screaming at the top of your lungs about your new discovery (or the modern equivalent, publishing a research paper on the internet) be the first thing someone who has just gained the key insight does? It certainly would be unwise to.
It doesn’t matter that much, but it makes it a bit harder—it implies that someone outside of the top research labs not only has the insight first, but has it first and then the labs don’t have it for some amount of time.
Any good idea can be enough for a successful start-up. AGI is extremely narrow compared to the entire space of good ideas.
But we’re not comparing the probability of “a successful start-up will be created” vs. the probability of “an AGI will be created” in the next x years, we’re comparing the probability of “an AGI will be created by a large organization” vs. the probability of “an AGI will be created by a single person on his laptop” given that an AGI will be created.
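(To make the conditioning explicit, in rough notation, where the event names are just my shorthand for the scenarios described above:)

\[
P(\text{AGI built by a large org} \mid \text{AGI is built}) \quad \text{vs.} \quad P(\text{AGI built by a lone person on a laptop} \mid \text{AGI is built}),
\]

not

\[
P(\text{successful start-up within } x \text{ years}) \quad \text{vs.} \quad P(\text{AGI within } x \text{ years}).
\]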
Without the benefit of hindsight, are PageRank and reusable rockets any more obvious than the hypothesized AGI key insight? If someone with no previous experience working in aeronautical engineering—a highly technical field—can out-innovate established organizations like Lockheed Martin, why wouldn’t the same hold true for AGI? If anything, the theoretical foundations of AGI are less well established and the entry barrier is lower by comparison.
Typo corrected, thanks for that.
I agree, it’s more likely for the first AGI to begin on a supercomputer at a well-funded institution. If you like, you can imagine that this AGI is not the first, but simply the first not effectively boxed. Maybe its programmer simply implemented a leaked algorithm that was developed and previously run by a large project, but changed the goal and tweaked the safeties.
In any case, it’s a story, not a prediction, and I’d defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal. I’m just trying to give a sense of what a takeoff could be like when there is a large hardware overhang and no safety—both of which have only a small-ish chance of occurring. That in mind, do you have an alternative suggestion for the title?
In any case, it’s a story, not a prediction, and I’d defend it as plausible in that context. Any story has a thousand assumptions and events that, in sequence, reduce the probability to infinitesimal.
Yeah, I don’t actually disagree. It’s just that, if someone asks “how could an AI actually be dangerous? It’s just on a computer” and I respond by “here look at this cool story someone wrote which answers that question”, they might go “Aha, you think it will be developed on a laptop. This is clearly nonsense, therefore I now dismiss your case entirely”. I think you wanna bend over backwards to not make misleading statements if you argue for the dangers-from-ai-is-a-real-thing side.
You’re of course correct that any scenario with this level of detail is necessarily extremely unlikely, but I think that will be more obvious for other details like how exactly the AI reasons than it is for the above. I don’t see anyone going “aha, the AI reasoned that X→Y→Z which is clearly implausible because it’s specific, therefore I won’t take this seriously”.
If I had written this, I would add a disclaimer rather than change the title. The disclaimer could also explain that “paperclips” is a stand-in for any utility function that maximizes for just a particular physical thing.
That’s a good point, I’ll write up a brief explanation/disclaimer and put it in as a footnote.
There are some additional it’s/its mistakes in your text, e.g. here:
Thanks!