At one point in the dialog, Scott raises what I think is a valid objection to the “nine people in the basement” picture of FAI’s development. He points out that this is not how science progresses, and so it is not how he expects this novel development to happen.
If we consider FAI as a mathematical problem that requires a substantial depth of understanding beyond what already exists to get right, any isolated effort looks close to hopeless: mathematical progress is a global effort. I can sort of expect a basement scenario if most of the required math happens to be developed already, so that the remaining challenge is to find the relevant pieces, assemble them in the right way, and read off the answer. But that doesn’t sound very likely.
Alternatively, a “team in the basement” could wait for the right breakthrough in mainstream mathematics and, being prepared, apply it to the problem faster than anyone else. This seems more realistic, but it may require the mainstream to know what to look for, which means playing with existential risk.
I would like to hear more from Eliezer on just how likely he thinks the ‘nine people in the basement’ development scenario is.
My own impression is that a more gradual development of GAI is more likely, but that ‘basement development’ is the only way there is even a remote possibility that the development will not lead to rapid human extinction. That would make the ‘nine people in the basement’ picture either wishful thinking or ‘my best plan of action’, depending on whether or not we are Eliezer.
“Breakthroughs” are not really how synthetic intelligence has progressed so far. Look at speech recognition, for example: so far, it has mostly been a long, gradual slog. Maybe we are doing it wrong and there is an easier way. But speech recognition is not an isolated example, and if there are easier ways, we don’t seem to be very good at finding them.
Of course, a “breakthrough” can be a cumulative impression: at one point you don’t know how to solve the problem, or even how to state it, and ten years later you do.
The idea of a “breakthrough” denotes a sudden leap forward. There have been some of those.
One might cite backpropagation, for example, but big breakthroughs seem rare, and most progress seems attributable to other factors, much as Robin Hanson claims happens in general: “in large systems most innovation value comes from many small innovations”.
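Since backpropagation is doing a lot of work in that sentence, here is a minimal sketch of what it refers to; the toy example below is my own illustration, not anything from the dialog or the comments above. Backpropagation is just the chain rule used to push an error signal backwards through a network’s layers, and even a tiny 2-4-1 numpy network fitting XOR shows the whole mechanism:

```python
# Minimal backpropagation sketch (illustrative only): a one-hidden-layer
# sigmoid network trained by gradient descent on squared error to fit XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# Weights and biases for a 2-input, 4-hidden-unit, 1-output network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(10000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: the chain rule, applied one layer at a time.
    d_out = (out - y) * out * (1 - out)   # error at the output layer
    d_h = (d_out @ W2.T) * h * (1 - h)    # error pushed back to the hidden layer
    # Gradient-descent updates.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

print(out.round(2))  # typically approaches [[0], [1], [1], [0]]
```

The point of citing it as a “breakthrough” is that this single backward application of the chain rule made multi-layer networks trainable at all; the subsequent decades of progress were mostly the many small innovations layered on top.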