Thanks for the nice post! Here’s why I disagree :)
Technological deployment lag
Normal technologies require (1) people who know how to use the technology, and (2) people who decide to use the technology. If we’re thinking about a “real-deal AGI” that can do pretty much every aspect of a human job but better and cheaper, then (1) isn’t an issue because the AGI can jump into existing human roles. It would be less like “technology deployment” and more like a highly-educated exquisitely-skilled immigrant arriving into a labor market. Such a person would have no trouble getting a job, in any of a million different roles, in weeks not decades. For (2), the same “real-deal AGI” would be able to start companies of its own accord, build factories, market products and services, make money, invest it in starting more companies, etc. etc. So it doesn’t need anyone to “decide to use the technology” or to invest in the technology.
Regulation will slow things down
I think my main disagreement comes from my thinking of AGI development as being “mostly writing and testing code inside R&D departments”, rather than “mostly deploying code to the public and learning from that experience”. I agree that it’s feasible and likely for the latter activity to get slowed down by regulation, but the former seems much harder to regulate for both political reasons and technical reasons.
The political reason is: it's easy to get politicians riled up about the algorithms that Facebook is actually using to influence people, and much harder to get them riled up about whatever algorithms Facebook is tinkering with (but not actually deploying) in some office building somewhere. I think there would only be political will once we start getting "lab escape accidents" with out-of-control AGIs self-replicating around the internet, or whatever, at which point it may well be too late already.
The technical reason is: a lot of this development will involve things like open-source frameworks that make it easy to parallelize software, easier-to-use and faster open-source implementations of new algorithms, academic groups publishing papers, and so on. I don't see any precedent or feasible path for regulating these kinds of activities, even if there were the political will.
Not that we shouldn't develop political and technical methods to regulate that kind of thing (it seems worth trying to figure out), just that it seems extremely hard to do and unlikely to happen.
Overestimating the generality of AI technology
My own inside-view story (see here for example) is that human intelligence is based around a legible learning algorithm, and that researchers in neuroscience and AI are making good progress in working out exactly how that learning algorithm works, especially in the past 5 years. I’m not going to try to sell you on that story here, but fwiw it’s a short-ish timelines story that doesn’t directly rely on the belief that currently-popular deep learning models are very general, or even necessarily on the right track.
Won't we have AGI that is slightly less able to jump into existing human roles before we have AGI that can jump into existing human roles? (Borrowing intuitions from Christiano's Takeoff Speeds.)
Jack, to be specific, we expect to have AI that can jump into specific classes of roles and take over the entire niche. All of it. It will be narrowly superhuman at every role inside the class.
If strategy games, both the board and the real-time-clicking variety, had direct economic value right now, every human playing them would already be superfluous. We can fully solve the entire class. The reason, succinctly, is:
a. Every game state can be modeled on a computer, and the model provides the subsequent state resulting from any move by the AI agent.
b. The game state can be reliably converted to a score that accurately reflects what we care about: victory in the game. The reward is usually delayed, but a finished game state either is a win or it is not, and that mapping is reliable. (A toy sketch of these two conditions follows below.)
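To make (a) and (b) concrete, here is a minimal toy sketch (the tic-tac-toe setup and all the function names are just for illustration, not any particular system): the simulator always hands back the exact successor state, and a finished position maps reliably to a reward.

```python
# Toy illustration of conditions (a) and (b); all names are hypothetical.
import random
from typing import List, Optional

Board = List[str]  # 9 cells, each "X", "O", or " "

def next_state(board: Board, move: int, player: str) -> Board:
    """(a) The simulator can always produce the exact successor state."""
    new_board = board.copy()
    new_board[move] = player
    return new_board

def legal_moves(board: Board) -> List[int]:
    return [i for i, cell in enumerate(board) if cell == " "]

def winner(board: Board) -> Optional[str]:
    """(b) A terminal state maps reliably to the outcome we care about."""
    lines = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
             (0, 3, 6), (1, 4, 7), (2, 5, 8),
             (0, 4, 8), (2, 4, 6)]
    for a, b, c in lines:
        if board[a] != " " and board[a] == board[b] == board[c]:
            return board[a]
    return None

def rollout_value(board: Board, player: str) -> float:
    """Estimate a position by random self-play; the reward is delayed,
    but exact once the game ends, with no human judgment needed."""
    current = player
    while True:
        w = winner(board)
        if w is not None:
            return 1.0 if w == player else -1.0
        moves = legal_moves(board)
        if not moves:
            return 0.0  # draw
        board = next_state(board, random.choice(moves), current)
        current = "O" if current == "X" else "X"
```

The point is that both pieces are exact: neither the environment model nor the reward has to be estimated from messy real-world observations.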
For real-world tasks, (b) gets harder because there are subtle outcomes that can't be immediately perceived, or that are complex to model. Example: an autonomous car reaches the destination but has damaged its own components by more than the value of the ride.
So it will take longer to solve the class of robotics manipulation problems where we can reliably estimate the score resulting from a manipulation, and where we can model reasonably accurately both the full environment and the machine in it.
Most industrial and labor tasks on the planet are in this class. But the whole class can be solved relatively quickly: once you have a general solver for part of it, the rest will fall.
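One way to see why the whole class falls at once: every task in it can sit behind the same tiny interface, and a general solver only ever talks to that interface. This is a hypothetical sketch (the SolvableTask protocol and the deliberately naive greedy_plan solver are made up for illustration, not an existing API):

```python
# Hypothetical sketch: any task exposing "simulate" and "score" can be
# handed to the same generic solver -- a board game, an RTS match, or a
# simulated robot arm.
from typing import List, Protocol, Sequence, TypeVar

S = TypeVar("S")  # task-specific state
A = TypeVar("A")  # task-specific action

class SolvableTask(Protocol[S, A]):
    def actions(self, state: S) -> Sequence[A]: ...

    def simulate(self, state: S, action: A) -> S:
        """Condition (a): an accurate model of state transitions."""
        ...

    def score(self, state: S) -> float:
        """Condition (b): a reliable measure of what we actually want."""
        ...

def greedy_plan(task: SolvableTask[S, A], state: S, horizon: int) -> List[A]:
    """A deliberately simple stand-in for a general solver: pick the
    action whose simulated successor scores best, then repeat."""
    plan: List[A] = []
    for _ in range(horizon):
        candidates = list(task.actions(state))
        if not candidates:
            break
        best = max(candidates, key=lambda a: task.score(task.simulate(state, a)))
        plan.append(best)
        state = task.simulate(state, best)
    return plan
```

Swap a real planner or learning algorithm in for greedy_plan and nothing else changes; that's the sense in which a general solver for part of the class gets you the rest.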
And then the next class of tasks is the one where a human being is involved. Humans are complex, and we can't model them in a simulator the way we can model rigid bodies and other physics. I can't predict when this class will be solved.
Sure, but that would make the OP's point weaker, not stronger, right?