It seems to me that Eliezer’s model of AGI is a bit like an engine: if any important part is missing, the entire engine doesn’t move. You can move a broken steam locomotive as fast as you can push it, maybe 1km/h. The moment you insert the missing part, the locomotive accelerates up to 100km/h. Paul is asking “when does the locomotive move at 20km/h?” and Eliezer says “when the locomotive is already at full steam and accelerating toward 100km/h.” There’s no point where the locomotive is moving at 20km/h and not accelerating, because humans can’t push it that fast, and once the engine is working, it’s already accelerating to a much faster speed.
In Paul’s model, there IS such a thing as 95% AGI, and it’s 80% or 20% or 2% as powerful on some metric we can measure, whereas in Eliezer’s model there’s no such thing as 95% AGI. The 95% AGI is like a steam engine that’s missing its pistons, or some critical valve, and so it provides no motive power at all. It can move as fast as humans can push it, but it contributes no power of its own.
And then Paul’s response to Eliezer is something like: “but engines don’t just appear without precedent; there are worse, partial versions of them beforehand, all the more so if people are actually trying to do locomotion. So even if knocking out a piece of the AI that FOOMs would make it FOOM much slower, that doesn’t tell us much about the lead-up to FOOM, and doesn’t tell us that the design considerations that go into the FOOMer are particularly discontinuous with previously explored design considerations”?
Right, and history sides with Paul here. The earliest steam engines were missing key insights, so they operated slowly, used their energy very inefficiently, and were limited in what they could do. The first steam engines were used as pumps, and it took a while before they were powerful enough even to move their own weight (locomotion). Each successive invention, from Savery to Newcomen to Watt, dramatically improved the efficiency of the engine, and over time engines could do more and more things: pumping, then locomotion, then machining, then flight. It wasn’t one sudden innovation after which we had an engine that could do everything, including lifting itself against the pull of Earth’s gravity. It took time, and progress on smooth metrics, before we had the extremely powerful and useful engines that powered the industrial revolution. That’s why the industrial revolution(s) took hundreds of years: it wasn’t one sudden insight that made it all click.
To which my Eliezer-model’s response is: “Indeed, we should expect the first AGI systems to be pathetic in relative terms, compared to later AGI systems. But the impact of the first AGI systems in absolute terms depends on computer-science facts, just as the impact of the first nuclear bombs depended on facts of nuclear physics. Nuclear bombs have improved enormously since Trinity and Little Boy, but there is no law of nature requiring all prototypes to have approximately the same real-world impact, independent of what the thing is a prototype of.”