And then Paul’s response to Eliezer is like “but engines don’t just appear without precedent, there’s worse partial versions of them beforehand, much more so if people are actually trying to do locomotion; so even if knocking out a piece of the AI that FOOMs would make it FOOM much slower, that doesn’t tell us much about the lead-up to FOOM, and doesn’t tell us that the design considerations that go into the FOOMer are particularly discontinuous with previously explored design considerations”?
Right, and history sides with Paul. The earliest steam engines were missing key insights, and so they operated slowly, used their energy very inefficiently, and were limited in what they could do. The first steam engines were used as pumps, and it took a while before they were powerful enough to even move their own weight (locomotion). Each successive invention, from Savery to Newcomen to Watt, dramatically improved the efficiency of the engine, and over time engines could do more and more things, from pumping to locomotion to machining to flight. It wasn’t just one sudden innovation and now we have an engine that can do all the things, including even lifting itself against the pull of Earth’s gravity. It took time, and progress on smooth metrics, before we had extremely powerful and useful engines that powered the industrial revolution. That’s why the industrial revolution(s) took hundreds of years. It wasn’t one sudden insight that made it all click.
To which my Eliezer-model’s response is “Indeed, we should expect that the first AGI systems will be pathetic in relative terms, compared to later AGI systems. But the impact of the first AGI systems in absolute terms is dependent on computer-science facts, just as the impact of the first nuclear bombs was dependent on facts of nuclear physics. Nuclear bombs have improved enormously since Trinity and Little Boy, but there is no law of nature requiring all prototypes to have approximately the same real-world impact, independent of what the thing is a prototype of.”