The claim that AIs will foom, basically, reduces to the claim that the difficulty of making AGI is front-loaded: that there’s a hump to get over, that we aren’t over it yet, and that once it’s passed things will get much easier. From an outside view, this makes sense; we don’t yet have a working prototype of general intelligence, and the history of invention in general indicates that the first prototype is a major landmark after which the pace of development speeds up dramatically.
But this is a case where the inside and outside views disagree. We all know that AGI is hard, but the people actually working on it get to see the challenges up close. And from that perspective, it's hard to accept that it will suddenly become much easier once we have a prototype: because the challenges seem so daunting, because the possible breakthroughs are hard to visualize, and because, on some level, if AGI suddenly became easy it would trivialize the challenges that researchers are facing now. So the AGI researchers imagine an AI-Manhattan Project, with resources to match the challenges as they see them, rather than an AI-Kitty Hawk, with a few guys in a basement who are lucky enough to stumble on the final necessary insight.
Since a Manhattan Project-style AI effort would have lots of resources to spend on ensuring safety, the safety issues don't seem like a big deal. But if the first AGI were made by some guys in a basement instead, they wouldn't have those resources; and from that perspective, pushing hard for safety measures is important.
Except that in this case, if 'prototype' means genius-human-level AI, it's reasonable to assume that even if the further challenges remain daunting, it will be economical to put far more effort into them, because researchers will be cheap.
If airplanes were as much better at designing airplanes as they are at flying, Kitty Hawk would have been different.
The claim that AIs will foom, basically, reduces to the claim that the difficulty of making AGI is front-loaded
Or that the effective effort put into AI research (e.g. by AIs) is sufficiently back-loaded.
Yes.
the history of invention in general indicates that the first prototype is a major landmark after which the pace of development speeds up dramatically.
This is not actually true. The history of invention in general indicates that the first prototype accomplishes little, and a great deal of subsequent work needs to be done—even in the case of inventions like machine tools and computers that are used for creating subsequent generations of themselves.
Yes, this is right. Prototypes often precede a technology's widespread deployment and impact by decades, until supporting technologies and incremental improvements make it worth its costs.