Eliezer: So really, the whole hard takeoff analysis of “flatline or FOOM” just ends up saying, “the AI will not hit the human timescale keyhole.” From our perspective, an AI will either be so slow as to be bottlenecked, or so fast as to be FOOM.
But the AI is tied to the human timescale at the start. All of the work of improving the AI will be done by humans, possibly for many years, until it reaches very high intelligence. And even after that, it will still be tied to the human economy for a time, relying on humans to build parts for it, and so on. Remember that I'm only questioning the trajectory for the first year or decade.
(BTW, the term “trajectory” implies that only the state of the entity at the top of the heap matters. One of the human race’s backup plans should be to look for a niche in the rest of the heap. But I’ve already said my piece on that in earlier comments.)
Thomas: Even if it is wrong—I think it is correct—it is the most important thing to consider.
I think most of us agree it's possible. I'm only arguing that other possibilities should also be considered. It would be unwise to adopt a strategy that has a 1% chance of making the 90%-chance situation A survivable, if that strategy would make the otherwise-survivable 10%-chance situation B deadly.
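To make the arithmetic explicit (assuming situation A is otherwise fatal and situation B is survivable only without the strategy): with the strategy, the chance of survival is roughly 0.9 × 0.01 = 0.009; without it, it is 0.10. Under those assumptions, adopting the strategy would cut our survival odds by an order of magnitude.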