Tyrrell:
My impression is that you’re overstating Robin’s case. The main advantage of his model seems to be that it gives numbers, which is perhaps nice, but it’s not at all clear why those numbers should be correct. They seem to assume a regularity between some rather incomparable things; one can draw parallels between them using the abstractions of economics, but it’s not so clear that those abstractions apply here. Eliezer’s point with the Fermi analogy isn’t “I’m Fermi!” or “you’re Fermi!”, but just this: since powerful ideas tend to cascade and open doors to more powerful ideas, it seems likely that not long before a self-improving AI takes off on the strength of a sufficiently powerful set of ideas, leading AI researchers will still be uncertain, and reasonably so, about whether such a thing will take months, years, or decades. In other words, this accumulation of ideas is likely to explode at some point, but our abstractions (at least the economic ones) are not a good enough fit to the problem to say when or how. And such an explosion of ideas is exactly what would produce the hard takeoff scenario.