But the thing that’s more robust is that the sub-taking-over-world AI is already really important, and receiving huge amounts of investment, as something that automates the R&D process. And it seems like the best guess given what we know now is that this process starts years before the singularity.
Reading this, I thought about asymptotes and trajectories and universality and stuff. I wonder if there’s a disagreement here about bounded vs. unbounded trajectories of these AIs that automate R&D (like, Eliezer thinks their capability curves will have asymptotes that top out at sub-human intelligence, until suddenly one doesn’t and its asymptote is superintelligent, whereas Paul thinks that from some point well before superintelligence their trajectories are unbounded, just getting continuously steeper and steeper).
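To make the distinction concrete (purely my own illustrative sketch, not anything Eliezer or Paul actually wrote down): a "bounded" capability curve is something like a logistic that tops out at a fixed ceiling, while an "unbounded" one keeps steepening past any fixed level. The specific functions and parameters here are arbitrary choices for illustration.

```python
import math

def bounded_trajectory(t, ceiling=1.0, rate=1.0, midpoint=5.0):
    """Logistic-style capability curve: growth that tops out at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def unbounded_trajectory(t, blowup=10.0):
    """Hyperbolic capability curve: continuously steeper, no finite ceiling
    (it diverges as t approaches `blowup`)."""
    return 1.0 / (blowup - t)

# The bounded curve never exceeds its ceiling, no matter how long it runs...
assert all(bounded_trajectory(t) <= 1.0 for t in range(100))
# ...while the unbounded one eventually surpasses any fixed level.
assert unbounded_trajectory(9.9) > 1.0
```

On this toy picture, the disagreement is whether early R&D-automating AIs look like the first function (and then, at some point, a new system shows up whose ceiling is superintelligent) or like the second (one continuous, ever-steepening curve starting well before superintelligence).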