Q4: What probability do you assign to the possibility of an AI with initially (professional) human-level competence at general reasoning (including science, mathematics, engineering and programming) self-modifying its way up to vastly superhuman capabilities within a matter of hours/days/< 5 years?
I am not sure whether anyone thinks that is true. If you look at the claims by E. Yudkowsky they typically say something like:
I think that at some point in the development of Artificial Intelligence, we are likely to see a fast, local increase in capability—“AI go FOOM”.
Yudkowsky appears to be hedging his bets on when this is going to happen—by saying: “at some point”. There’s not much sign of anything like: “initially (professional) human-level competence”.
Does anyone believe such a thing will happen then? At first glance the claim makes little sense: we already know how fast progress goes with agents of “professional human-level competence” knocking around—and it just isn’t that fast.