The scenario where ‘all humans fall over dead in an instant’ already assumes that such an entity is competent enough to have secured its independence from humanity. I’m not saying such a scenario seems likely to me, just that it seems incorrect to argue that an agent with that level of capability would be incapable of independently supporting itself. Also, an entity with that level of strategic planning and competence would likely foresee this problem and not make such an obviously lethal mistake. I can’t say that for sure though, because AIs so far have very inhuman failure modes while being narrowly superhuman in certain ways.
I also don’t think it’s very likely that we will go, over a period of a few days, from a barely-able-to-self-improve AGI to one that is superhumanly powerful, independent of humanity, and capable of killing all of us with no warning. I think @jacob_cannell makes good arguments about why slightly-better-than-current-level tech couldn’t make that kind of leap in just days.
However, I do think that unrestricted RSI over a period of something like a year or two could potentially produce something this powerful, especially if it is working with the support of deceived or unwise humans and is able to produce a substantial quantity of custom compute hardware for itself over that period.