A stable outcome is possible where such a self-improving AI is unable to form.
This outcome can occur if the “human-based” AIs occupy all of the ecological space within this solar system. That is, humans might still be alive, but all significant resources would be policed by the AIs. Assuming a self-improving AI, no matter how smart, still needs access to matter and energy to grow, it would never be able to gain a foothold.
A real-life example is Earth’s biosphere: all living things are restricted to a subset of the possible solution space for a similar reason, and have been for several billion years.