A stable outcome is possible where such a self-improving AI is unable to form.
This outcome can occur if the “human-based” AIs occupy all ecological space within the solar system. That is, there might still be humans alive, but every significant resource would be policed by the AIs. Since a self-improving AI, no matter how smart, still needs access to matter and energy to grow, it would never be able to gain a foothold.
The real-life example is Earth’s biosphere: all living things are restricted to a subset of the possible solution space for a similar reason, and have been for several billion years.
These people’s objections are not entirely unfounded. It’s true that there is little evidence the brain exploits quantum-mechanical effects (which is not to say it is completely certain it does not). However, if you pencil in real numbers for the hardware requirements of a whole-brain emulation, they are quite absurd. Assumptions differ, but building a computational system with enough nodes to emulate all ~100 trillion synapses could plausibly cost hundreds of billions to over a trillion dollars if you had to use today’s hardware to do it.
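To make the scale concrete, here is a back-of-envelope sketch of that cost estimate. The synapse count comes from the text above; the per-node capacity and per-node price are purely illustrative assumptions chosen to show how the totals land in the trillion-dollar range, not measured figures.

```python
# Back-of-envelope estimate of whole-brain-emulation hardware cost.
# SYNAPSES is from the text; the other two constants are assumptions.

SYNAPSES = 100e12          # ~100 trillion synapses (figure from the text)
SYNAPSES_PER_NODE = 1e6    # assumed synapses one compute node can emulate
COST_PER_NODE = 10_000     # assumed dollars per node of today's hardware

nodes_needed = SYNAPSES / SYNAPSES_PER_NODE
total_cost = nodes_needed * COST_PER_NODE

print(f"Nodes needed: {nodes_needed:,.0f}")   # 100,000,000 nodes
print(f"Total cost:   ${total_cost:,.0f}")    # $1,000,000,000,000
```

Under these assumptions the bill comes to about a trillion dollars; more generous assumptions about per-node capacity or price shift the total, but it stays in the "hundreds of billions and up" range the text describes.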
The point is: you can simplify these people’s argument to “I’m not worried about the imminent existence of AI, because we cannot build the hardware to run one.” The fact that a detail of the argument is wrong doesn’t change the conclusion.