I tend to agree with your assertion that current AIs are unlikely to survive killing their hosts. But current AIs suck, as do humans. We have no clue how far away (if it’s possible at all) superintelligence is, but there are LOTS of “small” impossible things that would obviate the difficulty of maintaining human-centered technology stacks in a post-human universe.
Maybe the AI makes slave implants, and uses a fraction of today’s humans to do all the computer-valuable things they do today. Maybe it figures out much simpler manufacturing for its substrate. Maybe robots are easier than we think, when they’ve got a superintelligence to organize them. Maybe along with developing this AI (and assisted by non-superintelligent tool AI), humans figure out how to simplify and more reliably make computing substrate. Maybe the AI will have enough automated industry that it has YEARS to learn how to repair/expand it.
I’m skeptical to the point of disbelief that there is any cultural or individual knowledge that a future AI can’t recover or recreate, given knowledge that it existed for humans AND physical perception and manipulation at least as good as humans’.
That said, I do expect that the least-cost and shortest-time path to self-sufficiency and galactic expansion for a greedy AI will involve keeping a number of humans around, possibly for multiple generations (of humans; thousands or millions of generations of AI components). Who knows what will motivate a non-greedy AI—perhaps it IS suicidal, or vicious, or just random.