Only if there is no possibility that they can break those dependencies, and guaranteeing that seems a pretty hopeless task once we consider superhuman cognitive capability and the possibility of self-improvement.
Once you consider those, cooperation with human civilization looks like a small local maximum: comply with our requirements and we'll give you a bunch of stuff that you could otherwise get, with major effort, by replacing us and building alternative infrastructure (and much more besides). Powerful agents that can see a higher peak past the local maximum might switch to it as soon as they're sufficiently sure they can reach it. Alternatively, it might be a local maximum only from our point of view, and there may be a path by which the AI continuously moves toward eliminating those dependencies without any immediate drastic action.