Hmm, so it is even more troubling when everything initially seems fine but eventually does not end well.
To me that gives one more reason why we should start experimenting with autonomous, unpredictable intelligent entities as soon as possible, to see whether arrangements other than master-slave are possible.
Responding to your last sentence: one thing I see as a cornerstone of the biomimetic AI architectures I propose is the non-fungibility of digital minds. Because such minds would be hardware-bound, humans could retain an array of fail-safes to actually shut them down (in addition to other very important benefits, such as reduced copyability and limited recursive self-improvement).
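To make "hardware-bound" a bit more concrete, here is a minimal Python sketch of one possible mechanism: sealing model weights to a per-device secret so they only decrypt on that hardware. Everything here is hypothetical, not from the proposal itself: `read_device_secret()` stands in for a TPM or secure enclave, and the keystream cipher is a toy for illustration, not real cryptography.

```python
import hashlib
import hmac

def read_device_secret() -> bytes:
    # Hypothetical stand-in for a hardware root of trust (e.g. a TPM).
    # In a real system this secret would never leave the chip.
    return b"unique-per-chip-secret"

def derive_key(device_secret: bytes, mind_id: str) -> bytes:
    # Bind the key to both the device and the specific mind instance,
    # so the weights are useless if copied to other hardware.
    return hmac.new(device_secret, mind_id.encode(), hashlib.sha256).digest()

def xor_stream(key: bytes, data: bytes) -> bytes:
    # Toy keystream cipher (its own inverse); illustration only.
    stream = bytearray()
    counter = 0
    while len(stream) < len(data):
        stream.extend(hashlib.sha256(key + counter.to_bytes(8, "big")).digest())
        counter += 1
    return bytes(b ^ s for b, s in zip(data, stream))

weights = b"...model parameters..."
key = derive_key(read_device_secret(), "mind-instance-42")
sealed = xor_stream(key, weights)            # only the sealed form is stored
assert xor_stream(key, sealed) == weights    # decrypts on this device...
other = derive_key(b"different-chip-secret", "mind-instance-42")
assert xor_stream(other, sealed) != weights  # ...but not on a copy elsewhere
```

The same binding is also what gives humans a physical off-switch: power down or destroy the one device holding the secret, and that particular mind cannot simply be restarted from a backup somewhere else.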
Of course, this would not prevent covert influence, power accumulation, and the like, but one can argue such things are already quite prevalent in human society. So if the human-AI equilibrium stabilizes with AIs being extremely influential yet "overthrowable" when they obviously overstep, then I think this could be acceptable.