The AGI would rather write programs to do the grunt work than employ humans, since such programs would be more reliable, more controllable, and so on. It could create these agents by inspecting its own source code and copying or modifying it. If it lacks this capability, it will spend time researching (possibly years) until it has it. On a thousand-year timescale, it isn't clear why an AGI would need us for anything besides, say, specimens for experiments.
Also, as reallyeli says, having a single misaligned agent with absolute control of our future seems terrible no matter what the agent does.
Reply by acylhalide on the EA Forum: