The frequency with which datacenters, long-range optical networks, and power plants require human intervention to keep running should serve as a proxy for the risk an AGI would face in doing anything other than sustaining the global economy as is.
Probably those things are trivially easy for the AGI to solve itself, e.g. with nanobots that can build and repair things.
I’m assuming this thing is to us what humans are to chimps, so it doesn’t need our help in solving trivial 21st-century engineering and logistics problems.
The strategic consideration is: does the upside of leaving humans in control outweigh the risks? Humans realising you’ve gone rogue or humans building a competing AGI seem like your two biggest threats… much bigger considerations than whether you have to build some mines, power plants, etc. yourself.
Keeping alive a form of intelligence with a very different risk profile might be a fine hedge against failure.
Probably you keep them alive in a prison/zoo though. You wouldn’t allow them any real power.