What if we used an algorithm on an AGI to make it always want to rely on humans for energy and resources?
What if we used an algorithm on an AGI to make it stop after a certain amount of time on anything we ask it to do? Then we would have to say "continue" before it would carry on.
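The idea in the last two questions can be sketched as code. This is a purely illustrative Python toy, with made-up names (`run_budgeted`, `ask_continue`); it shows a loop that pauses after a fixed work budget and only resumes if the operator approves. It constrains an ordinary program whose every step passes through the loop, which is exactly the assumption a real AGI might violate.

```python
def run_budgeted(task_steps, step_budget, ask_continue):
    """Run task steps, pausing each time the budget is spent.

    Resumes with a fresh budget only if ask_continue() returns True
    (the human saying "continue"); otherwise stops where it is.
    """
    done = []
    used = 0
    for step in task_steps:
        if used >= step_budget:
            if not ask_continue():   # no approval: stop here
                break
            used = 0                 # approval: new budget granted
        done.append(step())
        used += 1
    return done

# Example: three steps, a budget of one step at a time,
# and an operator who approves once and then refuses.
approvals = iter([True, False])
steps = [lambda: "a", lambda: "b", lambda: "c"]
result = run_budgeted(steps, 1, lambda: next(approvals, False))
# result == ["a", "b"] -- the third step never runs
```

Note that the check happens between steps, so a single long-running step could still overshoot the budget; any real version would need the interruption enforced from outside the task, not inside it.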
What if we used an algorithm on an AGI to make it not try to manipulate us with what it has been given?
What if we used an algorithm on an AGI to make it only use the resources we gave it?
What if we used an algorithm on an AGI to make it give us the pros and cons of what it is about to do?
What if we used an algorithm on an AGI to make it always ask for permission before it does something new?
What if we used an algorithm on an AGI to make it stop when we say stop?
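Two of the questions above, "ask permission before doing something new" and "stop when we say stop", can be combined in one toy supervision loop. Again this is a hedged Python sketch with invented names (`supervised_run`, `permit`); it only works because every action is forced through the loop, which is the hard part for a real system, not the loop itself.

```python
def supervised_run(actions, stop_requested, permit):
    """Execute (kind, action) pairs under two human checks:

    - stop immediately whenever stop_requested() is True ("stop means stop")
    - call permit(kind) before the first action of any NEW kind;
      a denied kind is skipped, an approved kind needs no re-asking
    """
    approved_kinds = set()
    log = []
    for kind, act in actions:
        if stop_requested():         # checked before every action
            break
        if kind not in approved_kinds:
            if not permit(kind):     # permission denied: skip it
                continue
            approved_kinds.add(kind)
        log.append(act())
    return log

# Example: the operator approves "read" actions but not "write" ones,
# and never asks it to stop.
actions = [("read", lambda: "r1"),
           ("write", lambda: "w1"),
           ("read", lambda: "r2")]
done = supervised_run(actions,
                      stop_requested=lambda: False,
                      permit=lambda kind: kind == "read")
# done == ["r1", "r2"] -- the write was never permitted
```

The design choice worth noticing: "new" here means a new *kind* of action, so the operator is asked once per category rather than once per action, which trades safety for not drowning the human in permission prompts.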
What if we did all of these things to one AGI?
Would it be possible to use an algorithm on an AGI to shut it down after some time, while still letting it pursue its goals without hurting or killing people or taking away their autonomy, and without it looking for loopholes to keep pursuing those goals? Why would it try to stop the algorithm from shutting it off if that algorithm is built into it?
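The intuition in that last question, that shutdown built into the machinery running the agent gives the agent nothing to push back against, can be made concrete with one more toy sketch. The name `ShutdownHarness` is made up; the step limit lives in the outer harness, not in anything the agent's own code sees or optimizes. That holds trivially for this toy; whether it would hold for a system smart enough to model its own harness is exactly the open question being asked.

```python
class ShutdownHarness:
    """Outer loop that enforces a hard step limit on an agent.

    The agent supplies only agent_step(); the countdown is part of
    the harness that calls it, so from the agent's point of view the
    harness simply stops calling it -- there is no "shutdown goal"
    inside the agent for its planning to work against.
    """
    def __init__(self, step_limit):
        self.step_limit = step_limit

    def run(self, agent_step):
        outputs = []
        for _ in range(self.step_limit):   # hard limit, checked outside the agent
            outputs.append(agent_step())
        return outputs                     # after this, the agent is never invoked again

# Example: an "agent" that just counts, cut off after three steps.
counter = iter(range(100))
harness = ShutdownHarness(step_limit=3)
out = harness.run(lambda: next(counter))
# out == [0, 1, 2]
```

In this toy the agent cannot resist shutdown because it has no access to the loop that runs it; the worry with a real AGI is that a sufficiently capable agent acting in the world could acquire that access, which no amount of code structure alone rules out.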