It would also likely not have any intention to increase its intelligence indefinitely anyway. I just don't see that AGI implies self-improvement beyond learning what it can within the scope of its resources. You'd have to deliberately implement such an intention.
Well, some older posts had a guy praising “goal system zero”, which meant a plan to program an AI with the minimum goals it needs to function as a ‘rational’ optimization process and no more. I’ll quote his list directly:
(1) Increasing the security and the robustness of the goal-implementing process. This will probably entail the creation of machines which leave Earth at a large fraction of the speed of light in all directions and the creation of the ability to perform vast computations.
(2) Refining the model of reality available to the goal-implementing process. Physics and cosmology are the two disciplines most essential to our current best model of reality. Let us call this activity “physical research”.
(End of list.)
This seems plausible to me as a set of necessary conditions. But it also logically implies the intention to convert all matter the AI doesn't set aside for other purposes (of which, here, it has none) into computronium and research equipment. Unless humans for some reason make incredibly good research equipment, the goal-system-zero AI would thus plan to kill us all. This would also imply some level of emulating humans as an early instrumental goal. Note that sub-goal (1) implies a desire not to let instrumental goals like simulated empathy get in the way of our demise.