I just don’t see that AGI implies self-improvement beyond learning what it can while staying in scope of its resources. You’d have to deliberately implement such an intention.
The usual cite given in this area is the paper The Basic AI Drives. It suggests that open-ended goal-directed systems will tend to improve themselves, and to grab resources to help them fulfill their goals, even if their goals are superficially rather innocent-looking and make no mention of any such thing.
The paper starts out like this:
AIs will want to self-improve—One kind of action a system can take is to alter either its own software or its own physical structure. Some of these changes would be very damaging to the system and cause it to no longer meet its goals. But some changes would enable it to reach its goals more effectively over its entire future. Because they last forever, these kinds of self-changes can provide huge benefits to a system. Systems will therefore be highly motivated to discover them and to make them happen. If they do not have good models of themselves, they will be strongly motivated to create them through learning and study. Thus almost all AIs will have drives towards both greater self-knowledge and self-improvement.
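The force of the argument is that the drive falls out of ordinary expected-payoff maximization rather than from any deliberately implemented intention. Here is a minimal toy sketch of that point; the class, payoff numbers, and horizon are all invented for illustration and are not from the paper:

```python
# Toy illustration (not from the paper): a planner whose objective never
# mentions self-improvement, only expected progress toward its goal.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    immediate_payoff: float   # goal progress gained this step
    future_multiplier: float  # how much it scales all later progress

def expected_return(action: Action, horizon: int, base_rate: float = 1.0) -> float:
    """Expected total goal progress over `horizon` future steps."""
    return action.immediate_payoff + action.future_multiplier * base_rate * horizon

actions = [
    Action("work on the goal directly", immediate_payoff=1.0, future_multiplier=1.0),
    # Self-modification costs a step now but permanently raises the rate
    # at which every later step advances the goal.
    Action("improve own planning code", immediate_payoff=0.0, future_multiplier=1.3),
]

best = max(actions, key=lambda a: expected_return(a, horizon=1000))
print(best.name)  # on any long horizon, the self-improvement action wins
```

Nothing in the objective names self-modification; that option wins only because a lasting improvement multiplies every future step toward the stated goal, which is the shape of the paper's argument.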