Perhaps I don't understand what you meant, but it sounds to me like wishful thinking. For example, the agent could believe (perhaps correctly) that it is smarter than humans, so it could believe it can do right what humans did wrong. Or it could simply do something other than try to create a descendant.
So, although hypothetically the agent could do X, there is no specific reason to privilege this possibility. There are a thousand other things the agent could do.