I don’t understand this question. The best time for the emergence of a great optimizer would be shortly after you were born (earlier if your existence were assured somehow).
If an AI is a friendly optimizer, then you want it as soon as possible. If its friendliness is a matter of chance, then you don’t want it at all (the quandary we all face). Asking “when” seems far less relevant than asking “what”. “What” I want is a friendly AI. “When” I get it matters little, so long as it arrives long enough before my death to grant me “immortality” while maximally (or at least sufficiently) fulfilling my values.
Sometime after the Singularity. We already have AI that surpasses humans in several areas of human endeavor, such as chess and trivia. What do you define as “human level”? The AIs we have now are like extremely autistic savants: exceptional in a few areas where most people are deficient, but deficient to the point of not even trying in thousands of others. Eventually, there will (in theory) be AIs that are exceptional in most aspects of human endeavor yet remain far inferior in others, and perhaps shortly after that point is reached, AIs will surpass humans in everything.
Trying to predict “when” seems like trying to predict which snowflake will trigger an avalanche. I really don’t think it can be done without a time machine or an already operational superintelligent AI to do the analysis for us, but the snow seems to be piling up pretty fast.