[Question] Has Eliezer publicly and satisfactorily responded to attempted rebuttals of the analogy to evolution?

I refer to these posts:

https://optimists.ai/2023/11/28/ai-is-easy-to-control/

https://www.lesswrong.com/posts/hvz9qjWyv8cLX9JJR/evolution-provides-no-evidence-for-the-sharp-left-turn

https://www.lesswrong.com/posts/CoZhXrhpQxpy9xw9y/where-i-agree-and-disagree-with-eliezer

My (possibly mistaken) understanding of the argument: SGD optimizes directly for “predicting the next token” and selects for systems with very low loss by modifying every single parameter in the neural network (which essentially defines the network itself), so a “sharp left turn” in the near term seems quite unlikely. The sharp left turn happened in evolution because evolution was too weak an outer optimizer to steer humans’ thinking toward whatever most improved inclusive genetic fitness; it cannot directly tinker with every neural connection in our brains.
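
To make concrete what I mean by “SGD modifies every parameter,” here is a minimal, illustrative sketch of a single next-token-prediction training step (my own toy example in PyTorch, not taken from any of the linked posts; the model and dimensions are placeholders):

```python
import torch
import torch.nn as nn

vocab_size, d_model = 1000, 64

# Stand-in for a real transformer: embed tokens, then map back to vocab logits.
model = nn.Sequential(
    nn.Embedding(vocab_size, d_model),
    nn.Linear(d_model, vocab_size),
)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

tokens = torch.randint(0, vocab_size, (8, 32))  # dummy batch of token ids
logits = model(tokens[:, :-1])                  # predict each next token
loss = nn.functional.cross_entropy(
    logits.reshape(-1, vocab_size),
    tokens[:, 1:].reshape(-1),
)

opt.zero_grad()
loss.backward()  # a gradient is computed for every parameter in the network
opt.step()       # and every parameter is nudged toward lower next-token loss
```

The point of the sketch is just that the outer optimizer’s update reaches every weight directly, whereas evolution only gets to select over genomes and never touches individual neural connections.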

Given SGD’s vastly stronger outer optimization over every parameter, isn’t it possible, if not likely, that any sharp left turn would occur only at a vastly superhuman level, once the inner optimizer becomes vastly stronger than SGD?

The above arguments have persuaded me that we might be able to thread the needle for survival: if humanity can use the not-yet-actively-deceptive outputs of moderately superhuman models (which are still just predicting the next token to the best of their ability) to help solve the potential sharp left turn, and if humanity doesn’t do anything else stupid with other training methods or misuse and manages to solve the other problems. Of course, in an ideal world we wouldn’t be in this situation.

I have read some rebuttals by others on LessWrong but did not find anything that convincingly debunked this idea (maybe I missed something).

Did Eliezer, or anyone else, ever explain why this is wrong (if it is)? I have been searching for the past week but have only found this: https://x.com/ESYudkowsky/status/1726329895121514565, which seemed to shift toward more of a post-training discussion.