What you do have is a valid argument against complete (or almost complete) extinction in the short to medium term. However, not many people believe that complete extinction is the likely outcome, although EY does:
> Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.”
(As per [previous discussions](https://www.lesswrong.com/posts/WLvboc66rBCNHwtRi/ai-27-portents-of-gemini?commentId=Tp6Q5SYsvfMDFxRDu), no one is able to name the “many researchers” other than himself and his associates.)
What you don’t have is an argument against the wider Doom scenarios.