I think the simple argument that "building minds vastly smarter than our own seems dangerous" is in fact pretty compelling, and relatively easy to recognize in advance, as e.g. Turing and many others did. Personally, I know of no technical facts about current ML systems which update me more, in either direction, about our likelihood of survival than this simple argument does.
And I see little reason why they should: technical details of current AI systems strike me as about as relevant to predicting whether future, vastly more intelligent systems will care about us as technical details of neuronal firing in beetles are to predicting whether a given modern government will care about us. Certainly modern governments wouldn't exist if neurons hadn't evolved, and I expect one could probably gather some information relevant to predicting them by studying beetle neurons; maybe even a lot, in principle. It just seems a rather inefficient approach, given how distant the object of study is from the relevant question.
I think the word "technical" is a red herring here. If someone tells me a flood is coming, I don't much care how much they know about hydrodynamics, even if in principle this knowledge might allow me to model the threat with more confidence. Rather, I care about things like e.g. how sure they are about the direction from which the flood is coming, about the topography of our surroundings, etc. Personally, I expect I'd be much more inclined to make large/confident updates on the basis of information at levels of abstraction like these than at levels like hydrodynamics or particle physics, however much more "technical," or related-in-principle in some abstract reductionist sense, the latter may be.
I do think there are also many arguments beyond this simple one which clearly justify additional (and more confident) concern. But I try to assess such arguments based on how compelling they are, where "technical precision" is one factor, but hardly the only one; another, for example, is whether the argument even involves the relevant level of abstraction, or bears on the question at hand.