How is such a failure of imagination possible?
It’s odd to claim that, contingent on AGI being significantly smarter than us and wanting to kill us, there is no realistic pathway for us to be physically harmed.
Claims of this sort from intelligent, competent people likely reveal that they are implicitly rejecting the premises rather than disputing whether those premises would lead to the conclusion.
The quotes you’re responding to here superficially imply “if AI is smart and malicious, it still can’t kill us”, but it seems much more likely that this is a warped translation of either “AI can’t be smart” or “AI can’t be malicious”.
All good practices. Although isn’t this just “more metal” rather than “less ore”? I imagine one would want to optimize both the inputs and the outputs, even if opportunities for improving the inputs are exhausted more quickly.