No, it is definitely not a strictly necessary requirement for near-term survival. To be “strictly necessary for near-term survival”, such future technologies would have to be guaranteed to kill all of humanity, and soon. That’s ridiculous hyperbole.
There are risks ahead, even existential risks, from other non-AI technologies, but not to nearly that extent.
We’re very good at generating existential risks. Given indefinite technological progression at our current pace, we are likely to get ourselves killed.
Your post—and my comment—are explicitly about necessary requirements for near-term survival. If you want to make another post about indefinite-term existential risks, then we can talk about that.