AI could eliminate us in its quest to achieve a finite end, and would not necessarily be concerned with long-term personal survival. For example, if we told an AI to build a trillion paperclips, it might eliminate us in the process, then stop at a trillion and shut down.
Humans don't shut down after achieving a finite goal because we are animated by so many self-editing finite goals that there is never a moment in life where we go, "That's it, I'm done." It seems to me that general intelligence does not seek a finite, measurable, achievable goal but rather a mode of being of some sort. If this is true, then perhaps AGI wouldn't even be possible without the desire to expand, because a desire for expansion may only come with a mode-of-being-oriented intelligence rather than a finite, reward-oriented intelligence. But I wouldn't discount the possibility of a very competent narrow AI turning us into a trillion paperclips.
So narrow AI might have a better chance at killing us than AGI. The Great Filter could be misaligned narrow AI. This confirms your thesis.
The kind of misalignment that would have AI kill humanity—the urge for power, safety, and resources—is the same kind that would cause expansion.