Deep Blue is far, far from being AGI, and is not a conceivable threat to the future of humanity, but its success suggests that implementation of combat strategy within a domain of imaginable possibilities is a far easier problem than AGI.
In combat, speed, both the speed of getting a projectile or an attacking column to its destination and the speed of sizing up a situation so that strategies can be determined, may well be the most important advantage of all, and speed is the most trivial thing for AI to deliver.
In general, it is far easier to destroy than to create.
So I wouldn’t dismiss an A-(not-so)G-I as a threat just because it is poor at music composition, or at true deep empathy(!), or even at something potentially useful like biology or chemistry. It could be quite specialized, achieving only a tiny fraction of the totality of AGI, and still be quite a competent threat, capable of causing a singularity that is (merely) destructive.