In particular, there may be a very real danger that humans will use their intelligence to design more intelligent systems, which in turn will design more intelligent systems, etc., leading to a highly unpredictable singularity with no adequate safety measures in place.
In other words: if AGI is a real threat, then so is human intelligence, since it’s human intelligence that would take the initial steps towards AGI. What’s potentially different about AGI is that it might become dangerous much faster than anything we’re used to.
(Of course there are other, more obvious examples of disastrous things humans might do: nuclear war, etc.)
Maybe humans are not safe AGI. Maybe both the idea of “safety” and the idea of “general intelligence” are ill-defined.
Humans definitely aren’t safe GI (not A, anyway).