In fact, I think it’s less dangerous because we at minimum gain more time.
As I argued above, we have less time, not more, if we know how to point AI.
An AI aimed at something in particular would be much more dangerous for its level than one not aimed at any particular real-world goal, so a “near-human-level AGI” would be much safer (and we could keep it in the near-human-level zone you mention for longer) if we can’t point it.