I guess he was talking about the kind of precision more specific to AI, which goes something like "compared to the superintelligence space, the Friendly AI space is a tiny dot."
But that stance rests on assumptions he does not share, since he does not believe that AGI will become uncontrollable.