The way I took it, the article was meant to bring people to the table regarding AI risk, so there was a tradeoff between keeping the message simple and clear and relaying the strongest arguments. Even though orthogonality and instrumental convergence are important theses, in this context he probably didn't want to risk the average reader being put off by technical-sounding jargon and losing interest. There could be an entire website, in a similar vein to LessWrong, devoted to conveying difficult messages to a culture not attuned to the technical aspects involved.