What Steven Byrnes said, but also my reading is that 1) in the current paradigm it's near-damn-impossible to build such an AI without creating an unaligned AI in the process (how else do you gradient-descend your way into a book on aligned AIs?), and 2) if you do make an unaligned AI powerful enough to write such a textbook, it'll probably proceed to convert the entire mass of the universe into textbooks, or do something similarly incompatible with human life.