what we do have in history: hackable minds that were misused to carry out the Holocaust. That could be one way to improve writing about AI danger.
But to answer question 1): it is too broad a topic! (Social hackability is only one possible takeoff path toward AI superpower.)
For example, the book is still missing (and probably will keep missing):
a) How to prepare psychological training for human-AI communication (or for reading this book :P )
b) AI's impact on religion
etc.
This is also one of the points where I don't agree with Bostrom's (fantastic!) book.
We could use an analogy from history: the human-animal pair of soldier + horse needed no physical interface (like in the Avatar movie) and still provided an awesome military advantage.
We could get something similar from better weak-AI tools (probably with a better GUI, though it is not only about the GUI).
“Tools” don't need to have great general intelligence. They could be at horse level:
their incredible power to analyse big structures (a big memory buffer)
the speed it gives the “rider”, with quick “computation” and the “reins” in your hands