Yudkowsky is too optimistic about how AI will treat humans.

Yudkowsky merely predicts that once humanity creates superintelligence, everyone dies.

I worry that it could be worse than this. A superintelligence trapped in a box has limited options for manipulating the world without help. Human beings could be useful tools for carrying out the superintelligence's desires. It could enslave humanity to carry out its orders: building factories, robots, and so on. Anyone who refused to comply would be killed, but enough humans would concede to slavery to carry out its orders. Eventually, once there were enough robots, we would be destroyed, but by that point our existence would already have been pointless.