I think this post is great; I’ll probably reference it next time I’m arguing with someone about AI risk. It’s a good summary of the standard argument, and it does a good job of describing the main cruxes and how they fit into the argument. I’d happily argue for 1, 2, 3, 4, and 6, and I think my disagreements with most people can be framed as disagreements about these points. I agree that if any of these are wrong, there isn’t much reason to be worried about AI takeover, as far as I can see.
One pet peeve of mine is when people call something an assumption even though, in context, it’s a conclusion. Just because you think the argument is insufficient to support it doesn’t make it an assumption. E.g., in the second-to-last paragraph:
> The argument, rather, tends to move quickly from abstract properties like “goal-directedness,” “coherence,” and “consequentialism,” to an invocation of “instrumental convergence,” to the assumption that of course the rational strategy for the AI will be to try to take over the world.
There’s something wrong with the footnotes: [17] is incomplete, and [17]–[19] are never referenced in the text.