When I first saw the notion of serious AI danger circulating, without details, I assumed it originated from better and more relevant arguments.
What I see instead are arguments about the general difficulty of some aspects of AI (such as real-world motivation) crafted to suggest an update toward the unlikelihood of only a “friendly AI that genuinely cares about mankind”, but not toward the general unlikelihood of any real-world motivation in an AI, because the person making the argument tells you to update on the former but says nothing about the latter.
This is combined with a theoretical notion of “rationality” that would work for updates on a complete inference graph, but which, on an incomplete inference graph such as the one above, is about as rational as concluding that, by the law of inertia, you’ll just keep on walking after stepping off the top of a 10-story building.
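To put the selective update in symbols (a minimal sketch in my own notation, not anything taken from the arguments in question): let F be “the AI is motivated and friendly”, U be “the AI is motivated and unfriendly”, and A be the observation “instilling real-world motivation in an AI is hard”. The odds form of Bayes’ rule gives

\[
\frac{P(U \mid A)}{P(F \mid A)} \;=\; \frac{P(A \mid U)}{P(A \mid F)} \cdot \frac{P(U)}{P(F)},
\]

so if the difficulty argument bears equally on both hypotheses, i.e. \( P(A \mid U) = P(A \mid F) \), then the odds of unfriendly versus friendly motivated AI do not move at all. Lowering P(F) while silently holding P(U) fixed is exactly the incomplete-graph update described above.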