My personal take is that everything you wrote in this post is correct, and expected utility maximisers are neither the real threat, nor a great model for thinking about dangerous AI. Thanks for writing this up!