I think the kind of AI likely to take over the world can be described closely enough as an expected utility maximizer. Certainly for the kind of aligned AI that saves the world, it seems likely to me that expected utility is a sufficient framework for how it thinks about its impact on the world.
What observations back this belief? Have you seen approaches sharing key characteristics with expected utility maximization that have worked in real-world situations, where you expect the characteristics that made them work to transfer? If so, would you be willing to elaborate?
On the flip side, are there any observations you could make in the future that would convince you that expected utility maximization will not be a good model to describe the kind of AI likely to take over the world?