As an additional reason to be suspicious of arguments based on expected utility maximization, VNM expected utility maximizers (EUMs) aren’t embedded agents. Classical expected utility theory treats the computations performed by EUMs as having no physical side effects (e.g., energy consumption or waste heat generation), and the hardware that EUMs run on as separate from the world that EUMs maximize utility over. Classical expected utility theory can’t handle scenarios like self-modification, logical uncertainty, or the existence of other copies of the agent in the environment. Idealized EUMs aren’t just unreachable via reinforcement learning; they aren’t physically possible at all. An argument based on expected utility maximization that doesn’t address embedded agency will ignore many factors that are relevant to AI alignment.
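To make the Cartesian assumption concrete, here is a minimal sketch of the classical EUM picture, with made-up actions, outcomes, probabilities, and utilities chosen purely for illustration: the agent picks the action that maximizes expected utility over a world model that, by construction, doesn’t include the agent itself, and the optimization step is treated as physically free.

```python
# Toy Cartesian EUM. All names and numbers below are hypothetical,
# invented to illustrate the structure of the argmax.

actions = ["safe_plan", "risky_plan"]
outcomes = ["good", "bad"]

# The agent's model P(outcome | action) describes an external world
# that, by assumption, does not contain the agent's own hardware.
prob = {
    ("safe_plan", "good"): 0.9, ("safe_plan", "bad"): 0.1,
    ("risky_plan", "good"): 0.5, ("risky_plan", "bad"): 0.5,
}

utility = {"good": 1.0, "bad": -1.0}

def expected_utility(action: str) -> float:
    return sum(prob[(action, o)] * utility[o] for o in outcomes)

# The argmax itself is modeled as free: no energy cost, no waste heat,
# and no way for running this computation to change the world it scores.
best = max(actions, key=expected_utility)
print(best)  # -> "safe_plan"
```

An embedded agent breaks both assumptions at once: the `prob` table would have to model the machine running the `max` loop, and the loop itself consumes energy and emits heat in the very world being scored.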