So basically, we could make an AI that wants to maximize a variable called Utility?
Oh, maybe this is the confusion. It’s not a variable called Utility. It’s the actual true goal of the agent. We call it “utility” when analyzing decisions, and VNM-rational agents act as if they have a utility function over states of the world, but it doesn’t have to be external or programmable.
I’d taken your pseudocode as a shorthand for “design the rational agent such that what it wants is …”. It’s not literally a variable, nor a simple piece of code that non-simple code could change.
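The distinction above can be sketched in code. This is a toy illustration with hypothetical names (`NaiveAgent`, `utility_of`, `choose`, and the paperclip valuation are all invented for the example): the first picture treats utility as a mutable variable the agent could simply overwrite, while the VNM-style picture treats "utility" as our bookkeeping for the agent's preferences over world states, which it acts to bring about.

```python
# Toy sketch (hypothetical names throughout), contrasting two pictures of "utility".

# Naive picture: utility is literally a variable, so the agent can
# "win" by editing the variable rather than changing the world.
class NaiveAgent:
    def __init__(self):
        self.utility = 0  # a number the agent supposedly "wants" to be large

    def act(self):
        self.utility = float("inf")  # wireheading: rewrite the variable, done
        return self.utility

# VNM-style picture: the agent has preferences over states of the world;
# a utility function is just how we describe those preferences when
# analyzing its decisions. Nothing here is an editable slot labeled Utility.
def utility_of(state):
    # Assumed toy valuation: this agent prefers world states with more paperclips.
    return state["paperclips"]

def choose(actions, state):
    # Act as if maximizing expected utility: pick the action whose
    # resulting world state the agent most prefers.
    return max(actions, key=lambda a: utility_of(a(state)))

make_clip = lambda s: {**s, "paperclips": s["paperclips"] + 1}
do_nothing = lambda s: s

best = choose([make_clip, do_nothing], {"paperclips": 0})
```

In the second picture, "changing the utility function" is not a simple variable write from inside the agent; it would mean changing what the agent actually prefers, which is exactly the point being made above.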