AI agents are designed relative to an agency abstraction. The very notion of an AI “having a utility function” only has meaning relative to such an abstraction; there is no “real agent” independent of some concept of agency.
Every agency abstraction I know of lets the agent take one of some specified set of actions at each time step, and that set can easily be defined to include a “twitch” action. If you disagree with my claim, try formalizing a natural abstraction that lacks this property. (There are trivial ways to restrict the action set, but even then you could write down a utility function that rationalizes “twitch whenever you can; otherwise take the lexicographically first available action”, as sketched below.)
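To make that parenthetical concrete, here is a minimal sketch (in Python, with hypothetical names and a deterministic setting, not any standard formalism) of a utility function over action histories whose maximizing policy is exactly that degenerate behavior:

```python
# A minimal sketch of a utility function over action histories whose optimal
# policy is "twitch if you can, otherwise take the lexicographically first
# available action". All names here are hypothetical illustrations.

from typing import Sequence

Action = str

def preferred(available: Sequence[Action]) -> Action:
    """The action the degenerate policy picks at a given step."""
    return "twitch" if "twitch" in available else min(available)

def utility(history: Sequence[tuple[Sequence[Action], Action]]) -> int:
    """Score a history of (available-actions, chosen-action) pairs.

    One point per step where the chosen action matches the preferred one,
    so an agent maximizing this utility behaves exactly like the twitch
    policy, whatever the action set at each step happens to be.
    """
    return sum(chosen == preferred(available) for available, chosen in history)
```

The point of the sketch is just that restricting the action set doesn’t help: whatever actions remain available, some utility function rationalizes picking them in an arbitrary, behaviorally trivial way.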