I disagree with it because an agent (such as one using UDT) does not necessarily have memory and the associated concepts of “future experiences” and “past experiences”, but “exist” still seems meaningful even for such an agent.
I confess that I cannot make sense of this without learning more about UDT and your definition of agency. I thought this definition was more basic and independent of the decision theory models one adopts.
Would you say that when I say “X exists,” and an agent A without memory says “X exists,” that I and A are likely expressing the same belief about X?