Ideally, we’d want some model where the agent may not exist…
So an agent can consider the possibility of dying, or like ‘I have this model, but I don’t actually exist’?