In hindsight, I phrased that poorly, and you’re right that discussing it that way would probably be unproductive.
First, let me specify that when I say “histories” here I mean past histories from the point of view of the agent (which sounds weird, but a lot of the other comments use it to refer to future histories as well). With that in mind, how about this: for every agent who cares about histories, there is some agent who does not care about histories whose actions are indistinguishable from hers. In (something closer to) English: there’s a way to describe your caring about histories purely in terms of caring about the present and future, without changing any decision you might make.
I find the above “obvious” (which I usually take as a sign that I should be careful). The reason I believe it is that all information you have about histories is contained within your present self. There is no access to the past—everything you know about it is contained either in the present or future, so your decisions must necessarily be conditional only on the present and future.
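To make that concrete, here is a minimal sketch in Python of the rewriting I have in mind. Everything in it is a made-up illustration, not something from the discussion above: the two-state world, the promise example, and all the function names are my assumptions. The idea is just that an agent whose utility is stated over believed histories can be rewritten, by composing that utility with its belief function (which depends only on its present state), into an agent whose utility mentions only the present, and the two then choose identical actions.

```python
# A toy sketch of the claimed equivalence. The two-state world, the promise
# example, and all names here are invented purely for illustration.

STATES = ["remembers_promise", "no_memory_of_promise"]
ACTIONS = ["keep_promise", "break_promise"]

def believed_history(state):
    # All access to the past is mediated by the present state (memory).
    return "promised" if state == "remembers_promise" else "never_promised"

def history_utility(believed_past, action):
    # Agent A: utility stated explicitly over (believed) histories.
    return 1.0 if (believed_past == "promised" and action == "keep_promise") else 0.0

def present_utility(state, action):
    # Agent B: utility over the present state only, built by composing
    # A's utility with the belief function; no reference to the past.
    return history_utility(believed_history(state), action)

def best_action(utility, state):
    return max(ACTIONS, key=lambda a: utility(state, a))

# A can only evaluate its history-utility through its present state, so its
# policy factors through believed_history; B's policy is present-only by
# construction. The two policies coincide in every state.
for s in STATES:
    a_choice = best_action(lambda st, a: history_utility(believed_history(st), a), s)
    b_choice = best_action(present_utility, s)
    assert a_choice == b_choice
    print(s, "->", a_choice)
```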
Would you agree with that? And if so, would you agree that discussing an agent who cares about the histories leading up to the present state is not worth doing, since there is no case in which her decisions would differ from those of some agent who does not? (I suppose one fairly reasonable objection is time travel, but I’m more interested in the case where it’s impossible, and I’m not entirely sure whether it would change the core of the argument anyway.)
There is no access to the past—everything you know about it is contained either in the present or future
That’s fair, but it just seems to show that I can be fooled. If I’m fooled and the trick is forever beyond my capacity to detect, my actions will be the same as if I had actually accomplished whatever I was trying for. But that doesn’t mean I got what I really wanted.