For example, given that in outcomes X, Y, and Z, I end up with $100, I still prefer outcome X, where I earned it, to Y, where it was a gift, to Z, where I stole it. We can complicate the examples to cover any other differences that you might (wrongly) suppose explain my preference without regard to history.
I really, really doubt you have preferences over histories. Your preferences are fully summarised by the current world state, with no reference to histories: you prefer not remembering having stolen something, and you prefer remembering having earned it (and having others remember the same). Note that this is a description of the present, not of the past.
To really care about a history, you’d have to construct a scenario like “I start out in a simulation along the lines of Z, but then the simulation is rearranged to a worldstate with X instead. Alternatively, I can be in scenario X all along. I like being in state X (at a point in time after the rearrangement/lack thereof) less in the former case than in the latter, even if no one can tell the difference between them.” And I’m not sure that scenario would even work (it’s not clear that there is meaningful continuity between Z!you and X!you), but I can’t think of a better one off-hand.
Those with simpler theories of the good life often doubt the self-knowledge of those with more complex ones. There isn’t much I can do to try to convince you, other than throw thought experiments back and forth, and I don’t feel up to that. If you’ve already read EY on the complexity of value, my only thought here is that maybe some other LWers will chime in and reduce (or increase!) your posterior probability that I’m just a sloppy thinker.
In hindsight, I phrased that poorly, and you’re right, discussing it that way would probably be unproductive.
First, let me specify that when I say “histories” here I mean past histories from the point of view of the agent (which sounds weird, but a lot of the other comments use it to refer to future histories as well). With that in mind, how about this: the behaviour of any agent who cares about histories is indistinguishable from the behaviour of some agent who does not. In (something closer to) English, there’s a way to describe your caring about histories in terms of caring only about the present and future, without changing any decisions you might make.
I find the above “obvious” (which I usually take as a sign that I should be careful). The reason I believe it is that all information you have about histories is contained within your present self. There is no access to the past—everything you know about it is contained either in the present or future, so your decisions must necessarily be conditional only on the present and future.
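To sketch that a bit more formally (this is my own notation, and I’m glossing over decision-theoretic subtleties about exactly how actions get evaluated): write $h_{<t}$ for the past history, $h_{\ge t}$ for the present and future (including the action you’re about to take), $I_t$ for everything you currently know, and $U(h_{<t}, h_{\ge t})$ for a history-caring utility function. Define a present-and-future utility by averaging over the unknown past:
$$\tilde U(I_t, h_{\ge t}) \;:=\; \mathbb{E}\big[\, U(h_{<t}, h_{\ge t}) \,\big|\, I_t, h_{\ge t} \,\big].$$
Then for any action $a$ available to you now,
$$\mathbb{E}\big[\, U(h_{<t}, h_{\ge t}) \,\big|\, I_t, a \,\big] \;=\; \mathbb{E}\big[\, \tilde U(I_t, h_{\ge t}) \,\big|\, I_t, a \,\big],$$
since conditioning on $(I_t, h_{\ge t})$ refines conditioning on $(I_t, a)$ and the tower property applies. An agent maximising $\tilde U$, which refers only to its current knowledge and to the future, chooses exactly as the agent maximising $U$ does.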
Would you agree with that? And if so, would you agree that discussing an agent who cares about the histories leading up to the present state is not worth doing, since there is no case in which her decisions would differ from those of some agent who does not? (I suppose one fairly reasonable objection is time travel, but I’m more interested in the case where it’s impossible, and I’m not entirely sure whether it would change the core of the argument anyway.)
There is no access to the past—everything you know about it is contained either in the present or future
That’s fair, but it just seems to show that I can be fooled. If I’m fooled and the trick is forever beyond my capacity to detect, my actions will be the same as if I had actually accomplished whatever I was trying for. But that doesn’t mean I got what I really wanted.