If you prefer, take the same hypothetical but with me on Mars, choosing whether she stays alive on Earth; or let choice B subject her to an awful fate rather than death.
As I said, it’s still going to be about your experience during the moments until your memory is erased.
I understand what you’re arguing against: the notion that what we actually execute matches a rational consequentialist calculus of our conscious ideals.
I took that as a given, actually. ;-) What I’m really arguing against is the naive self-applied mind projection fallacy that causes people to see themselves as decision-making agents—i.e., beings with “souls”, if you will. Asserting that your preferences are “about” the territory is the same sort of error as saying that the thermostat “wants” it to be a certain temperature. The “wanting” is not in the thermostat, it’s in the thermostat’s maker.
Of course, it makes for convenient language to say the thermostat wants things, but we should not confuse this with thinking it can really “want” anything except for its input and its setting to match. And the same goes for humans.
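To make the thermostat point concrete, here is a minimal sketch (a hypothetical illustration, not any real device’s firmware): the entire “preference” reduces to a comparison between an input signal and a setting, and nothing in the loop refers to the territory at all.

```python
# Minimal thermostat sketch (hypothetical, for illustration only).
# The "wanting" here is nothing but a comparison of two numbers:
# an input signal and a setting. No part of this loop is "about"
# the room, the world, or any territory beyond its own inputs.

def thermostat_step(sensor_input: float, setting: float) -> str:
    """Act so as to drive the input toward the setting."""
    if sensor_input < setting:
        return "heater_on"   # input below setting
    if sensor_input > setting:
        return "heater_off"  # input above setting
    return "idle"            # input matches setting: the only state it "wants"
```

Any talk of this device “wanting the room warm” lives in the head of its maker or observer, not in the comparison itself.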
(This is not a mere fine point of tautological philosophy; human preferences in general suffer from high degrees of subgoal stomp, chaotic loops, and other undesirable consequences arising as a direct result of this erroneous projection. Understanding the actual nature of preferences makes it easier to dissolve these confusions.)