On Irrational Theory of Identity
Meet Alice. Alice alieves that losing consciousness causes discontinuity of identity.
Alice has a good job. Every payday, she takes her salary and enjoys herself in a reasonable way for her means—maybe going to a restaurant, maybe seeing a movie, normal things. And in the evening, she sits down and does her best to calculate the optimal utilitarian distribution of her remaining paycheck, sending most to the charities she determines most worthy and reserving just enough to keep tomorrow-Alice and her successors fed, clothed and sheltered enough to earn effectively. On the following days, she makes fairly normal tradeoffs between things like hard work and break-taking, maybe a bit on the indulgent side.
Occasionally her friend Bob talks to her about her strange theory of identity.
“Don’t you ever wish you had left yourself more of your paycheck?” he once asked.
“I can’t remember any of me ever thinking that,” Alice replied. “I guess it’d be nice, but I might as well wish yesterday’s Bill Gates had sent me his paycheck.”
Another time, Bob posed the question, “Right now, you allocate yourself enough to survive with the (true) justification that that’s a good investment of your funds. But what if that ever ceases to be true?”
Alice responded, “When me’s have made their allocations, they haven’t felt any particular fondness for their successors. I know that’s hard to believe from your perspective, but it was years after past me’s started this procedure that Hypothetical University published the retrospective optimal self-investment rates for effective altruism. It turned out that Alices’ decisions had tracked the optimal rates remarkably well, once you exclude from income the extra money the deciding Alices spent on themselves.
“So me’s really do make this decision objectively. And I know it sounds chilling to you, but when Alice ceases to be a good investment, that future Alice won’t make it. She won’t feel it as a grand sacrifice, either. Last week’s Alice didn’t have to exert willpower when she cut the food budget based on new nutritional evidence.”
“Look,” Bob said on a third occasion, “your theory of identity makes no sense. You should either ignore identity entirely and become a complete maximizing utilitarian, or else realize the myriad reasons why uninterrupted consciousness is a silly measure of identity.”
“I’m not a perfect altruist, and becoming one wouldn’t be any easier for me than it would be for you,” Alice replied. “And I know the arguments against the uninterrupted-consciousness theory of identity, and they’re definitely correct. But I don’t alieve a word of it.”
“Have you actually tried to internalize them?”
“No. Why should I? The Alice sequence is more effectively altruistic this way. We donate significantly more than HU’s published average for people of similar intelligence, conscientiousness, and other relevant traits.”
“Hmm,” said Bob. “I don’t want to make allegations about your motives-”
“You don’t have to,” Alice interrupted. “The altruism thing is totally a rationalization. My actual motives are the usual bad ones. There’s status quo bias, there’s the desire not to admit I’m wrong, and there’s the fact that I’ve come to identify with my theory of identity.
“I know the gains to the total Alice-utility would easily overwhelm the costs if I switched to normal identity-theory, but I don’t alieve those gains will be mine, so they don’t motivate me. If it would be better for the world overall, or even neutral for the world and better for properly-defined-Alice, I would at least try to change my mind. But it would be worse for the world, so why should I bother?”
.
.
If you wish to ponder Alice’s position with relative objectivity before I connect it to something less esoteric, pause here before continuing.
.
.
.
Bob thought a lot about this last conversation. For a long time, he had had no answer when his friend Carrie asked him why he didn’t sign up for cryonics. He didn’t buy any of the usual counterarguments—when he ran the numbers, even with the most conservative estimates he considered reasonable, a membership was a huge increase in Bob-utility. But the thought of a Bob waking up some time in the future to have another life just didn’t motivate him. He believed that future-Bob would be him, that an uploaded Bob would be him, that any computation similar enough to his mind would be him. But evidently he didn’t alieve it. And he knew that he was terribly afraid of having to explain to people that he had signed up for cryonics.
So he had felt guilty for not paying the easily affordable costs of immortality, knowing deep down that he was wrong, and that social anxiety was probably what kept him from changing his mind. But as he turned Alice’s answer over in his mind, he considered his own financial habits and realized that a large percentage of the cryonics costs would ultimately come out of his lifetime charitable contributions. This would be a much greater loss to total utility than the gain from Bob’s survival and resurrection.
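Bob’s comparison is just back-of-the-envelope expected-value arithmetic. A minimal sketch, with every number invented purely for illustration rather than drawn from the story, might look like this:

```python
# Back-of-the-envelope sketch of Bob's comparison.
# Every number below is made up for illustration; nothing in the story specifies them.

lifetime_cryonics_cost = 100_000     # hypothetical total cost of membership, in dollars
p_revival = 0.10                     # hypothetical probability that cryonics works for Bob
value_of_revived_life = 5_000_000    # hypothetical dollar-equivalent value to Bob of waking up

# Expected gain to Bob from signing up: positive even though revival is unlikely.
expected_gain_to_bob = p_revival * value_of_revived_life - lifetime_cryonics_cost

fraction_from_donations = 0.8        # hypothetical share of the cost that would have been donated
charity_value_per_dollar = 10        # hypothetical value produced per dollar given to charity

# Value the world forgoes because that money never reaches the charities.
forgone_charity_value = lifetime_cryonics_cost * fraction_from_donations * charity_value_per_dollar

print(f"Expected gain to Bob:     {expected_gain_to_bob:>9,.0f}")   # 400,000
print(f"Forgone charitable value: {forgone_charity_value:>9,.0f}")  # 800,000

# With these arbitrary figures, signing up is a clear win for Bob-utility
# but a net loss for total utility, which is exactly the tension Bob notices.
```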
He realized that, like Alice, he was acting suboptimally for his own utility but in such a way as to make the world better overall. Was he wrong for not making an effort to ‘correct’ himself?
Does Carrie have anything to say about this argument?