My first reaction is that if I consider the person at t+1 to be someone different, these are the reactions that would make sense:
a) selfish behavior, including selfishness toward the future me. For example, when I am in the shop, I would take the tastiest thing and start eating it, because I want some pleasure now, and I don’t care about the future person getting in trouble.
b) altruistic behavior, but considering the future me completely equal to any future someone-else. For example, I would donate all my money to someone else if I thought they needed it just a little more than I do, because I would simply be choosing between two strangers.
c) some mix of the two behaviors above.
The important thing here is that the last option doesn’t add up to normality. My current behavior is partly selfish and partly altruistic, but it is not a linear combination of the first two options: both of them care about the future me exactly as much as about a future someone-else, while my current behavior doesn’t.
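To make the linear-combination claim concrete, here is a minimal sketch; the symbols ($U_a$, $U_b$, $u_i$, $\alpha$) are my own illustrative notation, not anything from the original comment. Write option (a) as caring only about the present me and option (b) as caring about every future person equally, the future me included:

$$U_a = u_{\text{me}}(t), \qquad U_b = \sum_i u_i(t+1),$$

where the sum in $U_b$ runs over all future people and the future me is just one ordinary term. Any mix $U_c = \alpha U_a + (1-\alpha) U_b$ with $0 \le \alpha \le 1$ still gives the future me the same coefficient $(1-\alpha)$ as every other future person, so no choice of $\alpha$ reproduces a behavior that weights the future me more heavily than a future stranger.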
A possible way to fix this is to assume that I care about future-anyone equally intrinsically, but I care more about future-me instrumentally. What I do now has a larger impact on what my future self will do than on what a future someone-else will do, especially because by “doing” in this context I also mean things like “adopting beliefs”, etc. Simply put: I am a thousand times more efficient at programming my future self than at programming a future someone-else, so my paths to creating more utility in the future naturally run mostly through my future self. However, this whole paragraph smells like a rationalization for a bottom line that was already written.
For me, the obvious answer is b. This is the answer given by all forms of consequentialism that treat all people symmetrically, e.g. utilitarianism. However, you can adopt the “personal identity isn’t real” viewpoint and still prefer people who are similar to yourself (e.g. your future self).