You’re arguing against a strawman. Nobody actually believes that their present self and future self are identical.
At the same time, there is more than similarity that ties me to my future. The changes that happen from here to there aren’t purposeless, random changes. They are caused by experiences that have the power to make me voluntarily change my mind. In a way, my future self is an improved version of my present self.
In general, arguments that tell someone what to care about are tricky. You might be dealing with a fundamental value, over which it is useless to argue: you might as well try to convert Clippy to using staples. Even if the value you are questioning is a derived one, you have to correctly identify what it derives from. If Clippy believes in cooperation with humans, your task is first to understand why: does it think that humans are sufficiently interested in paperclips that helping humans is currently the best way to create more paperclips? Only then can you argue that Clippy should care about something else: maybe if squirrels ruled the earth, they would care about paperclips more.
I think you’re (trivially) right that my future self is different from me. I think you’re wrong that I care about my future self primarily because we’re the same.
You’re arguing against a strawman. Nobody actually believes that their present self and future self are identical.
Thanks for your reply. They don’t, but then what do they mean by “I am still the same person”? I think the problem is that most people are confused about this, and I attempted to clear it up.
At the same time, there is more than similarity that ties me to my future. The changes that happen from here to there aren’t purposeless, random changes. They are caused by experiences that have the power to make me voluntarily change my mind. In a way, my future self is an improved version of my present self.
Sure, your future self could be an improved version of your present self, or it could be a worse one. The point, however, is this: on what basis do you invest your efforts now, in the present, toward benefiting a particular group of people, namely these new versions of yourself? My claim is that the mere fact that they are future versions of yourself is not a good reason to prefer them; as far as I am concerned, they are just other people. Yes, they are similar to you as you are now, but why is that relevant?
In general, arguments that tell someone what to care about are tricky. You might be dealing with a fundamental value, over which it is useless to argue: you might as well try to convert Clippy to using staples. Even if the value you are questioning is a derived one, you have to correctly identify what it derives from. If Clippy believes in cooperation with humans, your task is first to understand why: does it think that humans are sufficiently interested in paperclips that helping humans is currently the best way to create more paperclips? Only then can you argue that Clippy should care about something else: maybe if squirrels ruled the earth, they would care about paperclips more.
I didn’t try to argue in favor of any particular fundamental value someone might hold. Whatever fundamental value you have, what I wanted to point out is this: how does that value legitimize giving preferential concern to a certain group? Someone might claim that their fundamental value is concern for their future copies in time. There’s nothing I can say against that, except to ask why: why would someone care about a particular group of people simply because they are that particular group, as opposed, for example, to caring about well-being in general or some other fundamental value?
I think you’re (trivially) right that my future self is different from me. I think you’re wrong that I care about my future self primarily because we’re the same.
This may be. Do you accept, though, that because your future self is different, it isn’t you as you are now? And if so, how do you justify caring about those people? I honestly would like to know.
Draw a circle around your entire series of selves, and call that “me”. I think this fits the human notion of identity a lot better than trying to draw a circle around the infinitesimally-existing “present self”. Treating this “me” as a single entity, it’s suddenly very clear how it could act to benefit itself (again, unlike the “present self”, which can’t do anything to change its state). I think this is a much better first approximation of selfish motivations.
Now you can ask whether this entity should pursue those selfish motivations, or help other such entities instead. All of a sudden we are asking a real question. This is a good sign! Altruism is complicated. If we make a definition that makes us go “What is this thing you call love?” in a deep alien voice, maybe that’s not the best approach to take. Because sure, our perspective can be flawed. But I can’t imagine getting an answer to a problem from a perspective that can’t even understand the problem.