Because our morality is based on our experiential process. We see ourselves as the same person. Because of this, we want to be protected from violence in the future, even if the future person is not “really” the same as the present me.
Why protect one type of “you” over another type? Your response gives a reason that future people are valuable, but not that those future people are more valuable than other future people.
I’m not protecting anyone over anyone else; I’m protecting someone over not-someone. Someone (i.e. the non-murdered person) is protected, and the outcome that leads to a dead person is avoided.
Experientially, we view “me in 10 seconds” as the same as “me now.” Because of this, the traditional arguments hold, at least to the extent that we believe that our impression of continuous living is not just a neat trick of our mind unconnected to reality. And if we don’t believe this, we fail the rationality test in many more severe ways than not understanding morality. (Why would I not jump off buildings, just because future me will die?)
This ignores that insofar as going back in time kills currently existing people it also revives previously existing ones. You’re ignoring the lives created by time travel.
If you’re defending some form of egoism, maybe time travel is wrong. From a utilitarian standpoint, preferring certain people just because of their causal origins makes no sense.
Where did time travel come from? That’s not part of my argument, or of the context of the discussion about why murder is wrong; the time-travel argument just points out what form non-causality might take. The fact that murder is wrong is a moral judgement, which means it belongs to the realm of human experience.
If the question is whether changing the time stream is morally wrong because it kills people, then the supposition is that we live in a non-causal world, which makes all of these arguments useless; I’m not interested in defining morality for a universe that I have no reason to believe exists.
If you’re not interested in discussing the ethics of time travel, why did you respond to my comment which said
I don’t understand why it’s morally wrong to kill people if they’re all simultaneously replaced with marginally different versions of themselves. Sure, they’ve ceased to exist. But without time traveling, you make it so that none of the marginally different versions exist. It seems like some kind of act-omission distinction is creeping into your thought processes about time travel.
with
Because our morality is based on our experiential process. We see ourselves as the same person. Because of this, we want to be protected from violence in the future, even if the future person is not “really” the same as the present me.
It seems pretty clear that I was talking about time travel, and your comment could also be interpreted that way.
I think we need to limit the set of morally relevant future versions to those that would be created without interference, because otherwise we split ourselves too thinly among speculative futures that almost never happen. Given that, it makes sense to want to protect the existence of the unmodified future self over the modified one.
“I think we need to arbitrarily limit something. Given that, this specific limit is not arbitrary.”
How is that not equivalent to your argument?
Additionally, please explain more. I don’t understand what you mean by saying that we “split ourselves too thinly”. What is this splitting and why does it invalidate moral systems that do it? Also, overall, isn’t your argument just a reason that considering alternatives to the status quo isn’t moral?
Well, the phrase “split ourselves too thinly among speculative futures that almost never happen” would seem to refer to the fact that we have limited time and processing capacity to think with.
“Time travel is too improbable to worry about preserving yous affected by it. Given that, it makes sense to want to protect the existence of the unmodified future self over the modified one.”
Those two sentences do not connect. They actually contradict.
Also, you’re doing moral epistemology backwards, in my view. You’re basically saying, “it would be really convenient if the content of morality were such that we could easily compute it using limited cognitive resources.” That’s an argumentum ad consequentiam, which is a logical fallacy.
You’re probably right about it contradicting. Though, about the moral-epistemology bit, I think there may be a sort of anthropic-bias type argument that creatures can only implement a morality that they can practically compute to begin with.
Your argument is that it is hard and impractical, not that it is impossible, and I think that only the latter is a reasonable constraint on moral considerations. Even then, I have some qualms about whether nihilism would be more justified than arbitrary moral limits. I also don’t understand how anthropic arguments might come into play.