You absolutely appeal to common sense on moral issues. Morality is applied common sense, in the Minsky view of “common sense” as an assortment of deductions and inferences extracted from the tangled web of my personal experiential and computational history. Morality is the result of applying that common-sense knowledge base to possible actions in a planning algorithm.
Quantum “immortality” involves a sudden, unexpected, and unjustified redefinition of “death.” That argument works if you buy the premise. But I don’t.
If you are saying that there is no difference between painlessly, instantaneously killing someone in one branch while letting them live in another, versus letting that person live in both, then I don’t know how to proceed. If you’re going to say that, then you might as well make yourself indifferent to the arrow of time as well, in which case it doesn’t matter if that person dies in all branches, because he still “exists” in history.
Now I no longer know what we are talking about. According to my morality, it is wrong to kill someone. The existence of other branches where that person does not die makes not even an epsilon of difference to my evaluation of moral choices in this world. The argument from the other side seems inconsistent to me.
And yes, Star Trek transporters and destructive uploaders are death machines, a position I’ve previously articulated on LessWrong.
You are appealing to a terminal value that I do not share. I think caring about clones is absurd. As long as one copy of me lives, what difference does it make if I create and delete a thousand others? It doesn’t change my experience or theirs. Nothing would change and I wouldn’t even be aware of it.
From my point of view, I do not like the thought that I might be arbitrarily deleted by a clone of myself. I therefore choose to commit to not deleting clones of myself, thus preventing myself from being deleted by any clones that share that commitment.